9

Assessing Facilitation Quality

What’s Inside This Chapter

• How to ask learners for feedback on facilitation quality

• The observation methods used to obtain feedback

• How to analyze performance on activities, tests, and assessments to obtain feedback

• Level 1 and Level 2 evaluations for facilitator feedback

It’s late afternoon of the first day of a two-day program and you want to know how you’re doing. It is natural—and professional—to want to know how things are going. After all, it’s your responsibility to make learning happen.

So, is it happening? How can you find out?

Kirkpatrick (1994) indicates that there are four levels of evaluation: reaction (Level 1), learning (Level 2), behavior (Level 3), and results (Level 4). For facilitators, Levels 1 and 2 provide feedback within the learning experience. Level 1 is a measure of learner satisfaction; that is, how participants react to their experience. Level 2 addresses whether learning took place. Was there a knowledge shift? As a result of the learning, did skills increase? Can the learner demonstrate the new behaviors in the learning experience? You can develop instruments to solicit feedback and assess whether learning took place. These instruments are part of the design and development process.

Noted

Effective facilitators are less concerned about whether the learners are having a good time than about whether learning is happening. Fun does not equate to learning or to application on the job. Feedback should be sought to ensure that learning is always foremost, but the facilitator must keep in mind the limitations and strengths of the various ways of eliciting feedback from the learners.

Asking for Feedback

A facilitator can ask the group or individual learners for verbal or written feedback. When you ask learners for feedback, you are making some assumptions:

• They are willing to give feedback.

• They know enough about facilitation to provide meaningful feedback.

• You know how to ask or frame the questions that will provide meaningful feedback.

• You are willing to make changes based on the feedback.

• The feedback received is representative of a larger group of learners participating in this delivery.

Basic Rule 42

If you are not going to act on the feedback, do not solicit it.

When giving verbal feedback, learners often have a difficult time providing honest, constructive feedback. There is group pressure not to be too critical, and when asked, they may respond, “Things are going fine,” or “Everything is going great.”

In some cases, they point out things beyond your control, such as “PowerPoint slides need to be more creative,” “Use a different color or larger text on the slides,” “Guides have misspelled words,” or “Content is not relevant to my job.” These issues are important because they influence the learning experience, but such feedback doesn’t help you become a better facilitator. Take it to the course designers so they can make the appropriate changes. Make sure these things get fixed so you always deliver a quality learning program. You may also want to ask those in IT who support the course for any feedback and act accordingly on that information.

Basic Rule 43

In general, verbal feedback does not provide reliable or useful information.

Here’s an example of an organization that asks for verbal feedback as its end-of-course evaluation. Learners are asked to rate the course from 1 to 10 (with 10 being the highest), and they are also asked to give specific comments. The facilitator always starts with a learner who seems to be having a good experience. This learner’s answer usually nets a high rating and positive comments. As subsequent learners respond individually, the peer pressure to maintain the comments and rating is significant. Does this evaluation provide good and reliable information? No! Does it result in good ratings for the facilitators and program? Yes! Does it produce valuable feedback? No!

Verbal feedback is further complicated by your role as facilitator, which is one of perceived influence and power. This misperception limits your ability to get honest feedback. In short, learners don’t want to say anything that may result in a bad perception of them. (By the way, the amount of influence and power learners think facilitators have with the learners’ managers is surprising. Would that it were so!)

By using a form of written, anonymous feedback, you improve your chances of getting reliable information, but not by much. If questions are open-ended, you must frame them so there is little room for interpretation; however, such questions are difficult to write. Once the feedback is received, you must then read each survey and do some kind of thematic analysis to see where things are going well and where changes are required. If you develop a scaled-response instrument (statements with a rating scale, say, from 1 to 4 with a descriptor for each rating number), you must contend with rater bias (learners’ tendency to rate high) and still identify the areas that are going well and not so well. And, unless you add places for comments, this approach provides only a relative score, which you must then interpret and use to determine what actions to take. For the online and virtual classroom, end-of-course forms can be completed and sent back electronically. Forms with rating scales can be summarized and tabulated very quickly; however, as with a more traditional paper form, you will still have to determine how to interpret the results.
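Tabulating a scaled-response instrument amounts to averaging each item across learners and flagging the items that score low. The sketch below is a minimal illustration, assuming a 1-to-4 rating scale; the item names and responses are hypothetical, not from any real instrument.

```python
from statistics import mean

# Hypothetical responses: one dict per learner, mapping item -> rating
# on a 1-4 scale (4 = strongly agree). Item names are illustrative only.
responses = [
    {"objectives_met": 4, "pace": 3, "facilitator_clarity": 4},
    {"objectives_met": 3, "pace": 2, "facilitator_clarity": 4},
    {"objectives_met": 4, "pace": 2, "facilitator_clarity": 3},
]

def summarize(responses, flag_below=3.0):
    """Average each item across learners and flag low-scoring items."""
    items = responses[0].keys()
    summary = {item: mean(r[item] for r in responses) for item in items}
    flagged = [item for item, avg in summary.items() if avg < flag_below]
    return summary, flagged

summary, flagged = summarize(responses)
# "pace" averages about 2.3, below the flag threshold, so it is the
# area to investigate; the other items average above 3.
```

Even with the arithmetic automated, you still must interpret the flagged items: a low score on pace, for example, could mean the course design is too dense or that the facilitation lagged.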

Noted

In many cases, the course designer or developer will provide you with an interim evaluation instrument that you can use to gather feedback.

Observation

Peer observation is a good method for you to get feedback on your facilitation skills. You can use the facilitation and presentation assessment instrument (Exercise 3-2) and ask a trusted peer to observe part of the course you’re facilitating. You can also ask a peer to join a virtual course for observation. Because her name will appear on the class roster, you will want to explain to the group why a new participant has joined. If you use this approach, remember these things:

• The peer evaluator must be a high-performing facilitator.

• The peer evaluator must be familiar with the evaluation instrument.

• The peer evaluator must have your developmental interests in mind.

• You must have confidence in your peer’s evaluation of you.

• You must not facilitate differently when the peer evaluator is observing you.

• You must be willing to use the feedback.

Think About This

Some training rooms have an observation booth where others can unobtrusively view the facilitator and learners. This feature provides an excellent opportunity to reduce the bias that would be introduced if you tend to perform differently when you know you’re being observed.

Another option is to videotape or record your facilitation. You can tape or record the entire learning event or particular modules you are concerned about. You then have the option of reviewing the video or recording (which includes the media) by yourself or with a trusted peer. You could also combine this activity with the use of an objective instrument (such as the one in Exercise 3-2) to help carry out a complete review with less subjectivity.

Level 1 Evaluation

End-of-course evaluations are seen by some as “smile sheets” that provide little value. Others see them as an opportunity to gain insights into program and facilitator strengths, as well as areas needing development and revision. The factors making the Level 1 evaluation more usable include:

• Is it customized to the course and particular delivery?

• Are the learning objectives included?

• Is there sufficient detail to make a decision regarding program content, facilitator skills, logistics, technology, and so forth?

• Is the information used for feedback to the facilitator, program administrator, or logistical coordinator?

• Is there a separate section relating to the facilitator’s skills?

• Are the facilitator’s technical skills (use of technology) assessed?

• Are the facilitator’s feedback skills and practices assessed?

• Are all areas of the instrument relevant?

All of these issues affect the relevance and use of the Level 1 evaluation instrument, which can provide feedback to the facilitator. First, the questions concerning course learning objectives tell you the extent to which the learners say the course objectives were met. This assumes that the instrument uses a scaled response type. If a large proportion of the learners respond that certain objectives were not met, this indicates a lapse in content or facilitation of the content.

A little investigation can tell you which of these factors needs to be addressed. Many times, facilitators who fall behind schedule make up time by skimming through content. In other cases, the content is more difficult for the facilitator and therefore not presented as well to the learners.

Basic Rule 44

Learner feedback on how well course objectives were met provides insights into the quality of facilitation.

The other critical area on the Level 1 evaluation for you as a facilitator is the section dealing directly with your capabilities. Here are several areas related to facilitators that you will find on many Level 1 instruments:

• promoted an environment of learning

• presented clearly to assist the participants’ understanding

• demonstrated knowledge of the subject matter

• provided feedback effectively (complete and timely) to participants

• responded well to questions and in a timely manner for online learners

• presented content in an appropriate sequence

• promoted participant discussion and involvement

• kept the discussion on topic and activities on track; helped online learners stay on schedule

• coached participants on learning activities

• used technology to enhance learning

• provided detailed instructions for online learners

• was available to online learners for coaching or clarification of assignments or course content.

As you can see, feedback at this level of detail can pinpoint areas for facilitator development or areas of strength to be maintained. The intent of feedback is to identify areas in which to hone your skills. This includes not only areas for development, but also areas in which you perform well, so you can continue to strengthen those skills.

Think About This

The use of Level 1 evaluation is delivery specific. Content, design, or facilitation changes should not be made based on the feedback for only one or two deliveries. Instead, collect Level 1 evaluation data for several courses you facilitate and look for trends. Gathering more information increases reliability for decision making.

Noted

Other areas that are addressed in many Level 1 evaluations include course content (of which objectives are a part), course methodology, environment, and course administration. Many Level 1 instruments also have a place for general comments and an item asking whether the learner would recommend the course to a peer.

Level 2 Evaluation

Level 2 evaluation depends on activities, tests, and assessments to evaluate whether participants have learned (shift in knowledge) and can demonstrate the skill or behavior within the learning experience. Level 2 is relevant to you because as a facilitator you are responsible for making sure learning happens.

Performance on Activities

Learners’ performance on learning activities can provide you with feedback as to how well you are explaining the course content or instructions.

If learners are struggling with providing accurate and complete responses to the activities, it may be a sign that the content was not covered adequately. If there is a content issue, you may need to revisit that part of the material to ensure that adequate learning takes place.

If they are asking questions regarding the meaning or application of the content, there is the real possibility that they lack the knowledge or skill to complete the activity. Likewise, if several learners are asking questions about what is expected of them, the instructions were probably not adequately explained or posted. These responses are feedback on your facilitation.

Performance on Tests

The learners’ performance on knowledge tests is a clear indication of the extent that they are grasping the material. By doing a quick item analysis, you can isolate the content area where learners have the most difficulty. Could it be a test question format? Yes, it could. Could it be that the course content was not included in the leader’s guide? Yes, it could. Could it be that the facilitator did not adequately cover, explain, or teach the content? Yes, it could. In any case, the learners have not acquired the content. Your job is to determine the source of the problem and correct it.

Knowledge tests in the online and virtual environments can be quickly given and graded, even providing an item analysis.

Performance on Assessments

Observation with supporting checklists can also be used to evaluate learners’ performance on case studies, role plays, and other assessments. In the online and virtual environments role plays can be assessed much like in a face-to-face classroom if learners have webcams. Role plays can also be videotaped for later analysis, but this causes unnecessary time delays. The important thing is to find out where the areas of difficulty are and what the causal factor is. Then, address the cause.

Noted

Item analysis is a method by which you examine each item on the test or instrument and count the number of correct and incorrect answers to that item. For example, for a multiple-choice test where there are 20 learners and the answer to question 5 is A, you count the number who responded A and the number responding B, C, or D. If several learners miss the same question or provide the same wrong answer, this indicates that you need to give more attention to that content area.
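The counting described above is straightforward to automate. The sketch below follows the chapter’s example of 20 learners and a question 5 whose correct answer is A; the response data are invented for illustration.

```python
from collections import Counter

# Hypothetical answer sheet: 20 learners' answers to question 5,
# whose correct answer is A (per the example in the text).
answer_key = {"q5": "A"}
responses_q5 = ["A"] * 12 + ["C"] * 6 + ["B", "D"]

def item_analysis(answers, correct):
    """Count each answer choice and report how many got the item right."""
    counts = Counter(answers)
    return counts, counts[correct]

counts, n_correct = item_analysis(responses_q5, answer_key["q5"])
# Twelve learners answered correctly, but six chose the same wrong
# answer (C) -- a signal to give that content area more attention.
```

A pattern like six learners converging on the same distractor is exactly the signal item analysis is meant to surface; a scatter of different wrong answers points instead at a confusing question format.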

Getting It Done

Developing ways to collect and identify feedback is a skill unto itself. Use Exercise 9-1 to identify some items on an evaluation instrument that you might use to collect feedback on your facilitation skills.

Exercise 9–1. Selecting Elements for a Level 1 Evaluation Instrument

Below is a sample Level 1 evaluation instrument. Highlight the sections you think are important and would like to use for your Level 1 evaluation.

Now, turn to Exercise 9-2 to create your own evaluation instrument and enhance your ability to design ways to collect and use feedback on your facilitation skills.

Exercise 9–2. Developing an Evaluation Instrument

In the space below, try your hand at developing a written instrument to solicit feedback from learners. Assume that you are about halfway through a three-day program. Be sure to cover such topics as pace, sequence of the content, skill and knowledge acquisition and application, value of activities, various aspects of your skills as a facilitator, and so forth.
