7
What Is Expected Must Be Inspected: Assessing and Evaluating Hands‐On Learning

One summer, as a teenager, I worked with several other young people to help restore an old farmhouse into what would become a youth camp. Our supervisor, Mr Abuhl, was an older gentleman who had volunteered his time and talents and was intent on improving our renovation skills.

Many people were surprised at the quality of work Mr Abuhl was able to produce with a few 15–17‐year‐old teenagers. Those of us who worked for him were not surprised at all. His success came not by cajoling more work out of us, but by constantly evaluating the work that we did. He would meticulously inspect our work to confirm that we were meeting his stringent (at least from a teenager’s perspective) standards. “Work that is expected must be inspected,” he would say. It was his way of telling us that his re‐measurements and careful scrutiny of our work were not because he didn’t trust us, but because detecting small deviations now could prevent larger problems later.

The same is true of training. Whenever you provide learning, you must verify that the training did what you intended it to do. If you don’t, no one may notice for a while, but eventually your training will be deemed ineffective. And even if your training is amazingly effective, or generates tremendous revenue, few will care unless you can prove it.

In the 1950s, Donald Kirkpatrick suggested four areas that must be evaluated in any training program. They have become known as Kirkpatrick’s evaluation model.1 First, trainers should measure the students’ reaction to the training. The traditional way to do that is through what we sometimes call smile sheets: surveys given right after class to see whether students liked the training. Second, you should determine whether the individual students can meet the objectives of the course. The traditional way to conclude that is through a written test or exam. Third, he suggests that instructors must measure behavior, or the change in the students’ ability to do something specific. The traditional way to do this is through follow‐up and on‐the‐job analysis after the training. Finally, trainers should validate that the class itself is producing the business results you want. The traditional way to do that is by demonstrating that you brought in more money than you spent.

Because proficiency training must include a change in behavior (otherwise, no real learning has occurred), I will combine levels 2 and 3. You must evaluate the course (level 1), you must assess the individual (levels 2 and 3), and you must measure the value obtained (level 4). All three are important to validate success.

It is a mistake to assume the Four‐Level Training Evaluation Model is sequential. There are four different things being measured, but they don’t always happen in that order. As I stated earlier, you must start with the end goal in mind. You may assess the individual during a class and survey students after it. You may know what the business value is even before the students have had time to demonstrate a change in behavior.

Earlier chapters dealt with evaluating the business goal of the class. This chapter deals with evaluating the class and assessing the individuals.

Assessing the Individual

In product training, there are two main types of individual assessment to consider: you should assess the student’s overall knowledge, and you should assess their skill level. Both are important in determining whether you have increased your students’ proficiency with your product. This is especially true if any kind of proficiency certification will be granted.

Assessing Their Knowledge

Quizzes and exams are the most common ways to assess general knowledge. Both can help you determine whether students were able to process the material that you covered. Writing good test questions is a skill, and you may need to seek additional help. There are many resources available for learning how to write good quiz and exam questions; some tools, like Questionmark.com, offer whitepapers and other aids for creating effective exam questions. This chapter is not a comprehensive guide to creating a quality quiz or exam.

Quizzes

Quizzes should be short and should generally follow the module or lesson immediately, often using the same wording used in the lesson. The goal of a quiz is to make sure that students are keeping up with the information flow. Quizzes help you, the instructor, know whether you need to cover something again, whether you need to slow down, and whether your students are with you. Quizzes can serve multiple purposes, such as engaging students or preparing them for a later exam. Quizzes can even be given merely to take attendance. Whatever the reason, you should know what it is.

When you ask a quiz question, you should always review the answers with the class. Make sure that all of the students know the correct answer before moving on. I rarely grade a quiz, though students may see a similar question in the exam.

One way to include regular feedback, or quizzing of the students, is by using an electronic polling tool. A good polling tool or mobile application allows for all students to answer questions. Using these types of engaging tools will increase your effectiveness by adding interaction with all of your students. Another benefit is that it will force you to slow down and ask more questions.

Exams

Exams, or tests, are a more formal method of measuring a student’s comprehension. Remember that some students are better test takers than others. Also, if you don’t carefully evaluate your questions and requirements, there may be little to no correlation between a student’s exam score and their skill level. In some cases, it may be necessary to forgo a written exam altogether and concentrate on a competence or skill exam.

The best way is to use both. It is very likely that your driver’s license required both a written exam and a practical exam. There are things like road signs and traffic laws that are important for you to demonstrate knowledge of, even if you are an excellent driver. The same is often true of your products.

If you do choose to create a written exam, here are a few tips that can help.

About Creating Exam Questions

There are a number of question formats you can use, and all have their strengths and weaknesses. There are also a number of things you should do when writing exam questions. Here are a few of them:

  1. Get lots of feedback. Whichever type of question you ask, it is important to get as much feedback as possible before using the question to affirm comprehension. Ask for feedback from subject matter experts and non‐experts alike. It is important to verify both the clarity of the questions and the clarity of the answers. Getting as much input as possible will help you do that.
  2. Keep records of your exams and the results. If students consistently get certain questions wrong, it could indicate a weakness in the curriculum or a poorly worded question. Questions that are always answered correctly may not need to be asked at all. One way to confirm that is to keep track of the questions and answers over a period of time.
  3. Include questions from each of your main objectives. Make sure they are relevant. It is discouraging to students when instructors ask questions about trivial data that has no bearing on their proficiency with your product.
  4. Ask questions that verify understanding. Too many tests merely determine which students are better at recalling information. If it’s not important, don’t ask it. No one says your exam has to have a certain number of questions.
  5. Use multiple types of questions. While a multiple‐choice exam may be the easiest to grade, it may not be the best way to verify understanding.
  6. When you do use multiple‐choice questions, however, make all the distractors plausible. Distractors are the wrong answers in a multiple‐choice question. Consider this example in a safety exam:
    If the fire alarm goes off, what is the proper response?
    A. Calmly exit the building using the nearest exit and proceed to your designated assembly area.
    B. Ignore it; they’re usually false anyway.
    C. Hide under the desk and scream.
    D. Call or text a friend to find out what you should do.

    Answers B, C, and D are the distractors. None of them are plausible. This question, at least with these answers, is not a good test of comprehension.

  7. If your class covers multiple days and a significant amount of material, time the exam but make it open book. This will make it clear that knowing how and where to get the information is more important than being able to recall it from memory.
  8. Avoid trick questions. Your goal is not to see who is smart enough to pass the test. You need to encourage students to study for the right reason: to become proficient in your product, not to prove that they can outwit you or their classmates. Trick questions send the wrong message, indicating that the goal is to pass the exam, not to comprehend the material.
  9. If possible, weight the questions and even the answers. I like to use a tool that allows me to assign different values to the questions. Not all questions are equal, and not all answers to a question are equally wrong. This provides flexibility in asking simple or very difficult questions, or in including pilot questions and not grading them at all (see the scoring sketch after this list).
  10. Make the test part of the learning. While an exam may come at the end of a learning event, you should not presume that the learning has ended. The exam is an extension of the learning. If students struggle with a question or concept, use the exam to help them. Give students reports on their exams so that they can improve in the areas they need to, even if they scored high enough to pass the class.
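
To make tip 9 concrete, here is a minimal sketch of weighted scoring in Python. The question IDs, point values, and partial‐credit fractions are all invented for illustration; a commercial exam tool exposes weighting through its own interface rather than through code like this.

```python
# A sketch of weighted exam scoring, as described in tip 9 above.
# Each question carries its own point value, and each answer choice can earn
# a fraction of that value, so not every wrong answer scores zero.
# All question data here is hypothetical.
QUESTIONS = {
    "q1": {"points": 5,  "credit": {"A": 1.0, "B": 0.5, "C": 0.0, "D": 0.0}},
    "q2": {"points": 10, "credit": {"A": 0.0, "B": 1.0, "C": 0.25, "D": 0.0}},
    "q3": {"points": 0,  "credit": {"A": 1.0, "B": 0.0}},  # pilot question, ungraded
}

def score_exam(responses):
    """Return one student's weighted score across all questions."""
    total = 0.0
    for qid, question in QUESTIONS.items():
        answer = responses.get(qid)
        total += question["points"] * question["credit"].get(answer, 0.0)
    return total

print(score_exam({"q1": "B", "q2": "B", "q3": "A"}))  # 5*0.5 + 10*1.0 + 0 = 12.5
```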

About Administering the Exam

If possible, use a tool that allows for flexibility when writing your exams. Don’t be afraid to use an online tool just because your class is offered in a traditional setting. Online exams are not exclusive to eLearning. They can just as easily be administered as part of a classroom learning event.

A good testing tool will allow you to ask multiple types of questions. Another important feature is question pooling. Pooling means that you create multiple questions for each objective and then ask for a random selection from each of the “pools.” For example, if you have five objectives, you might create ten questions for each objective. Those questions are stored in separate folders. When you create the exam, you may require two random questions from each folder, or pool, for a total of 10 questions. This means that no two students will get exactly the same exam, which helps to eliminate cheating.
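
To illustrate pooling, here is a minimal sketch in Python, assuming the five‐objective, ten‐questions‐per‐pool example above; the pool names and question labels are placeholders, and a good testing tool will do this drawing for you.

```python
import random

# One pool of candidate questions per course objective (contents hypothetical).
POOLS = {
    f"objective_{n}": [f"O{n}-Q{i}" for i in range(1, 11)]  # ten questions each
    for n in range(1, 6)
}

def build_exam(draws_per_pool=2):
    """Draw a fresh exam: two random questions from each of the five pools."""
    exam = []
    for pool in POOLS.values():
        exam.extend(random.sample(pool, draws_per_pool))
    random.shuffle(exam)  # avoid presenting questions in objective order
    return exam  # 5 pools x 2 draws = 10 questions, different for each student

print(build_exam())
```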

Don’t feel like you have to hand out a certificate at the end of every class. This works fine when the certificate merely awards attendance, but it rarely works in a certification setting or when not all students are guaranteed to pass the course. Many online programs allow students to print their certificates, which may be the answer. Another answer is to simply mail certificates after you have had a chance to review exam and lab scores. While this creates an extra administrative step, it allows you another opportunity to connect with your students.

Assessing Their Skills

If you are not going to administer an exam, you may choose to use one or more of the activities you designed in step 5 of the 4 × 8 Proficiency Design Model (see Chapters 8 and 9) as a skill assessment. These types of assessments can add significant value to your training, but they require significantly more effort to administer. If you use a skill assessment, it is important to carefully define the skill you are assessing. Otherwise, you will be left guessing, or hoping that students actually did the required work.

You will also need what educators refer to as a grading rubric. You don’t need to call it anything fancy, but you do need one. A rubric is simply a measuring guide, often in the form of a spreadsheet, that ensures consistency and impartiality in your grading. A good rubric will still allow for some flexibility without being too rigid. The example in Table 7.1 shows a very simple rubric, where the skill being assessed is a student’s ability to upgrade a software program.

Table 7.1 Rubric.

Skill: Software upgrade

Points  Performance
1–3     Upgraded from v.2.3 to 3.0 with significant help or in greater than 10 minutes
4–6     Upgraded from v.2.3 to 3.0 with moderate help or between 5 and 10 minutes
7–10    Upgraded from v.2.3 to 3.0 with no help in under 5 minutes

Notice both the flexibility and the detail: the combination of subjectivity and objectivity. This guards against the extremes but still allows instructors to weigh in on the nonmeasurable items. “Significant” and “moderate” are both subjective measurements. What the students are doing (upgrading from version 2.3 to 3.0) is very specific. The timing has elements of both, giving you a little flexibility without requiring a new column for each possible minute.

A grading matrix is not only a helpful tool; it may be a requirement if you decide to create a certification course. Even if an instructional designer is creating the grading matrix, you will likely need to assist them. You are the subject matter expert, and you know what should be measured and what shouldn’t. It is easy, however, to get caught up in too many details. Keep your grading as simple and practical as possible; a tool that is too complicated to use never gets used. The end result should be a consistent determination of who is qualified to do the job and who needs more practice.
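
As one way to keep that determination consistent, here is a small sketch that encodes Table 7.1 as data; the structure and function names are illustrative assumptions, not something a rubric requires.

```python
# Table 7.1's point bands encoded as data, so every grader maps the same
# awarded points to the same performance band. Each band is (max_points,
# description); the grader still judges subjective terms like "moderate help".
RUBRIC = {
    "software_upgrade": [
        (3,  "significant help, or more than 10 minutes"),
        (6,  "moderate help, or between 5 and 10 minutes"),
        (10, "no help, in under 5 minutes"),
    ],
}

def band(skill, points):
    """Return the performance description whose band contains the awarded points."""
    for max_points, description in RUBRIC[skill]:
        if points <= max_points:
            return description
    raise ValueError(f"{points} exceeds the top band for {skill}")

print(band("software_upgrade", 5))  # "moderate help, or between 5 and 10 minutes"
```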

Creative Assessments

As long as you stay consistent in measuring students, you can get creative in your assessments. A practical exam may simply consist of a series of troubleshooting problems that students must complete. It may include a checklist of items for students to self‐grade their capabilities in certain areas. In many product training settings, these types of assessments are fine to use, as long as you don’t treat some students more favorably than others.

There are other ways, of course, to test your students. Exams don’t have to be a series of written questions. They can be delivered orally, or they can take the form of a paper or thesis. Remember, however, that this is product training, not a graduate degree!

Combining the Grades

Whenever possible, don’t use the knowledge assessment alone as the pass/fail indicator for the course. If only one assessment can be used, it should be the competency exam, but generally the grades should be combined. There is no set formula or weight for combining a written exam and a practical exam; it all depends on your product. This is one of the reasons why you, the expert, are involved in the training on your product. What is important, however, is consistency. You must be able to apply the same rules to everyone.
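
Because there is no set formula, the following is only an illustration of one possible combining rule; the 60/40 split and the 70 percent threshold are invented values, not recommendations.

```python
# One hypothetical combining rule: weight the practical (competency) exam
# more heavily than the written exam, then apply a single pass threshold.
# Whatever values you choose, apply the same rule to every student.
PRACTICAL_WEIGHT = 0.6
WRITTEN_WEIGHT = 0.4
PASS_THRESHOLD = 70.0  # combined percentage required to pass

def passes(written_pct, practical_pct):
    """Combine the two percentage scores and apply the shared threshold."""
    combined = WRITTEN_WEIGHT * written_pct + PRACTICAL_WEIGHT * practical_pct
    return combined >= PASS_THRESHOLD

print(passes(written_pct=65, practical_pct=80))  # 0.4*65 + 0.6*80 = 74.0 -> True
```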

Take the time to write out the process, and make sure it will work for your most experienced trainees as well as for students new to your product. If you decide to allow past experience to count in place of a practical assessment, be sure to define the expectations. Keep the process as simple as possible, but detailed enough to maintain a fair standard.

Evaluating the Class

Evaluating whether students pass a class is easy enough. Evaluating whether the class is really teaching the right things, and helping students improve their performance and proficiency, is quite another matter.

If you are like I was for many years, you may administer surveys because you’re expected to, not because you see any true value in them. The results can be confusing, and it is difficult to discern what is good from what is bad (is a 4.2 average bad and a 4.4 average good?). You may be a better judge than your students of whether a class went well. When this is true, you will be tempted to throw out the results or even stop doing surveys altogether. There is no need, it seems, to waste your students’ time by forcing them to quickly fill out one more thing before they can leave.

But if you truly want to make every class better than the last one, you must know what needs to be improved. Continuous improvement doesn’t happen without a structure and plan. Eventually, you will need some evidence of improvement, or at least some ideas for areas to improve on.

The problem lies in the typical Likert‐like survey question, as shown in Table 7.2. In the training industry, this type of survey is often referred to as a “smile sheet.” It merely asks the students if they liked certain things about the class or not.

Table 7.2 Likert‐like survey.

On a scale of 1–5 (5 being best), please rate the following:

Content of the class               1  2  3  4  5
Length of the class                1  2  3  4  5
Instructor’s presentation skills   1  2  3  4  5
Others                             1  2  3  4  5

Kendall Kerekes and John Mattox of CEB2 suggest ditching useless “smile sheets” that merely ask whether a student liked the training. In their place, they recommend “SmartSheets”: evaluations that predict performance improvement. They offer these alternative questions:

  • Did you learn new knowledge and skills?
  • Will you be able to apply them?
  • Will your performance improve due to training?
  • Will you have managerial support?
  • Will your improvement improve the performance of the organization?

Those are big improvements. They begin to move out of the reactionary and toward application. In his book Performance‐Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form,3 Dr Will Thalheimer suggests, as the title indicates, an even more radical approach to gathering feedback. If you are involved in writing survey questions for your training, I strongly recommend the ideas he promotes, all backed by research.

Dr Thalheimer highlights several issues with traditional smile sheets and with the Likert‐like scales they usually use.

Regarding smile sheets:

  • They are based on subjective inputs.

    Problem: While subjective input can be helpful, it can be inaccurate. We should, according to Thalheimer, be skeptical of the findings. (I guess I was right to be skeptical!) What one person feels they “agree” with, another will mark as “strongly agree,” even though they both feel the same way.

    Solution: Instead of using subjective answers, give the student objective options, or at least more descriptive answers, as in Table 7.3.

  • They are usually given immediately after training and in the context of the training environment.

    Problem: Learners are more biased toward training that is still fresh in their minds. They will also better recall the learning because they are still in the same context they learned it in.

    Solution: Survey students more than once. Ask them immediately, to capture information that you might otherwise miss, and again after a delay, to eliminate the bias created by the recency of the training.

  • They often ask the wrong questions.

    Problem: If this is true in environments run by professional trainers, it is even more true in environments where subject matter experts are leading the training.

    Solution: This is the problem that Kerekes and Mattox were dealing with, and one answer is to change the questions, as they have suggested.

Table 7.3 Evaluation question (question developed by Will Thalheimer, PhD, of Work Learning Research, Inc., based on his book Performance‐Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. SmileSheets.com.).

Now that you’ve completed the course, how able are you to TROUBLESHOOT the MOST COMMON ISSUES related to the H‐Inx Distributed Antennae System (HIDAS)?

  1. I am NOT YET SKILLED in troubleshooting the most common HIDAS issues.
  2. I have GENERAL AWARENESS of HIDAS troubleshooting, but I WILL NEED ADDITIONAL GUIDANCE to ensure a complete and reliable fix.
  3. I CAN HANDLE A LARGE PERCENTAGE of the most common HIDAS troubleshooting situations, but I will need ADDITIONAL GUIDANCE to ensure I can fix all of the most common issues.
  4. I CAN HANDLE ALL of the most common HIDAS troubleshooting situations, but I still need ADDITIONAL GUIDANCE on LESS COMMON issues.
  5. I CAN HANDLE ALL HIDAS troubleshooting situations, including the most common and the least common issues.

Regarding Likert‐like measurement scales:

  • They create poor decision making. Thalheimer argues that when students answer survey questions, they are making decisions. Having to choose between “very satisfied” and “extremely satisfied” is not a meaningful decision.
  • They create ambiguous results. I can attest to that! I’ve seen bad instructors and ineffective training get good results, and great training get questionable results. Give a bad training class in Hawaii and you are practically guaranteed to get “highly recommended” results!

Table 7.3 offers a sample question, reprinted here with permission from Dr Thalheimer.

Evaluating Perceptions

To be clear, measuring reactions is not always bad. In fact, from a marketing perspective, it is a good thing to do. If you are training customers, for example, you want to make sure that they enjoyed the time that they committed to you and that they will return or send others back. Maybe the coffee you serve is that important to them. But measuring reactions captures the student’s immediate impression. You must do more than just measure immediate impressions. You must measure the change you’ve been able to make in their behavior and/or performance.

Ask questions that force the student to move beyond reaction to application. Even if the questions themselves are not as objective as an exam question, the data is still helpful for predicting the effectiveness of the training, if you ask the right questions.

Here are a few additional tips about survey questions. These come from my own experience and are subject to change, depending on your circumstances. For professional help creating questions that will drive performance change, you should engage a company like CEB or Work Learning Research that specializes in gathering the right data.

  • Ask as few questions as possible. Don’t ask fewer than you need to, but don’t ask any more than you must. Too many questions will make the students rush through the answers.
  • Ask everyone. While fewer questions are a good thing, fewer people are not. If you want to start building meaningful data that can drive more effective training, ask everyone who participates in your training to fill out a survey.
  • Don’t ask questions about something you won’t or can’t change. If you have no intention of changing the temperature in the room, don’t ask the students about it. You are wasting their time and yours and diluting the survey.
  • Have a purpose or goal for every question. This is similar to the statement above, but has a slightly different application. You may be asking a question to gather data or to set up a benchmark for further training. You may want to verify that your training class aligns with company goals or with your philosophy of education. Whatever the reason, make sure you have one.
  • Don’t ask leading questions. It won’t take too many classes before you’ll realize that you can manipulate answers. The goal is real data. Data that makes the program, instructor, company, or product look good at the expense of improvement is not helpful.
  • Allow for anonymity. Depending on the questions you are asking, it may be important to allow students to remain anonymous. This is especially true if you are asking questions about the instructor’s facilitation skills, as mentioned below.
  • Ask to quote them by name. This may seem like a complete contradiction to the statement above, but it is not. Ask the students if they would like to make a statement about the class. Make sure you allow for the statement to be recorded separately from their anonymous survey. Statements from students can lend credibility to your class and you can often learn from them.
  • Provide an opportunity for written responses. Written responses are much more valuable than a checkmark or circled response. Ask them to tell you the one thing they would do differently in the next class, but also ask them what they would do the same in the next class. Let students know that you don’t want to stop doing something they think is helpful. Get feedback on what you are doing right, not just what you are doing wrong.

A Note about Measuring an Instructor’s Facilitation Skills

Surveys are not exams. They are not exams for the student and they are not exams for the instructor. Too many times students get the impression that their survey is the instructor’s assessment. If the instructor has done his/her job, he/she has begun to build a professional relationship with the students. As a result, the students want to help the instructor. Now you have skewed data.

Ask questions about the instructor’s facilitation skills under two conditions:

  1. The goal must be larger than getting a particular “score” in one class. You may be changing the style of teaching and want to know how students adapt to it. You may want to create an environment of continuous improvement over a long period of time. Those are good reasons to evaluate the instructor’s ability.
  2. The survey must be delivered in such a way that the students feel comfortable that the instructor will not see their answers. When an online survey is not possible, ask an administrator or someone other than the instructor to give out the survey. Let them collect the surveys and put them in an envelope while the instructor is not present. The goal is to make the students as comfortable as possible writing their true opinions.

Conclusion

Surveys and course evaluations are here to stay, and I have renewed faith in their ability to improve the effectiveness of product training when administered correctly. To be effective as an instructor, you must measure results. You must verify that your students can perform the objectives you set for the class, and you must verify that the class itself was effective in making that happen.

Making It Practical

The reason you give surveys and exams is to make sure that the training you are offering is effective. Good surveys can also ensure that you are always improving your training courses and listening to your students.

  1. In your own words, describe the difference between a Likert‐like question and a performance‐focused survey question.
  2. Describe the difference between assessing an individual’s knowledge and assessing their skills.
  3. In your own words, and applied to your own setting, list the benefits of using a survey to evaluate a class.

Review of Part Two: The Strategy of Hands‐On Learning

  1. Which of the principles in the last three chapters might you use to improve the product proficiency strategy in your own program? Which one(s) confirms a process or strategy that you are doing well?

Notes
