Chapter 11
Evaluating the Training Intervention

On one occasion a client in the UK retail sector told us that they intended to evaluate the effectiveness of a training intervention by measuring the increase in sales revenue. The brief was to train managers and sales assistants in the in-car entertainment (ICE) sections of a large chain of motor accessory shops in the south of the UK. The training need was to improve meeting and greeting skills and to develop abilities to help customers make the most appropriate choice of head units and speakers for their vehicles.

The company intended to measure the increase in sales for the 12 weeks following the training throughout the 40 southern stores, and to compare these results with those of the 100 stores in other parts of the country where no training was taking place. We pointed out that other variables, apart from the training, would inevitably affect sales, but the client was determined to measure success in this way.

The training took place as planned, and reactions were measured at its conclusion in the traditional way, with participants completing a course assessment form ('happy sheet'). Many positive comments were received, and we then waited while sales revenue was monitored across the ICE departments of the 40 stores in which the training had taken place and in the control group of 100 stores. At the end of the 12 weeks it was announced that sales of head units and speakers were up by 9 per cent in value terms in the south, whereas they had not moved at all in other parts of the country. The training was heralded as a great success!
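The client's measurement approach amounts to a simple uplift-versus-control comparison. A minimal sketch of that logic, using invented figures rather than the actual store data, makes its main weakness easy to see:

```python
def pct_change(before, after):
    """Percentage change in sales value between two periods."""
    return (after - before) / before * 100

# Twelve-week sales totals: trained southern stores vs untrained control
# stores. These figures are illustrative only, not taken from the case.
trained_before, trained_after = 1_000_000, 1_090_000   # up 9% in the south
control_before, control_after = 2_500_000, 2_500_000   # flat elsewhere

trained_uplift = pct_change(trained_before, trained_after)
control_uplift = pct_change(control_before, control_after)

# The apparent training effect is the gap between the two movements;
# any confounding variable (such as a new product launch in the trained
# stores) is silently folded into this single number.
apparent_effect = trained_uplift - control_uplift
print(f"Apparent training effect: {apparent_effect:.1f} percentage points")
```

The calculation credits the training with anything that moved southern sales during the 12 weeks, which is exactly the problem the case goes on to reveal.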

Previous experience raised a number of questions in our minds. What other variables could have affected the result? If the result were mainly due to the training, would the higher level of sales be maintained? Would it be possible to repeat the apparent success in the other stores if the training were rolled out across the remainder of the country? Over the next few weeks the client considered how to move forward and decided it would be beneficial to have in-house staff trained to deliver the training in the remaining stores. We embarked on this next task. While doing this work we learned that during the 12 weeks of monitoring the company had introduced a new product in some of the southern stores, which had helped to drive sales up. No analysis was done to try to isolate the effects of this, but it would clearly have had some impact on the +9 per cent result.

This example demonstrates how difficult it is to measure with confidence the effects of training on business results. There will always be other variables that are difficult to isolate, and it is rarely possible to be certain that training alone has produced an improvement. This is true even when all of those trained are in the same country and the business operates under a reasonably homogeneous structure and set of economic conditions. Imagine how much more difficult it is when those trained return to businesses in nine or ten countries that are all at different stages of the economic cycle and of economic development. Sales could be increasing in one country at the same time as they are decreasing in another, purely through the effect of differing market conditions and regardless of the effectiveness of the training.

For any international organization attempting to deploy a centrally sponsored initiative using training as a leading driver of change, measurement of the results achieved by the training will be further complicated by local variations in its information systems, job titles, reporting structures and operating methods.

However, trainers should always welcome attempts to measure the effectiveness of learning. It would add greatly to the strength of the argument for higher and more consistent investment in training if we could discover how to quantify the results of learning in a way that relates to business results.

In this chapter we look at some ways in which training effectiveness can be evaluated and the challenges of doing this with international managers.

The process is shown in Figure 11.1. The output from the conduct stage is the participant action plans. In the evaluation stage the opportunity exists for the trainees' management to support the implementation and to observe changes in behaviour. It may also be possible to measure the results achieved for the business. During and at the end of the training programme the learning that has occurred can be tested and the participants' reactions can be measured. The learning and training evaluation reviews bring these different elements together and are the output from the evaluation stage of the SUCCESS process.

Figure 11.1 Evaluation of training process


Structural framework

Most writers draw a distinction between the validation of training and its evaluation. This is useful in the sense that it is clearly very important to measure success against the original objectives. This is validation. If the training has met its original objectives in full, by whatever measures were agreed at the time the objectives were set, then it has been positively validated. For instance, if the objective of a piece of induction training was for trainees to have an awareness of company strategy, and after the training the participants reported that they now had a much better appreciation of the background, context and direction being taken by the company, then it would be reasonable to say that the training had been validated.

Evaluation of training is a much broader measure of the degree of success of the training. It takes into account effects of the training that are above and beyond those envisaged in the original objectives. Kirkpatrick's model is the best-known and most respected framework for evaluating training.

Kirkpatrick envisaged four possible levels of evaluation that he named as follows:

  • Reaction - do participants like the training?
  • Learning - do they understand the training?
  • Behaviour - do they use the learning in practice?
  • Results - does using the training have any impact on results?

Organizations at 'level one - trainee driven' and 'level two - manager driven' of training commitment, described in Chapter 6, are generally content with evaluation based on the trainees' verbal and written reaction to the training. This rarely goes beyond comments such as 'it was a good course' or, conversely, 'what a waste of time and money'. Level three organizations will also look for behavioural changes, while 'level four - strategy driven' ones will also seek to measure the business results achieved.

The Art of the Possible

While it is clearly desirable to measure the results of the training, and hence the return on training investment, this is not generally practical, and even where it is possible it can be costly. Good evaluation is therefore very much the 'art of the possible'. The greater the organization's level of training commitment, the more that will be possible.

As mentioned in Chapter 8, it is important to agree at the outset, before the training is created, how success is going to be measured. What are the key performance indicators (KPIs) going to be? For instance, if there is to be a written test of learning at the end of three days' training, the trainer needs to pay a great deal of attention to helping the participants retain the knowledge aspects of the learning. This will probably mean incorporating revision sessions to reinforce the learning. Alternatively or additionally, if the participants' bosses are going to evaluate the training by observing specified changes in their subordinates' interpersonal behaviours, then clearly skills will need to be practised during the training, using methods such as role playing, possibly with CCTV feedback to identify positive points and behaviours where further improvement is needed.

In reality the vast majority of training organized for international managers, particularly in 'level one - trainee driven' and 'level two - manager driven' organizations (see Chapter 6), is evaluated almost exclusively through session scores and end-of-course written feedback, in addition to the verbal feedback participants give on returning to their places of work. Recognizing this, most trainers will deliberately stress those interactions that are directly related to the questions on the course assessment forms, and make the training an enjoyable experience, working towards leaving the participants on a high at the end of the course. This will help to achieve positive feedback and, done well, may even motivate participants to apply the learning on their return to the workplace.

Feedback from participants can be obtained throughout a training programme; this is discussed further in Part III. Having participants score and comment on each session is one way of doing this. The trainer can also obtain feedback in less formal ways from any of the participative exercises run during a training event. Participants can be asked what they consider they have learned; CCTV recordings can be compared after the training to see whether there was any noticeable improvement in behaviours. An exercise can be set that requires participants to recall learning from an earlier part of the course in order to complete it successfully. Feedback of this type is useful in keeping a finger on the pulse as the training proceeds and enables continuous improvement in the way the learning is delivered. It also provides useful information to feed back to the training sponsor about the aspects of the event that were particularly successful.

If bosses and colleagues (usually those working for level three and four organizations; see Chapter 6) have been involved in the planning of the training, they are likely to be supportive when participants return to work. Participants' motivation will then be sustained and newly learned behaviours, knowledge and skills are likely to be evidenced, which, provided everything else remains stable, should lead to an improvement in business results.

Another very practical way of evaluating learning and at the same time aiding the transfer of newly acquired skills and knowledge to the workplace is to make really good use of the end-of-session and end-of-course action plans. For instance, it may be possible to arrange, before training takes place, for participants' bosses to review with each participant the key actions that they intend to take as a result of the training and for the boss to do whatever is possible to encourage the successful implementation of these plans. This could involve the boss agreeing to allocate additional resources or to create easy access to senior people in another department to enable some change in procedures to take place. The trainer is then in a position to ask for post-event feedback on the effectiveness of the actions. This feedback can be sought from the boss and from the participant to obtain a good balance of perceptions.

Reaction

By 'reaction', Kirkpatrick meant what participants felt about the value of the training on its completion. This is usually obtained by asking participants to complete a happy sheet, as mentioned above. These assessments usually aim to capture the value the participants felt they gained from the training and also to provide the trainer and the training sponsor with information that will help them improve the training before it is next delivered.

An example of a format for recording participant reactions is given in Figure 11.2. This is the most common form of evaluation applied to the training of international managers today. It is often supplemented by participants being asked to score and make comments on each session of the course. Care needs to be taken that the words used are interpreted in the same way by participants with varying command of the English language. Subtle nuances illustrated by rating scales such as 'very well', 'quite well', 'not very well', 'not at all well' may be lost on many non-native speakers. Care should be taken to use simple words that clearly distinguish each element of the scale such as 'completely', 'mostly', 'partially', 'not at all'.

A second complication is that different cultures have different perceptions of the rating scales. Northern Europeans regard life as less perfect than it could be, and hence any endeavour as capable of improvement; as a result they tend not to give maximum scores to the training. Participants from Middle and Far Eastern cultures, on the other hand, accepting the implicit authority and greater knowledge of the trainer, will tend out of respect and politeness to mark more highly.

Figure 11.2 End-of-course participant feedback


Learning

By learning, Kirkpatrick meant the additional knowledge and skills that had been acquired during training. This is usually measured by a written or practical test. Learning is not often measured in this way following training of international managers. It is felt that adults, particularly in Western cultures, do not expect to be formally tested on their learning.

There are some exceptions and they usually apply to courses that result in a particular qualification, such as the recognition of competence to use a particular type of psychometric questionnaire.

Immediate learning can be tested less formally by asking breakout groups to recall key learning points from a course, or by building a quiz into the course to test knowledge retention. These methods are more common in management training than formal written tests.

Behaviour

By behaviour, Kirkpatrick meant the ways in which participants can be seen to go about aspects of their jobs differently as a direct result of a training intervention. The type of training for international managers typically evaluated in this way is that in which the development of interpersonal skills constitutes an important part of the programme. Some evaluation is often part of the programme itself, using feedback from fellow participants or playback of a video recording. Post-course evaluation will often take place when the participants' managers observe changed behaviours back in the workplace.

In some high power distance cultures this may be problematic because there is usually limited opportunity for the boss to observe the subordinate actually performing the job. There is also a tendency in these cultures for managers to focus on what their subordinates are failing to do rather than on what they are successfully implementing.

Results

By 'results', Kirkpatrick meant the degree to which a training event directly impacts on the participants' contributions to business results. The case at the beginning of this chapter is a good example of an attempt to evaluate training in terms of its effects on business results.

For many years, accountants and trainers alike have experimented with attempts to carry out cost-benefit analysis on training. In fields such as operator training this has been done quite successfully, by equating the time saved in reaching experienced worker standard to a financial saving and comparing this with the costs of creating and conducting the training. In the case of international managers it is usually possible to assess the costs associated with the training, that is, costs of time spent preparing materials and conducting the training, venue costs, salaries of the participants during training and any other costs directly associated with the training as discussed in Chapter 5. It is, however, much more difficult to assess the benefits in financial terms, because then we are back to the need to measure the effect the training has had on business results.
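The cost side described above can be tallied directly. A hedged sketch, using entirely illustrative cost headings and figures, and borrowing the operator-training approach of valuing the time saved in reaching experienced-worker standard:

```python
# Illustrative cost headings for a training intervention; none of these
# figures come from the chapter - they are assumptions for the sketch.
costs = {
    "materials_preparation": 8_000,
    "trainer_delivery": 12_000,
    "venue": 6_000,
    "participant_salaries": 15_000,   # salaries paid while off the job
    "travel_and_other": 4_000,
}
total_cost = sum(costs.values())

# Operator-training style benefit estimate: weeks saved in reaching
# experienced-worker standard, valued at the trainee's weekly cost rate.
weeks_saved_per_trainee = 3
weekly_cost_per_trainee = 600
trainees = 30
estimated_benefit = weeks_saved_per_trainee * weekly_cost_per_trainee * trainees

# Return on the training investment, as a percentage of cost.
roi_pct = (estimated_benefit - total_cost) / total_cost * 100
print(f"Cost {total_cost}, benefit {estimated_benefit}, ROI {roi_pct:.0f}%")
```

The arithmetic is trivial; the difficulty, as the chapter notes, lies in finding a defensible benefit figure for management training, where no equivalent of 'experienced-worker standard' exists.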

Nevertheless, some inference can be drawn from changes in behaviour leading to individuals achieving particular results, and from their impact on the whole organization. Returning to the case discussed at the start of this chapter, it was found that after the training the sales assistants more frequently approached customers browsing the audio section of the store with specific questions, rather than the rather general 'May I help you?'. In turn they converted a higher proportion of customers into purchasers, with larger than average order sizes, which was felt likely to result in increased total sales. A word of caution is appropriate, however: converting more customers at a higher sales value does not, on its own, lead to better overall sales results. It was observed that sales assistants were spending longer with each customer and were therefore dealing with fewer customers on average per day. If the higher conversion rate had not more than compensated for the reduction in customers attended to, overall sales could have been negatively impacted. Fortunately this was not the case. This in turn led to the identification of the next phase of training: upgrading shop assistants' capability to deal with more customers without decreasing their conversion rate or average order size.
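The trade-off described above reduces to simple arithmetic: daily sales equal customers served, times conversion rate, times average order value. A short sketch with illustrative numbers (not the case figures) shows how the competing effects net out:

```python
def daily_sales(customers_served, conversion_rate, avg_order_value):
    """Sales value per assistant per day as the product of the three drivers."""
    return customers_served * conversion_rate * avg_order_value

# Before training: more customers served, but low conversion and order value.
before = daily_sales(customers_served=60, conversion_rate=0.20, avg_order_value=80)

# After training: longer conversations mean fewer customers served, but a
# higher conversion rate and a larger average order.
after = daily_sales(customers_served=45, conversion_rate=0.30, avg_order_value=90)

# Here the gains outweigh the lost footfall; with a smaller lift in
# conversion or order value the net effect could just as easily be negative.
print(f"before {before:.0f}, after {after:.0f} per assistant per day")
```

Working the numbers this way makes clear why the next phase of training targeted the number of customers dealt with, while holding conversion rate and order size steady.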

Evaluating the results on an international basis is even more challenging, particularly as, even in highly centralized companies, different regions will operate with varying organization structures, systems and operating methods for historic and cultural reasons. Merely obtaining the metrics can prove very difficult. For example, one of the KPIs for the training of key account managers was the number of projects they organized with customers. Obtaining data on this, however, required setting up a separate reporting process through each country's operating committee, to whom the information was to be reported by each account manager once a month. This raised the question of whether the value of the information warranted the cost and effort involved.

The Main Evaluation Methods

A summary of the main evaluation methods is shown below in Table 11.1, together with the main challenges in using them in an international context.

Output from the evaluate stage

Kirkpatrick recognized that it becomes progressively more difficult to carry out evaluation as one moves through the four levels from 'reaction' to 'results'. This does not mean it should not be attempted at the higher levels of 'behaviour' and 'results' but that the difficulty needs to be recognized.

Within a realistic period following the course, therefore, the participants' managers and/or the HR department or trainer need to assess the effectiveness of the learning that has occurred, despite the difficulties in doing so. The performance of the participants needs to be evaluated against the learning objectives. Table 11.2 shows a suitable format for this. The results achieved by each individual are reviewed against the learning objectives set and any gaps identified. This in turn leads to the identification of further training needs.

The individual learning reviews can be brought together to evaluate systematically the overall effectiveness of the training, using the training evaluation report shown in Table 11.3. The planned objectives for each of Kirkpatrick's four levels of evaluation are compared with what was actually achieved, and comments made on the variance.

Table 11.1 The main training evaluation methods

Evaluation methodology | Measures | Challenges to implementation with international managers
1. Session and daily feedback | Reaction | Multiple completions of forms are not received well in low uncertainty avoidance cultures
2. Course evaluation form | Reaction | Words need to be chosen carefully, and cultures with low masculinity and low uncertainty avoidance are reluctant to give maximum scores
3. Pre- and post-testing with same questionnaire | Learning | More acceptable in high power distance cultures
4. Written tests before, during or at end of course | Learning | More acceptable in high power distance cultures
5. Verbal test during course | Learning | Those who are fluent in the course language will respond first
6. Line manager's observation | Behaviour | In high power distance cultures managers often tend to focus on negative rather than positive behaviours
7. Role plays and exercises | Learning and behaviour | Always more difficult in a second language
8. Tracking output | Results | The system enablers to measure the outputs need to be in place wherever the trainees are working

Table 11.2 Learning review

Name | Learning objectives | Results achieved | Further training needs
Table 11.3 Training evaluation report

Evaluation level | Anticipated/planned | When expected | When achieved | Comments
Reaction | | | |
Learning | | | |
Behaviour | | | |
Results | | | |

Summary

When there is evidence that the objectives of a training intervention have been met in full, using whatever measure was agreed at the outset, it could be said that the training has been validated positively.

However, Kirkpatrick suggests that training can be further evaluated at different levels. The first level of evaluation is that of 'reaction', which is usually achieved by asking participants to record their reactions and feelings immediately after the training has finished. The trainer needs to be aware of the likely effect of the participants' cultural background on the responses to questionnaires.

The second level of evaluation is that of 'learning', which is usually achieved by asking participants to take a test at the end of the training. This, except for more junior staff, tends to go against accepted norms in Western cultures and is only used in a minority of management training situations.

The third level is that of 'behaviour', which is usually evaluated by bosses and peers observing a participant's behaviour after attendance at a training event. In high power distance cultures there may be a tendency for managers to focus on what their subordinates are failing to implement rather than reinforcing positive behavioural change.

The fourth level of evaluation is that of 'results', when a participant's contribution to business results is measured following a training event. This is particularly challenging in the case of international managers because of the diversity of countries and the situations in which they work.

The learning review form provides a means of assessing the extent that each individual has achieved their learning objectives. This achievement can be brought together in the training evaluation report form to assess the overall impact of the training.

There are particular advantages in focusing participants and their bosses on post-event evaluation through the learning and training evaluation reviews which are the outputs from this stage. These will help to promote the notion that the purpose of training is to achieve lasting change as discussed further in the next two chapters.

Action plan

For the trainer

Consider a training event that is planned to take place in the fairly near future, and the type of feedback that it would be useful to receive from participants and their bosses a few months after the participants have returned to the workplace. This feedback will be used to evaluate the training.

Construct a format for the feedback, including as many of the following aspects (and others) as you feel are relevant to the training intervention that you have in mind. Bear in mind the culture of your participants and their bosses. Consider attempting to achieve buy-in from the participants and their bosses so that they complete this feedback for you.

For the participant

  • In which aspects of your job do you feel you have taken a different approach since the training?
  • Which particular tools or elements of learning from the training have you used since returning to work?
  • What additional aspect of learning would have been beneficial to you?
  • Which parts of your action plan do you feel you have implemented successfully?
  • In which parts of your action plan do you feel you need additional support to bring about a successful implementation?
  • How would you rate the level of support you have received?
  • Have you had the appropriate information made available to you to enable you to implement your action plan?
  • What have you told people (colleagues, subordinates, superiors, friends, family) about the training?
  • What has made it difficult for you to implement what you have learned?
  • What do you need to overcome these difficulties?
  • How helpful have your interactions with the trainer been since the course?
  • How could the training programme be improved?
  • What further training and development is needed for you to be able to implement all you have learned?

For the participant's boss

  • In what ways do you see a change in the participant's behaviour following the training?
  • Which aspects of the job is the participant now approaching differently?
  • What benefits should this bring to the organization?
  • What are the key performance indicators?
  • What changes have been observed in the key performance indicators?
  • How has the participant shared their end-of-course action plan with you?
  • Which aspects of the action plan are being implemented successfully?
  • With which aspects of the action plan are you giving support in implementation?
  • What have you been told about the course by the participant?
  • How is the participant using the new knowledge gained?
  • In what way has the training helped prepare the participant for their likely next job?
  • How could the training programme be improved?
  • What further training and development do you feel is needed to fully implement the learning?