Chapter 3

Planning Your Evaluation Project

Donald J. Ford

In This Chapter

This chapter explores the techniques for planning a comprehensive evaluation, including planning data collection, data analysis, and project management. Upon reading this chapter, you will be able to

  • describe the importance of planning for effective evaluation
  • align stakeholder needs, program objectives, and evaluation outcomes
  • identify different techniques to collect and analyze evaluation data
  • use a comprehensive evaluation-planning tool to manage evaluation projects.
 

The Importance of Planning for Effective Evaluation

Evaluation, like everything else in life, takes careful planning if one is to reap what one sows. The need to plan an evaluation before conducting it has always been recognized, but the scope and complexity of evaluation projects today demand even better planning than in the past. Without proper planning, evaluations often run into trouble: data may become contaminated, the conclusions drawn may be invalid, and the evaluation may fall behind schedule, go over budget, or be abandoned altogether. Thus, launching an evaluation without a plan is like taking a trip without an itinerary.

Project Alignment

Vital to planning an evaluation is deciding who will use the findings and for what purposes. This is critical to establishing the scope of the evaluation and the kinds of data that will need to be collected. Also essential to planning evaluation is encouraging stakeholders to base their decision making on facts. It is important to define the kinds of decisions that stakeholders are likely to make and to determine the types of data and information that would be most useful to the decision-making process.

A second key purpose of evaluation is to determine the effectiveness of performance improvement solutions and hold accountable those responsible for designing and implementing human resource development programs. To determine program effectiveness and accountability, the right kinds of data must be collected and analyzed, and the effect of the solution must be isolated from other influences that may affect program outcomes.

Both of these key purposes—decision making and accountability—are driven by the objectives of the program being evaluated. Powerful objectives that are clearly measurable position programs for success, whereas weak objectives without measures set up a program evaluation for difficulty, if not failure.

Data Collection Planning

Evaluation data may consist of many kinds of information, from the numerical to the attitudinal. Because the types of evaluation data vary with the nature of programs and the needs of stakeholders, it is important to have a variety of data collection methods available to address a wide range of learning and performance improvement solutions.

Practitioner Tip

In planning evaluation design, I identify the outcomes of the job and make sure they are measurable. This allows for a baseline measurement of how well job performers are doing and can be compared to end-of-program outcomes to measure the impact. This gives a more reliable and accurate measure than job behavior because it is tied directly to the business goals.

Dennis Mankin, CPT

Principal Consultant, Beacon Performance Group

Techniques to Collect Evaluation Data

This section will briefly introduce the major techniques used for data collection, describe their uses, and summarize the advantages and disadvantages of each collection method. In Section II of this book, you will learn in more detail how to design and use each data collection method.

Surveys and Questionnaires

Among the most widely used data collection techniques, surveys and questionnaires allow evaluators to collect data from large numbers of people with relative ease, and the results can be summarized and analyzed quickly.

Tests and Assessments

Tests are the oldest form of educational evaluation and are still considered the best gauge of learning. Though we typically think of paper-and-pencil tests, assessments for evaluation may include any of the following: written tests and quizzes, hands-on performance tests, or project/portfolio assessments.

Interviews

Probably the most widely used data collection method, the individual interview is also the most flexible tool available and the easiest to deploy. All it takes is an interviewer armed with a list of questions and a subject willing to answer them. In evaluation, interview subjects are often drawn from the project sponsor or client, senior managers responsible for business decisions related to the evaluation, training participants, managers of training participants, instructors, and the instructional designers responsible for the training.

Focus Groups

Long a staple of market researchers, focus groups, or group interviews, also have become an important tool for evaluators. Although more widely used in needs analysis, focus groups of training participants and key stakeholders conducted after training programs can yield rich data to help understand how training affects learners and their organizations.

Action Plans

An action plan is a great tool to get learners to apply new skills on the job. It is usually created at the end of training and is meant to guide learners in applying new skills once they return to work. It may also involve their supervisors and become a more formal performance contract. For evaluation, action plans can be audited to determine if learners applied new skills and to what effect.

Case Studies

Case studies are one of the oldest methods known to evaluators. For many years, evaluators have studied individuals and organizations that have gone through training or performance improvement and written up their experiences in case study format. This method is still widely used and forms the basis of entire evaluation systems, such as Robert Brinkerhoff’s Success Case Method (2003).

Performance Records

This category includes any existing performance data the organization already collects, often in computer databases or personnel files. Organizations now measure a massive amount of employee activity that is often relevant to training evaluators, including the following kinds of performance records: performance appraisals, individual development plans, safety, absenteeism and tardiness, turnover, output data (quantity and time), quality data (acceptance, error, and scrap rates), customer satisfaction, labor costs, and sales and revenues.

Advantages and Disadvantages of Data Collection Methods

No single data collection method can cover all the needs of evaluation, and none is perfect. Table 3-1 compares the relative advantages and disadvantages of the methods described above and serves as a job aid for selecting data collection methods.

Data Collection Planning Tool

To put together an effective data collection plan, the following questions are key:

  • What objectives are being evaluated?
  • What measures are being used to evaluate the objectives?
  • Where are the sources of data?
  • How should data be collected?
  • When should data be collected?
  • Who should be responsible for collecting the data?

The answers to these six questions can then be assembled into a worksheet that will become the final work plan driving the data collection phase of evaluation. Table 3-2 is a tool to record planning decisions about data collection.
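If the plan is kept in a spreadsheet or a script rather than on paper, each row of the worksheet can be captured as one record that answers the six questions above. The following is a minimal sketch in Python; the field names are shorthand of my own, and the sample row simply restates the Level 1 entry from table 3-3, so nothing here is part of the ROI Institute template itself.

from dataclasses import dataclass

@dataclass
class DataCollectionPlanRow:
    """One row of a data collection plan: the answers to the six planning questions."""
    level: int            # evaluation level (1-5)
    objective: str        # what objective is being evaluated
    measure: str          # what measure is used to evaluate the objective
    method: str           # how the data will be collected
    source: str           # where the data will come from
    timing: str           # when the data will be collected
    responsibility: str   # who is responsible for collecting the data

# Sample row, restating the Level 1 entry from table 3-3
plan = [
    DataCollectionPlanRow(
        level=1,
        objective="Satisfied with learning experience",
        measure="% satisfied",
        method="Survey",
        source="Learners",
        timing="End of class",
        responsibility="Instructor",
    ),
]

for row in plan:
    print(f"Level {row.level}: {row.objective} -> {row.measure} via {row.method}")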

Table 3-3 is an example of how the data collection planning tool has been applied to evaluating a new hire orientation and on-boarding program that included initial training, facilities tour, follow-up training 30 days later, and supervisor and peer mentoring.

Table 3-1. Comparison of Data Collection Methods


Surveys and Questionnaires

Advantages:
  • Flexible data collection tool that can be custom designed to cover a wide range of issues
  • Easy to administer and analyze
  • Provide a level of precision that other data collection tools lack
  • Allow for large-scale participation in evaluation studies that otherwise would be cost prohibitive

Disadvantages:
  • Often misused and overused in situations where they yield little valid information
  • Response rates for non-mandatory surveys tend to be low, causing problems in generalizing conclusions
  • Difficult to design good survey questions that are unambiguous and yield data that are valid and reliable
  • Asking questions that are beyond the knowledge level of respondents creates problems with inaccurate or missing data items

Tests and Assessments

Advantages:
  • Well-developed body of science in the construction, administration, and analysis of tests
  • Can verify individual competencies acquired through training
  • Can judge the effectiveness of training design and implementation
  • Can demonstrate the contribution of training to the development of human capital

Disadvantages:
  • Time required to create tests must be justified by the need to verify acquisition of new knowledge and skill
  • Unpleasant experience for learners; participant resentment and anxiety increase when personnel decisions may result from test scores
  • In high-stakes testing (when a test is used to make a decision about a person’s job or career), must comply with complex legal requirements governing the use of tests in the workplace

Interviews

Advantages:
  • Highly flexible tool that allows for in-depth exploration of issues related to evaluation
  • Allow for probing of attitudes and emotions that underlie people’s behavior
  • Enable evaluators to build rapport with interview subjects and “get inside their heads” to explore issues from the subject’s point of view

Disadvantages:
  • Take a great deal of time, both for the evaluator and the interviewee
  • Without proper preparation, interviews provide data of limited usefulness
  • Tendency for respondents to tell evaluators what they think they want to hear, rather than the unvarnished truth
  • Difficult to analyze and draw conclusions from, due to the time-consuming task of transcribing and reviewing interview data and their subjective nature

Focus Groups

Advantages:
  • More efficient than individual interviews
  • Encourage synergy and interaction among participants that can yield higher-quality data than individuals might report in isolation
  • Encourage stakeholder participation in the evaluation process, thus building support for training evaluation work
  • Identify intangible issues that might not otherwise surface through other data collection efforts

Disadvantages:
  • Require careful planning and design and a skillful facilitator
  • Unplanned focus groups can degenerate into gripe sessions or get off topic easily
  • If participants do not represent the target population, findings cannot be generalized to the larger group
  • Individual participants may create problems by either over-participating (monopolizing speaking time) or under-participating (remaining silent)

Action Plans

Advantages:
  • A flexible tool that can be customized for individual use
  • Allow each participant to chart his or her own course in applying new skills
  • Easily yield evaluation data supplied by participants themselves, in conjunction with their supervisors
  • Increase the amount of skill transfer, a key goal of any training program

Disadvantages:
  • Require a firm commitment from learners that is often lacking
  • If supervisors are not on board, the action plan is unlikely to stick
  • Some learners never get a chance to apply new skills, because their job or their manager does not allow it
  • Data are often anecdotal and idiosyncratic, making it difficult to analyze them and draw valid conclusions about training effectiveness

Case Studies

Advantages:
  • Capture stories of people’s life experiences
  • Long history in education and a well-developed body of literature exists to guide case study evaluation research
  • Allow in-depth study of the effect of training and highlight individual differences that may be central to understanding the effectiveness of an intervention

Disadvantages:
  • Require the active participation of stakeholders and their commitment to follow through and tell their stories
  • Highly dependent on situational variables that cannot easily be replicated, so generalizing evaluation findings is problematic

Data Analysis Planning

Once data have been collected, it is critical to analyze them properly to draw the correct conclusions. Many techniques exist, depending on the type of data collected. This section reviews the most common data analysis techniques used in evaluation.

Practitioner Tip

In planning data collection for classroom settings where the participant has access to a computer, an online survey is used. At the end of the class, data are automatically tabulated and provided to trainers and instructional designers so they can make immediate adjustments based on participant feedback.

Nishika de Rosairo

Human Capital, Deloitte Consulting LLP

Table 3-2. Data Collection Plan Template


DATA COLLECTION PLAN

Purpose of This Evaluation:
Program/Project:
Responsibility:
Date:

 

Level | Broad Program Objective(s) | Measures | Data Collection Method/Instruments | Data Sources | Timing | Responsibilities
1 Reaction and Planned Action | | | | | |
2 Learning and Confidence | | | | | |
3 Application and Implementation | | | | | |
4 Business Impact | | | | | |
5 ROI | | | | | |

© The ROI Institute, 2004. Used with permission.

Table 3-3. Application of Data Collection Plan


Level | Objective | Measures | Data Collection Method | Data Sources | Timing | Responsibilities
1 | Satisfied with learning experience | % satisfied | Survey | Learners | End of class | Instructor
2 | Identify company’s business model and HR policies | % items correct | Test | Learners | End of class | Instructor
3 | Comply with company’s HR policies | % in compliance | Performance records | HR, supervisors | First 6 months of employment | Employee relations/supervisors
4 | Increase retention of new hires | % turnover rate | Performance records | HR | Track for 12 months | HR
5 | Payback from new hire orientation | Cost of training; benefit of lower turnover | Training budget; performance records | Training; HR | After 1 year | Training manager; employment manager

Techniques to Analyze Data

Statistical Analysis

Statistical analysis is appropriate whenever data are in numeric form. This is most common with performance records, surveys, and tests.

Statistics has three primary uses in evaluation, each illustrated in the brief sketch that follows this list:

  • summarize large amounts of numeric data, including frequencies, averages, and variances
  • determine the relationship among variables, including correlations
  • determine differences among groups and isolate effects, including t-test, analysis of variance (ANOVA), and regression analysis.
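To make these three uses concrete, the brief Python sketch below summarizes two sets of hypothetical test scores, correlates pre- and post-test results, and runs a t-test comparing a trained group with an untrained comparison group. All of the numbers are invented, and the widely available scipy library is assumed; treat this as an illustration rather than a prescribed procedure.

from statistics import mean, stdev
from scipy import stats

# Hypothetical post-test scores (percent correct)
trained    = [82, 88, 75, 91, 79, 85, 90, 77]   # attended the program
comparison = [70, 74, 68, 81, 72, 69, 76, 73]   # did not attend

# 1. Summarize large amounts of numeric data: averages and variability
print(f"Trained:    mean={mean(trained):.1f}, sd={stdev(trained):.1f}")
print(f"Comparison: mean={mean(comparison):.1f}, sd={stdev(comparison):.1f}")

# 2. Determine relationships among variables: pre-test vs. post-test correlation
pre_test = [65, 72, 60, 78, 63, 70, 74, 62]     # invented pre-test scores, same learners
r, p_corr = stats.pearsonr(pre_test, trained)
print(f"Pre/post correlation: r={r:.2f} (p={p_corr:.3f})")

# 3. Determine differences among groups: Welch's t-test (unequal variances)
t, p_diff = stats.ttest_ind(trained, comparison, equal_var=False)
print(f"Group difference: t={t:.2f}, p={p_diff:.3f}")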

Qualitative Analysis

Qualitative analysis examines people’s perceptions, opinions, attitudes, and values—all things that are not easily reduced to a number. It addresses the subjective and the intangible, drawing on data from interviews, focus groups, observations, and case studies. Although difficult to master, this form of analysis gives a more complete, in-depth understanding of how stakeholders think and feel about training and performance improvement. The data, once summarized in some form, are then analyzed to discover the following (a simple coding sketch follows this list):

  • Themes: common, recurring facts and ideas that are widely expressed and agreed upon
  • Differences: disparate views and ideas expressed by different individuals and groups of people under study and the reasons for these differences
  • Deconstructed Meaning: the underlying values, beliefs, and mental models that form the cultural foundation of organizations and groups.
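Theme identification is ultimately a judgment task, but a first pass can be supported with very simple tooling. The sketch below tallies how often coded themes appear across free-text comments; the comments and the keyword-to-theme map are both invented for illustration, and real qualitative coding is considerably more nuanced.

from collections import Counter

# Hypothetical open-ended survey comments from training participants
comments = [
    "My manager gave me time to practice the new process",
    "Not enough time on the job to apply what I learned",
    "The examples matched my daily work",
    "No time to apply the skills; workload is too heavy",
]

# Simple keyword-to-theme coding scheme (an assumption made for this sketch)
coding_scheme = {
    "manager": "supervisor support",
    "no time": "lack of application time",
    "not enough time": "lack of application time",
    "examples": "relevance of content",
}

theme_counts = Counter()
for comment in comments:
    text = comment.lower()
    # count each theme at most once per comment
    matched = {theme for keyword, theme in coding_scheme.items() if keyword in text}
    theme_counts.update(matched)

# Recurring themes rise to the top; one-off mentions point to differences worth probing
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} comment(s)")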

Isolating Program Effects

Just because we measure a result does not mean that training caused it. Organizations are complex systems subject to the influences of many variables, and isolating the effects of an individual program can be confusing and difficult. Yet, it is essential to identify the causes of increased knowledge and performance if we intend to properly evaluate training outcomes.
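One widely used isolation technique, the comparison (control) group, can be sketched in a few lines: the change observed in a similar group that did not receive training is subtracted from the change observed in the trained group, and only the difference is attributed to the program. The figures below are invented, and in practice the difference would still need the statistical testing described earlier.

# Average performance measure (e.g., units processed per day), before and after
trained_before, trained_after = 52.0, 61.0   # group that attended training
control_before, control_after = 53.0, 55.0   # comparison group, no training

trained_change = trained_after - trained_before   # 9.0 units
control_change = control_after - control_before   # 2.0 units, from other influences

# Only the change beyond what the comparison group shows is credited to the program
program_effect = trained_change - control_change  # 7.0 units
print(f"Estimated effect attributable to the program: {program_effect:.1f} units per day")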

Financial Analysis

When evaluation is taken to the fifth level—ROI—financial analysis becomes important. This includes assembling and calculating all the costs for the program and converting the benefits to monetary values wherever possible. The primary use of financial analysis in evaluation is to calculate a return-on-investment at the end of the program. Secondary uses include forecasting potential paybacks on proposed training and determining if business goals related to financials have been achieved.

Program Cost Calculation

When an ROI study is conducted, costs have to be considered. It’s important that all stakeholders agree on the costs at the outset. Generally, costs must be fully loaded, including both direct and indirect costs, to be acceptable in financial calculations. Table 3-4 highlights common direct and indirect evaluation costs that need to be captured.
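Two calculations sit at the heart of this analysis: the benefit-cost ratio (program benefits divided by fully loaded costs) and ROI, expressed as net benefits divided by costs, times 100. The sketch below applies both to invented figures; the cost categories mirror table 3-4, but every dollar amount is hypothetical.

# Fully loaded program costs (all figures are hypothetical)
direct_costs = {
    "needs assessment": 4_000,
    "design and development": 15_000,
    "delivery and implementation": 28_000,
    "evaluation": 6_000,
}
indirect_costs = {
    "participant salaries while in training": 22_000,
    "management time devoted to program": 5_000,
    "travel expenses for participants": 3_000,
    "administrative and overhead costs": 4_000,
    "benefits and perks": 6_000,
}
total_costs = sum(direct_costs.values()) + sum(indirect_costs.values())

# Program benefits converted to monetary value (e.g., the value of reduced turnover)
total_benefits = 140_000

bcr = total_benefits / total_costs                             # benefit-cost ratio
roi_pct = (total_benefits - total_costs) / total_costs * 100   # ROI as a percentage
print(f"Fully loaded costs: ${total_costs:,}")
print(f"BCR: {bcr:.2f}   ROI: {roi_pct:.0f}%")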

Advantages and Disadvantages of Data Analysis Methods

Like data collection methods, no single data analysis method works in all cases. Table 3-5 highlights the advantages and disadvantages of each technique to assist in making decisions about when to use each data analysis method.

Data Analysis Planning Tool

Data must be analyzed appropriately to draw the correct conclusions about the program being evaluated. Answer the following questions to help plan this crucial phase:

  • What are the needs of key stakeholders who will receive the final report?
  • What data items have been collected?
  • What data types are there?
  • How can you isolate the effects of the program/solution?
  • How can you best summarize and describe the data?
  • How can you best find relationships and differences among the data?
  • What data can be converted to monetary value?
  • What data should be reported as intangible benefits?
  • Who will be responsible for conducting the data analysis and reporting the results?

Table 3-4. Common Direct and Indirect Costs for a Program


Direct Costs:
  • Needs assessment
  • Design and development
  • Delivery and implementation
  • Evaluation (labor, equipment, and materials)

Indirect Costs:
  • Participant salaries while in training
  • Management time devoted to program
  • Travel expenses for participants
  • Administrative and overhead costs
  • Cost of benefits and perks of participants and management

Table 3-5. Comparison of Data Analysis Methods


Statistical Analysis

Advantages:
  • Well established, with a long track record of success
  • Offers a variety of techniques to treat any numerical data type
  • The level of precision is unmatched by other analytical techniques
  • Many tools automate the analysis and reduce the time and cost of conducting statistical analysis

Disadvantages:
  • Statistics often frighten the uninitiated; the learning curve is steep once one goes beyond the rudimentary
  • The many techniques available may result in inappropriate methods being used
  • Statistical techniques in the wrong hands can be deliberately misused, leading to deception and erroneous conclusions

Qualitative Analysis

Advantages:
  • Brings out the richness of detail and the human side of evaluation
  • Allows stakeholders to construct their own judgments about the effectiveness of programs and gives them a vehicle to participate in the process
  • Captures the intangible benefits of training
  • Useful in troubleshooting problems and documenting lessons learned

Disadvantages:
  • Labor intensive
  • Requires considerable skill to do correctly; if done poorly, it produces invalid results
  • Open to multiple interpretations, leading to questions of objectivity
  • May be considered less reliable than hard numbers
  • More difficult to convert to ROI

Isolating Program Effects

Advantages:
  • Essential for separating program effects from other variables
  • Several techniques exist to help isolate effects
  • Increases the credibility of ROI

Disadvantages:
  • Difficult to prove cause and effect
  • Some techniques rely on subjective opinion and may be challenged

Financial Analysis

Advantages:
  • Standard part of corporate finance, with a long history and established standards and procedures
  • Puts human resource development programs on a par with other business functions that have always had to justify their expenses by showing the monetary benefits that accrue to organizations

Disadvantages:
  • Arriving at the monetary value of costs and benefits can be challenging
  • When benefits are intangible, it is nearly impossible to calculate monetary benefits
  • The considerable effort required to analyze monetary information must be weighed against the value of doing so
  • ROI is typically attempted on only a small percentage of high-value programs

Program Cost Calculation

Advantages:
  • Accurately capturing costs is a prerequisite for calculating ROI
  • Provides greater financial visibility and control over training costs

Disadvantages:
  • Cost data may not be readily available
  • Deciding what to include in costs can be difficult

Using the data analysis plan worksheet in table 3-6, you can organize and plan the analysis phase down to each data item being collected. This tool also helps plan later phases, including ROI, if so desired.

Table 3-7 provides an example of how the data analysis planning tool could be applied.

In the example, based on the same new hire orientation and on-boarding program used in the data collection planning example, you can see how evaluation data items are linked to stakeholder needs and categorized by data item and type. The analysis includes methods to isolate the effect of the program from other variables, methods to describe and summarize the data, methods to draw inferences and conclusions about the data, and whether the data can be converted to monetary value or will be reported as an intangible benefit.

Practitioner Tip

In planning evaluation analysis, I ask stakeholders, “What would success look like?” This provides the organizational target we want to evaluate. We always have to remember to ask ourselves, “Why are we evaluating?” The answer tells us if we should focus on qualitative formative data or quantitative summative data and what kinds of measures and analysis we need to conduct. For instance, are we trying to decide which solution to choose among several options, justifying the training budget to the chief financial officer, or determining if we have met the expectations of key stakeholders?

Joe Willmore

Principal, Willmore Consulting Group LLC

Table 3-6. Data Analysis Plan


Program:
Responsibility:
Date:

 

Stakeholder Need | Data Items | Data Type (Quantitative/Qualitative) | Methods to Isolate Effects | Descriptive Analysis Methods | Inferential Analysis Methods | Monetary Value (Y/N) | Intangible (Y/N)

Table 3-7. Application of Data Analysis Plan


Program: New Hire Orientation and On-Boarding program

Responsibility: (Name) Evaluator

Date: Now

 

Stakeholder Need | Data Items | Data Type (Quantitative/Qualitative) | Methods to Isolate Effects | Descriptive Analysis Methods | Inferential Analysis Methods | Monetary Value (Y/N) | Intangible (Y/N)
Verify satisfaction with learning experience | Reaction survey questions | Quantitative | Survey design | Mean; standard deviation | Correlate satisfaction with learning | N | Y
Verify knowledge of company’s business model and HR policies | Quiz (multiple choice) | Quantitative | Test design; validity study | Mean; standard deviation | Item analysis; reliability study | N | N
Verify compliance with company’s HR policies | Initial performance appraisal | Quantitative and qualitative | Control group; expert opinion | Frequency distribution | t-test (control vs. new hires) | N | N
Determine effect on retention of new hires | Turnover rate | Quantitative | Expert opinion | Turnover ratio | Cost of turnover; causes of turnover | Y | N
Calculate payback from program | Program costs; program benefits | Quantitative | Expert opinion | ROI | Estimated value of benefits | Y | Y

A Comprehensive Evaluation Planning Tool

To manage the many details of evaluation planning, a comprehensive tool is a must. The planning tool is broken into four phases so that it can be used throughout the program evaluation process to plan and capture key evaluation data.

Phase 1: Establish the Evaluation Baseline

During the needs analysis phase, begin planning the evaluation and collecting baseline information that will establish measures for the program’s objectives and allow comparison with the final results.

Phase 2: Create the Evaluation Design

During the design of the training program, create a detailed evaluation design, including the evaluation questions to be answered, the evaluation model to be used, and the methods and tools for data collection. At this time, also decide what kinds of data analysis will be conducted, based on the types of data to be collected and the nature of the evaluation questions to be answered.

Phase 3: Create the Evaluation Schedule

During the evaluation design process, create a separate schedule for evaluation or incorporate one into the overall training plan. This will ensure that evaluation tasks are scheduled and milestones are met.

Phase 4: Create the Evaluation Budget

During the evaluation design process, develop a separate budget or at least separate line items for evaluation. This will ensure that evaluation work has the necessary resources to achieve its goals. Figure 3-1 is a sample of a comprehensive evaluation planning worksheet that can be used to plan out an evaluation of training or performance improvement solutions.
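The budget itself is rate-times-amount arithmetic summed across categories. The short sketch below mirrors the budget table at the end of figure 3-1, with invented rates and quantities standing in for real numbers.

# (budget line, rate in dollars, amount) -- all figures are hypothetical
budget_lines = [
    ("Direct labor: evaluator's time (daily rate x days)",        600, 15),
    ("Indirect labor: participant and manager hours",              45, 120),
    ("Consultant labor: outsourced analysis (daily rate x days)", 900, 4),
    ("Materials: surveys produced and distributed",                 2, 500),
    ("Equipment: scanner rental",                                 250, 1),
]

total = 0
for item, rate, amount in budget_lines:
    line_total = rate * amount
    total += line_total
    print(f"{item}: ${line_total:,}")
print(f"Evaluation budget total: ${total:,}")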

With this plan as a guide, evaluation becomes more manageable. It is also a great communication vehicle to share with key stakeholders so they can see the proposed scope and cost of the evaluation, along with its likely benefits and the potential payback if implementation occurs as planned. To illustrate this, figure 3-2 shows the comprehensive evaluation plan for the new hire orientation and on-boarding program described earlier.

Figure 3-1. Comprehensive Evaluation Planning Tool

 

Project:
Client:
Selected solution(s):

1. Establish the evaluation baseline.

Business goals:
Business measures:
Data sources:

Existing measure   New measure

Performance goals:
Performance measures:
Data sources:

Existing measure   New measure

Learning goals:
Learning measures:
Data sources:

Existing measure   New measure

2. Create the evaluation design.

Evaluation question(s) for business goal(s):
Evaluation question(s) for performance goal(s):
Evaluation question(s) for learning goal(s):
Evaluation design model(s):
Data collection methods:

Evaluation tools

Survey
Focus group
Interview
Case study
Performance record
Action plan
Test
Quantitative analysis methods (central tendency, dispersion, association, differences):
Qualitative analysis methods (thematic, comparative, deconstruction):
Monetary analysis methods (ROI, cost-benefit):

3. Create project evaluation schedule.

Project Name:
Action Step | Target Date | Responsibility | Information Source or Comments

PLANNING

1. Determine purpose and evaluation level of this project.

2. Review program/intervention, objectives, and content.

3. Identify key stakeholder expectations.

4. Identify data sets for Levels 5, 4, and 3 measurements (hard and soft) and determine current availability of performance data for these measures.

5. Identify or develop specific objectives and baseline data for each level of follow-up evaluation.

6. Determine responsibility for Level 1 and Level 2 evaluation (course designer, facilitator/instructor, vendor, or internal) and how these data will be provided.

7. Determine methods and instruments to be used and the timing of data collection.

8. Finalize data collection plan.

9. Draft data collection instruments (interviews, focus groups, questionnaires).

10. Review options for scannable or electronic format.

11. Select strategy for isolating the effects of training.

12. Select strategy for converting data to monetary value.

13. Identify costs to include in analysis.

14. Finalize ROI analysis plan.

15. Field test questionnaire and other instruments and revise as necessary.

16. Finalize instruments in scannable/electronic format.

17. Finalize letter to accompany questionnaires and get appropriate executive signature/approval.

18. Collect and tabulate data per data collection plan and ROI analysis plan:
  • Questionnaire
  • Organization performance records
  • Other

19. Tabulate costs of intervention.

20. Analyze data, isolate, and convert.

21. Develop conclusions and recommendations.

22. Develop report(s) for target audience(s).

23. Review draft report(s) with project team.

24. Present results to target groups.

FOLLOW-UP REQUIREMENTS

25. Project critique and lessons learned.

26. Storage/filing of documentation.

27. Potential use of data reported.

4. Create evaluation budget.

Budget Category | Budget Item | Rate | Amount | Total
Direct labor | Evaluator’s time | $ daily, fully burdened | # of days | Rate × amount
Indirect labor | Participants’ and managers’ time | $ hourly, fully burdened | # of total hours | Rate × amount
Consultant labor | Consultants’ time (if outsourcing) | $ daily | # of days | Rate × amount
Materials | Purchase, printing, shipping, distribution, and collection of evaluation materials | $/item produced | # of total items | Rate × amount
Equipment | Computers, printers, scanners, flip charts, projectors, etc. | $/item | Total of all items | Add all items purchased/rented
TOTAL | | | | Sum of the above

© The ROI Institute, 2004. Used with permission.

Figure 3-2. Application of Evaluation Planning Tool

Comprehensive Evaluation Planning Tool

Project: New Hire Orientation and On-Boarding program

Client: VP, HR

Selected solution(s): Initial training, facilities tour, follow-up training 30 days later, and supervisor and peer mentoring

1. Establish the evaluation baseline.

Business goals: Increase retention of new hires

Business measures: Turnover rate (as percentage of total employees annualized)

Data sources: HR Employment Dept.

Existing measure   New measure

Performance goals: Comply with company’s HR policies

Performance measures: Initial performance appraisal after six months

Data sources: HR Employee Relations Dept.

Existing measure   New measure

Learning goals: Identify company’s business model and HR policies

Learning measures: Multiple-choice test at end of class

Data sources: Learning objectives and content of class

Existing measure   New measure

2. Create the evaluation design.

Evaluation question(s) for business goal(s): Will the new on-boarding program reduce turnover to below the industry average as measured by the HR Turnover Ratio and Industry Trade Associations?

Evaluation question(s) for performance goal(s): Will the new on-boarding program help 100 percent of new hires to comply with the company’s HR policies as measured by their supervisor’s initial performance appraisal?

Evaluation question(s) for learning goal(s): Will the new on-boarding program help new hires identify key facts about the company’s business model and HR policies as measured by an end-of-class test with a 75-percent passing score?

Evaluation design model(s): Phillips’ ROI Model, quasi-experimental design with control group (recently hired employees who did not participate in the on-boarding program)

Data collection methods: survey, test, performance records

Evaluation tools

Survey: Participant reaction survey

Focus group

Interview

Case study

Performance record: Initial performance appraisal

Action plan

Test: Postclass multiple-choice knowledge check

Quantitative analysis methods (central tendency, dispersion, association, differences): Means, standard deviation, t-test, test item analysis, turnover trending

Qualitative analysis methods (thematic, comparative, deconstruction): Open-ended survey responses: thematic and comparative analysis

Monetary analysis methods (ROI, cost-benefit): ROI

Knowledge Check: Planning Evaluation

Directions: Answer the questions below about evaluation planning, based on the content of the chapter. Check your answers in the appendix.

1. Why has evaluation planning become so important to effective evaluation?

2. What is a technique to ensure alignment among stakeholder needs, program objectives, and evaluation outcomes?

3. List seven ways to collect data.

4. What are the key questions to ask when planning data collection?

5. List three ways to analyze data.

6. What are the key questions to ask in planning data analysis?

7. What are the four phases of evaluation planning?

About the Author

Donald J. Ford, PhD, CPT, is a training and performance improvement consultant specializing in instructional design and human resource management. He has worked in human resources for more than 20 years, including internal training management positions and also as president of Training Education Management LLC, a consulting firm. For his clients, he has developed custom classroom, self-study, and web-based training; conducted performance and needs analyses; facilitated groups; managed improvement projects; taught courses; and evaluated results. Ford holds a BA and an MA in history and a PhD in education, all from UCLA. He has published 35 articles and four books on topics in training, education, and management, including his latest work, Bottom-Line Training: Performance-Based Results (2005). Ford has presented at numerous conferences and has worked overseas in Asia, Latin America, and the Middle East. He speaks Spanish and Mandarin Chinese. He may be reached at [email protected] or at www.TrainingEducationManagement.com.

References

Brinkerhoff, R. (2003). Success Case Method. San Francisco: Berrett-Koehler.

Additional Reading

Ford, D. (Jan. 2004). “Evaluating Performance Improvement,” Performance Improvement. 43(1): 36–41.

Geis, G. and M. Smith. (1999). “The Function of Evaluation,” Handbook of Human Performance Technology. 2nd Ed. Stolovitch and Keeps, eds. Pfeiffer, 130–150.

Guba, E. G. and Y. Lincoln. (1992). Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. San Francisco: Jossey-Bass.

Kirkpatrick, D. (1996). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.

Phillips, J. J. (2003). Return on Investment in Training and Performance Improvement Programs, 2nd ed. Boston: Butterworth Heinemann.

Phillips, J. J., P. P. Phillips, and T. Hodges-DeTuncq. (2004). Make Training Evaluation Work. Alexandria, VA: ASTD.

Stake, R. ed. (1967). Curriculum Evaluation. Skokie, IL: Rand McNally.
