Level 1 – Reaction

As the word implies, evaluation at this level measures how those who participate in the program react to it. This level is often measured with attitude questionnaires ("smile sheets") that are passed out after most training classes. This level measures one thing: the learner's perception (reaction) of the course.
They might be asked how well they liked the instructor’s presentation techniques, how completely the topics were covered, how valuable they perceived each module of the program, or the relevance of the program content to their specific job. They might also be asked how they plan to use their new skills back on the job.
Learners are keenly aware of what they need to know to accomplish a task. If the training program fails to satisfy their needs, a determination should be made as to whether it’s the fault of the program design or delivery.
This level is not indicative of the training's return on investment, as it does not measure what new skills the learners have acquired or whether what they have learned will transfer back to their working environments. This has caused some evaluators to downplay its value. However, the interest, attention, and motivation of the participants are critical to the success of any training program. People learn better when they react positively to the learning environment.
Level 2 – Learning

This can be defined as the extent to which participants change attitudes, improve knowledge, and increase skill as a result of attending the program. It addresses the question: Did the participants learn anything? The learning evaluation requires post-testing to ascertain what skills were learned during the training. The post-testing is only valid when combined with pre-testing, so that you can differentiate between what participants already knew prior to training and what they actually learned during the training program.
Measuring the learning that takes place in a training program is important in order to validate the learning objectives. Evaluating the learning that has taken place typically focuses on such questions as:
- What knowledge was acquired?
- What skills were developed or enhanced?
- What attitudes were changed?
Learning measurements can be implemented throughout the training program, using a variety of evaluation techniques. Measurements at Level 2 might indicate that a program's instructional methods are effective or ineffective, but they will not show whether the newly acquired skills will be used back in the working environment.
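The pre-/post-test comparison described above comes down to simple arithmetic: subtract each learner's pre-test score from their post-test score and average the gains. A minimal sketch in Python, where the names, scores, and the `learning_gain` helper are all illustrative assumptions, not from the article:

```python
# Hypothetical pre- and post-test scores (0-100) for a small class.
pre_scores = {"Alice": 55, "Bob": 70, "Carol": 60}
post_scores = {"Alice": 85, "Bob": 75, "Carol": 90}

def learning_gain(pre, post):
    """Average per-learner point gain from pre-test to post-test."""
    gains = [post[name] - pre[name] for name in pre]
    return sum(gains) / len(gains)

# A positive average gain suggests learning occurred during the program,
# beyond what participants already knew coming in.
print(round(learning_gain(pre_scores, post_scores), 1))  # → 21.7
```

Without the pre-test baseline, a high post-test score could simply reflect prior knowledge, which is exactly the distinction the paragraph above draws.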
Level 3 – Behavior

Behavior is defined as the extent to which a change in behavior has occurred because the participants attended the training program. This evaluation involves testing the students' capabilities to perform learned skills back on the job. Level 3 evaluations can be performed formally (testing) or informally (observation). They determine whether a behavior change has occurred by answering the question, "Do people use their newly acquired skills, attitudes, or knowledge on the job?"
It is important to measure behavior because the primary purpose of training is to improve results by changing behavior. New learning is of no use to an organization unless the participants actually use the new skills, attitudes, or knowledge in their work activities. Since Level 3 measurements must take place after the learners have returned to their jobs, they will typically involve someone closely involved with the learner, such as a supervisor.
Although it takes a greater effort to collect this data than it does to collect data during training, its value is important to the training department and organization. Behavior data provides insight into the transfer of learning from the classroom to the work environment and the barriers encountered when attempting to implement the new techniques learned in the program.
Level 4 – Results

This is defined as the final results that occurred because the participants attended the program: the ability to apply learned skills to new and unfamiliar situations. It measures the training's effectiveness by asking, "What impact has the training achieved?" This broad category is concerned with the impact of the program on the wider community (results). It addresses the key question: Is it working and yielding value for the organization? These impacts can include monetary savings, efficiency gains, morale, teamwork, and so on. Here we expand our thinking beyond the impact on the learners who participated in the training program and begin to ask what happens to the organization as a result of the training efforts.
While it is often difficult to isolate the results of a training program, it is usually possible to link training contributions to organizational improvements. Collecting, organizing, and analyzing Level 4 information can be more difficult, time-consuming, and costly than the other three levels, but the results are often worthwhile when viewed in the full context of their value to the organization.
As we move from Level 1 to Level 4, the evaluation process becomes more difficult and time-consuming, but it provides information of increasingly significant value. Level 1 is perhaps the most frequently used measurement because it is the easiest to perform; however, it provides the least valuable data. Measuring results that affect the organization is more difficult and is conducted less frequently, yet it yields the most valuable information: whether or not the organization is receiving a return on its training investment. Each level should be used to provide a cross-section of data for measuring the training program.
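The return-on-investment question raised above is commonly expressed as net program benefit divided by program cost. A hedged sketch with purely illustrative figures, since the article itself gives no formula or numbers:

```python
def training_roi(benefits, costs):
    """ROI as a percentage: (program benefits - program costs) / costs * 100."""
    return (benefits - costs) / costs * 100

# Illustrative only: $150,000 in measured benefits against $100,000 in costs.
print(training_roi(150_000, 100_000))  # → 50.0, i.e. a 50% return
```

The hard part, as the text notes, is isolating the `benefits` figure: attributing organizational improvements to the training rather than to other factors.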
1. Kirkpatrick, Donald (1994). Evaluating Training Programs. San Francisco, CA: Berrett-Koehler Publishers, Inc. (NOTE: Donald L. Kirkpatrick is an HRD Hall of Fame member.)
Other Training Evaluation Websites:
- A Model for Program Evaluation, Barry Sweeny, 1998
- The Kirkpatrick Model of Training Evaluation
- CDC’s Training Evaluation Toolkit
- A Guide to Strategically Planning Training and Measuring Results – A document prepared by the Office of Workforce Relations at the U.S. Office of Personnel Management (OPM). http://www.opm.gov/hrd/lead/spguide.pdf
- Evaluating Training – A close-to-the-original version of an article prepared for a 1992 ASTD Tool Kit edited by Karen Medsker and Don Roberts. The original version was published in three separate pieces. This one is more or less intact. http://home.att.net/~nickols/evaluate.htm
- Evaluating Training and Results – A web article offered by the Free Management Library. http://www.mapnp.org/library/trng_dev/evaluate/evaluate.htm
- Knowledge Transfer Center – A web site offered by Westinghouse Government Environmental Services Company, Department of Energy, Carlsbad Area Office. http://www.t2ed.com/
- Measuring Training Effectiveness – A web article offered by the National Centre for Vocational Education Research, Ltd., “Australia’s principal research and evaluation organisation for the vocational education and training (VET) sector in Australia.” http://www.ncver.edu.au/articles/atr24web/effect.htm
- Measuring Training’s Effectiveness/Impact – A web article offered by Zigon Performance Group featuring an article by Dr. John Sullivan, Head and Professor of Human Resource Management, College of Business at San Francisco State University. http://www.zigonperf.com/resources/pmnews/sullivan_meas_trng_eff.html
- Measuring Training Results – A web page offered by Fox Performance Training. http://www.foxperformance.com/training4.html
- Student Evaluation: A Teacher Handbook – A document produced by Saskatchewan Education. http://www.sasked.gov.sk.ca/docs/policy/studeval/index.html
- The Ten Rules for Perfect Evaluations (On Choosing Between Training Excellence and Great Evaluations) – A web article by Jay McNaught, a training analyst with Public Service Company-Indiana, published originally by Data Training Magazine in May of 1991. http://www.karinrex.com/tc_evals.html
- Why Most Training Fails – A web article originally published in Jim Clemmer’s column in The Globe & Mail. http://www.clemmer-group.com/excerpts/why_most.shtml