A few years ago a colleague sent me a research article. It stated that 90 percent of training resources are devoted to the design, development and delivery of training, yet only 15 percent of what is learned transfers to the job (Brinkerhoff, 2006). After reading it, I not only started digging up more research but also quickly realized that I didn't have a process or set of tools for evaluating whether our training program at the Lawrence Berkeley National Laboratory (LBNL) was effective.

When I talked to my colleagues, I found out I wasn't alone. All of us were evaluating whether participants valued the training and whether they actually learned, but beyond that, many of us struggled to find the time or resources to evaluate whether our training was having a positive effect on safe work performance or contributing to the success of our organizational goals.

As I began to research training evaluation, I discovered that one model dominates the field: the Kirkpatrick Model. In short, it is built around a four-step process in which each step (or level) adds precision, but also requires more time-consuming analysis and greater cost.

The following is a brief overview of each step:

Level One: Evaluating Reactions
Measures how participants value the training: whether they were engaged, and whether they believe they can apply what they learned.
Evaluation tools include end-of-course surveys that capture whether participants are satisfied with the training and whether they believe it is effective.

Level Two: Evaluating Learning
Measures whether participants actually learned from the training.
Evaluation tools include:
- Pre-tests, post-tests and quizzes
- Observation (i.e., did the person execute a particular skill effectively?)
- Successful completion of activities

Level Three: Evaluating Behavior
Measures whether training had a positive effect on job performance (transfer). This is a cost-benefit decision, because it can be resource-intensive to evaluate, requiring more time-consuming analysis. A level three evaluation may be reserved for safety skills with a high consequence of error, where you want to make sure safety skills and performance transfer to the job.
Evaluation tools include:
- Work observation
- Focus groups
- Interviews with workers and management

Level Four: Evaluating Results
Measures whether the training is achieving results. Is the training improving safety performance? Has it resulted in better quality, increased productivity, increased sales or better customer service? The challenge here is that many factors influence performance, so it is difficult to attribute gains to training alone.
Evaluation tools include:
- Measuring a reduction in the number or severity of incidents or accidents against the organization's performance (or contract) goals
- Measuring a reduction in total recordable cases (TRC)
- Measuring a reduction in the DART rate (days away, restricted or transferred)
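
If you want to put numbers behind these Level Four metrics, both TRC and DART rates use the same standard OSHA incidence-rate formula: number of cases multiplied by 200,000, divided by total employee-hours worked (200,000 hours represents 100 full-time employees working for one year). Here is a minimal Python sketch, using hypothetical figures purely for illustration:

def incidence_rate(cases: int, hours_worked: float) -> float:
    """Return an OSHA incidence rate per 100 full-time workers."""
    return cases * 200_000 / hours_worked

# Hypothetical figures: 4 recordable cases, 2 of which involved days
# away or restricted/transferred work, over 500,000 employee-hours.
trc_rate = incidence_rate(cases=4, hours_worked=500_000)   # 1.6
dart_rate = incidence_rate(cases=2, hours_worked=500_000)  # 0.8

print(f"TRC rate:  {trc_rate:.2f}")   # TRC rate:  1.60
print(f"DART rate: {dart_rate:.2f}")  # DART rate: 0.80

Tracked year over year against your organization's baseline, a falling trend in these rates is one signal, though not proof, that training is contributing to results.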
When it comes to evaluating training effectiveness at your organization, what methods do you use? Has
the Kirkpatrick Model worked for you? Which metrics do you collect? How have you evolved your training
programs based on this kind of analysis?

James Basore will share more details about his training approach in the Driving Success
Through Effective and Efficient EHS Training session at NAEM's EHS Management Forum on
Oct. 17-19 in Naples, Fla.

About James Basore


James Basore is the Training Manager for the Environment, Health and Safety Division at the Department
of Energy's Lawrence Berkeley National Laboratory. He is also a member of the University of California's
System-wide Training and Education Working Group, and the Department of Energy's Cross-complex
Learning & Training Team. Both are focused on improving the efficiency and efficacy of ES&H training
within their respective ecosystems.
