
http://kevinforgard.wordpress.com/
http://kaiwenkevin.blogspot.com/
kevinforgard@gmail.com
In a bit of a puzzle…
Match the pieces in your group.
Kirkpatrick’s Evaluation Framework
Share your thoughts: Why are these levels important to you?
What levels are learning professionals evaluating?

Level 1 _____%

Level 2 _____%

Level 3 _____%

Level 4 _____%

What the research shows:
Source: ASTD 2010 State of the Industry Report
What is Evaluation?
Evaluation = Value
The use of research techniques to make an objective judgment on
the value or worth of an education or training program. An
evaluation helps “determine impact, to establish what’s working,
what’s not, and why” in order to influence improvements (Rossett
& Sheldon, 2001, p. 102).
When conducting an evaluation, certain things are examined; Kirkpatrick’s levels-of-evaluation model helps us define and focus on them. So, why do we evaluate?
• Examine effects (results focus) – training was accepted, it achieved its objectives, people applied information, there were accomplishments
• Check program fidelity (process focus) – training meets the client’s needs
Evaluation Approaches

Theorist(s)                  Approach
Kirkpatrick                  Four levels
Phillips                     Kirkpatrick’s four levels plus ROI
Rossett & Sheldon (2001)     Goal-based: compare objectives with outcomes to determine value
                             Goal-free: examine the intended and unintended impact of training
Constructivist approach      Useful for web-based instructional “learning objects”: help systems,
                             navigation aids, game-based learning, social-networked learning, etc.
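Phillips’s fifth level expresses training value as a return on investment. As a rough sketch of that calculation (the dollar figures below are invented for illustration and are not drawn from this handout’s sources):

    Net program benefits = program benefits - program costs
    ROI (%) = (net program benefits / program costs) x 100

    Example: monetized benefits of $150,000 against fully loaded program costs of
    $100,000 give net benefits of $50,000, so ROI = (50,000 / 100,000) x 100 = 50%.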
Value of the Kirkpatrick/Phillips Evaluation Levels

Level 5 - ROI
Level 4 - Results
Level 3 - Behavior
Level 2 - Learning
Level 1 - Reaction

[Bar chart: percentage of organizations evaluating at each level, on a 0-80% scale]

Source: ASTD 2010 State of the Industry Report


Problem Scenario
A company that conducts its business mainly through phone and internet sales has recently introduced a
training program to teach employees about product and system information. After an initial face-to-face
‘new hire training’, employees are expected to log into a training network and access screencast training
files to update their knowledge and skills. The system is somewhat innovative in that the training intranet
website works much like a social network: co-workers can ‘friend’ each other, create a profile page,
and send messages through an open forum. In addition to offering training screencasts, the system also
tracks who has accessed the modules and when; this usage data can be pulled through a report query. If an
employee is given a new product line, or new tasks are added to their job, they are asked to go through a
particular set of training modules. All the modules are accessible to everyone, so some ambitious
employees go through trainings they do not necessarily use.

After this program has been in use for a year, the company stakeholders ask the HR Training Director to
conduct an evaluation of the program. This is prompted by the fact that profits have been down for the
past two quarters and by data showing that the training website is being accessed by only 50-60% of the
workforce. The company leadership is expecting at least 90% and ideally wants 100% of employees to be
using the system as envisioned. The HR Training Director is a little confused because she has been
collecting evaluation data since the beginning and conducts regular polls on the training intranet site.
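A minimal sketch of how that access figure might be derived, assuming a hypothetical export of the intranet’s tracking data (the field names, employee IDs, and numbers below are made up for illustration and are not part of the scenario):

from datetime import datetime

# Hypothetical rows exported from the training intranet's report query:
# (employee_id, module_id, accessed_at)
access_log = [
    ("e001", "products-101", datetime(2011, 3, 2)),
    ("e001", "phone-sales-basics", datetime(2011, 4, 9)),
    ("e002", "products-101", datetime(2011, 5, 17)),
]

# Assumed full employee roster, e.g. from an HR export.
workforce = {"e001", "e002", "e003", "e004", "e005"}

# The usage metric behind a "50-60% of the workforce" style claim:
# the share of employees who have accessed at least one module.
active_employees = {employee for employee, _, _ in access_log}
access_rate = len(active_employees) / len(workforce)
print(f"Workforce access rate: {access_rate:.0%}")  # 40% in this toy data, vs. the 90% target

A query like this only measures usage; it says nothing about what was learned (Level 2), whether behavior changed on the job (Level 3), or business results (Level 4), which is what the questions below probe.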

• What type of data would you collect from the employees?
• How would you collect the data?
• What questions would you seek to answer from these data sources?
• Other than the Level One data, which is being collected, how would you collect data for the other levels?
References
The ASTD Handbook of Measuring and Evaluating Training.
http://www.astd.org/content/publications/ASTDPress/HandbookofMeasuringandEvaluatingTraining.htm

Kenneth H. Silber & Wellesley R. Foshay (2010). Handbook of Improving Performance in the Workplace, Volume One: Instructional Design and Training Delivery. See especially Chapter 16: Steven M. Ross & Gary R. Morrison, The role of evaluation in instructional design, pp. 554-576.

The Value of Evaluation: Making Training Evaluations More Effective (An ASTD Research Study) (2009).

Stephen Smith (2008). Why follow levels when you can build bridges? Training & Development, September issue.

ASTD (2010). State of the Industry Report.

Allison Rossett & Kendra Sheldon (2001). Beyond the Podium: Delivering Training and Performance to a Digital World. San Francisco, CA: Jossey-Bass.

The Program Evaluation Standards (2nd ed.). Thousand Oaks, CA: Sage.

Patricia Phillips & Jack Phillips (2008). ROI Fundamentals: Why and When to Measure ROI. San Francisco, CA: Pfeiffer.

Stephen Brown & Constance Seidner (1998). Evaluating Corporate Training: Models and Issues. Boston, MA: Kluwer Academic Publishers.

Karie Willyerd & Gene Pease (2011). How does social learning measure up? Training & Development, January.
