
CHAPTER ELEVEN

Training Evaluation

© 2013 by Nelson Education Ltd.

LEARNING OUTCOMES
Define training evaluation and the main reasons
for conducting evaluations
Discuss the barriers to evaluation and the factors that affect whether an evaluation is conducted
Describe the different types of evaluations
Describe the models of training evaluation and
the relationship among them

LEARNING OUTCOMES
Describe the main variables to measure in a
training evaluation and how they are measured
Discuss the different types of designs for training
evaluation as well as their requirements, limits,
and when they should be used


INSTRUCTIONAL SYSTEMS
DESIGN MODEL


INSTRUCTIONAL SYSTEMS
DESIGN MODEL
Training evaluation is the third step of the ISD model
and consists of two parts:
The evaluation criteria (what is being measured)
Evaluation design (how it will be measured)
These concepts are covered in the next two chapters
Each has a specific and important role to play in the
effective evaluation of training and the completion of
the ISD model

TRAINING EVALUATION

A process to assess the value, or worthiness, of training programs to employees and to organizations


TRAINING EVALUATION
Not a single procedure; a continuum of techniques,
methods, and measures
Ranges from simple to elaborate procedures
The more elaborate the procedure, the more
complete the results, yet usually the more costly
(time, resources)
Select the procedure based on what makes sense and what can add value within the available resources

WHY A TRAINING
EVALUATION?
Improve managerial responsibility toward training
Assist managers in identifying what, and who,
should be trained
Determine the costs and benefits of a program
Determine if training program has achieved
expected results
Diagnose strengths and weaknesses of a program
and pinpoint needed improvements
Justify and reinforce the value of training

DO WE EVALUATE?
There has been a steady decline in conducting Level 3 and Level 4 evaluations and in determining ROI


BARRIERS TO
EVALUATION
Barriers fall into two categories:
1. Pragmatic
Requires specialized knowledge and can be
intimidating
Data collection can be costly and time-consuming
2. Political
Potential to reveal ineffectiveness of training

TYPES OF
EVALUATION
Evaluations may be distinguished from
each other with respect to:
1. The data gathered and analyzed
2. The fundamental purpose of the evaluation


TYPES OF
EVALUATION
1. The data gathered and analyzed
a. Trainee perceptions, learning, and behaviour
at the conclusion of training
b. Assessing psychological forces that operate
during training
c. Information about the work environment
Transfer climate and learning culture


TYPES OF
EVALUATION
2. The purpose of the evaluation
a. Formative: Provide data about various aspects of a
training program
b. Summative: Provide data about worthiness or
effectiveness of a training program
c. Descriptive: Provide information that describes trainees once they have completed a training program
d. Causal: Provide information to determine if training
caused the post-training behaviours


MODELS OF
EVALUATION
A. Kirkpatrick's Hierarchical Model
Oldest, best known, and most frequently used model
The four levels of training evaluation:
Level 1: Reactions
Level 2: Learning
Level 3: Behaviours
Level 4: Results
ROI is often added as a fifth level
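Note: Where ROI is assessed, it is commonly computed as net program benefits over program costs; a worked sketch with hypothetical dollar figures:

```latex
% Common ROI formula; the dollar amounts below are hypothetical
\mathrm{ROI} = \frac{\text{benefits} - \text{costs}}{\text{costs}} \times 100\%
             = \frac{\$150{,}000 - \$100{,}000}{\$100{,}000} \times 100\% = 50\%
```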

CRITIQUE OF
KIRKPATRICK'S MODEL
There is general agreement that the five levels are important outcomes to be assessed
There are some critiques:
Doubts about the model's validity
Insufficiently diagnostic
The model requires all training evaluations to rely on the same variables and outcome measures

MODELS OF
EVALUATION
B. COMA Model
A training evaluation model that involves the measurement of four types of variables:
1. Cognitive
2. Organizational environment
3. Motivation
4. Attitudes


MODELS OF
EVALUATION
The COMA model improves on Kirkpatrick's model in four ways:
1. Transforms the typical reaction measures by incorporating a greater number of measures
2. Useful for formative evaluations
3. The measures are known to be causally related to training success
4. Defines new variables with greater precision
Note: A relatively new model; it is too early to draw conclusions as to its value

MODELS OF
EVALUATION
C. Decision-Based Evaluation Model
A training evaluation model that specifies the
target, focus, and methods of evaluation


MODELS OF
EVALUATION
Decision-Based Evaluation Model
Goes further than either of the two preceding
models:
1. Identifies the target of the evaluation
(trainee change, organizational payoff, program improvement)
2. Identifies its focus (the variables measured)
3. Suggests methods
4. Is general to any evaluation goal
5. Offers flexibility: guided by the target of the evaluation

MODELS OF
EVALUATION
As with COMA, the DBE model is recent and will
need to be tested more fully
All three models require specialized knowledge to
complete the evaluation; this can limit their use in
organizations without this knowledge
Holton and colleagues' Learning Transfer System Inventory (seen in Chapter 10) provides a more generic approach
See Training Today 11.2 for more on its use for evaluation


EVALUATION
VARIABLES
Training evaluation requires that data be collected on important aspects of training
Some of these variables have been identified in the
three models of evaluation
A more complete list of variables is presented in
Table 11.1, and Table 11.2 shows sample questions
and formats for measuring each type of variable


EVALUATION
VARIABLES
A. Reactions
B. Learning
C. Behaviour
D. Motivation
E. Self-efficacy
F. Perceived/anticipated support
G. Organizational perceptions
H. Organizational results

See Table 11.2 in text


VARIABLES
A. Reactions
1. Affective reactions: Measures that assess trainees' likes and dislikes of a training program
2. Utility reactions: Measures that assess the
perceived usefulness of a training program
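Note: A minimal sketch of how such reaction measures might be scored, assuming hypothetical 5-point Likert items (the item wording and scale are illustrative, not from the text):

```python
# Score a post-training reaction questionnaire (hypothetical items).
# Ratings use a 5-point Likert scale: 1 = strongly disagree, 5 = strongly agree.

def mean_score(ratings):
    """Average a list of 1-5 ratings into a single scale score."""
    return sum(ratings) / len(ratings)

# One trainee's responses, grouped by reaction type.
affective_items = [4, 5, 4]   # e.g., "I enjoyed the training."
utility_items   = [3, 4, 3]   # e.g., "The training will help me on the job."

print(f"Affective reaction score: {mean_score(affective_items):.2f}")
print(f"Utility reaction score:   {mean_score(utility_items):.2f}")
```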


VARIABLES
B. Learning
Learning outcomes can be measured by:
1. Declarative learning: Refers to the
acquisition of facts and information, and is
by far the most frequently assessed learning
measure
2. Procedural learning: Refers to the
organization of facts and information into a
smooth behavioural sequence
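Note: Declarative learning is often scored as the proportion of correct answers on a knowledge test; a minimal sketch with hypothetical data:

```python
# Score a declarative-knowledge test (hypothetical answers):
# count how many trainee answers match the answer key.
answer_key = ["b", "d", "a", "c", "a"]
trainee    = ["b", "d", "c", "c", "a"]

n_correct = sum(given == key for given, key in zip(trainee, answer_key))
print(f"Declarative learning score: {n_correct / len(answer_key):.0%}")  # 80%
```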

VARIABLES
C. Behaviour
Behaviours can be measured using three
approaches:
1. Self-reports
2. Observations
3. Production indicators


VARIABLES
D. Motivation
Two types of motivation in the training context:
1. Motivation to learn
2. Motivation to apply the skill on the job (transfer)

E. Self-Efficacy
Beliefs that trainees have about their ability to
perform the behaviours that were taught in a
training program

VARIABLES
F. Perceived and/or Anticipated Support
Two important measures of support:
1. Perceived support: The degree to which the
trainee reports receiving support in attempts to
transfer the learned skills
2. Anticipated support: The degree to which the trainee expects to be supported in attempts to transfer the learned skills


VARIABLES
G. Organizational Perceptions
Two scales designed to measure perceptions:
1. Transfer climate: Can be assessed via a
questionnaire that identifies eight sets of
cues
2. Continuous learning culture: Can be assessed via the questionnaire presented in Trainer's Notebook 4.1 in Chapter 4 of the text

VARIABLES
G. Organizational Perceptions (cont'd)
Transfer climate cues include:
Goal cues
Social cues
Task and structural cues
Positive feedback
Negative feedback
Punishment
No feedback
Self-control

VARIABLES
H. Organizational Results
Results information includes:
1. Hard data: Results measured objectively
(e.g., number of items sold)
2. Soft data: Results assessed through
perceptions and judgments (e.g., attitudes)
3. Return on expectations: Measurement of a training program's ability to meet managerial expectations

DESIGNS IN TRAINING
EVALUATION
The manner in which data collection is organized and how the data will be analyzed
All data collection designs compare the trained
person to something


DESIGNS IN TRAINING
EVALUATION
1. Non-experimental designs: Comparison is made
to a standard and not to another group of
(untrained) people
2. Experimental designs: Trained group is compared to another group that does not receive the training; assignment is random
3. Quasi-experimental designs: Trained group is
compared to another group that does not receive
the training; assignment is not random
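Note: To make the comparison concrete, here is a minimal sketch of analyzing a pre-post design with a control group (design E below) using hypothetical scores; the simple difference-in-gains shown is one common analysis, not the only defensible one:

```python
# Pre/post scores (hypothetical) for a pre-post design with a control group.
# The training effect is estimated as the trained group's average gain
# minus the untrained group's average gain.

def mean(xs):
    return sum(xs) / len(xs)

trained_pre,   trained_post   = [52, 48, 55, 50], [70, 66, 74, 68]
untrained_pre, untrained_post = [51, 49, 54, 50], [55, 52, 57, 53]

trained_gain   = mean(trained_post) - mean(trained_pre)      # 18.25
untrained_gain = mean(untrained_post) - mean(untrained_pre)  # 3.25

print(f"Estimated training effect: {trained_gain - untrained_gain:.2f}")  # 15.00
```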

DATA COLLECTION
DESIGN


DATA COLLECTION
DESIGN

A: Single group post-only design (non-experimental)
   Post-training measure only
B: Single group pre-post design (non-experimental)
   Pre- and post-training measures


DATA COLLECTION
DESIGN
C: Time series design (non-experimental)
   Trained group; measures taken at several points before and after training
D: Single group design with control group
   Trained group compared with an untrained (control) group

DATA COLLECTION
DESIGN
E: Pre-post design with control group
   Pre and post measures for both the trained and untrained groups
F: Time series design with control group
   Repeated measures for both the trained and untrained groups

DATA COLLECTION
DESIGN

G: Internal referencing strategy
   Pre and post measures on both training-relevant and training-irrelevant items
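Note: A minimal sketch of the internal-referencing logic with hypothetical scores; training is credited only to the extent that gains on relevant items exceed gains on irrelevant items:

```python
# Internal referencing strategy (hypothetical scores): compare pre-to-post
# gains on items the training covered (relevant) against gains on similar
# items it did not cover (irrelevant).

def gain(pre, post):
    return sum(post) / len(post) - sum(pre) / len(pre)

relevant_gain   = gain([55, 50, 60], [78, 74, 82])   # trained content
irrelevant_gain = gain([54, 52, 58], [58, 55, 61])   # untrained content

print(f"Gain on relevant items:   {relevant_gain:.1f}")   # 23.0
print(f"Gain on irrelevant items: {irrelevant_gain:.1f}")  # 3.3
# A clearly larger gain on relevant items suggests the training itself,
# not retesting or the passage of time, produced the improvement.
```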


SUMMARY
Discussed the main purposes for evaluating training programs as well as the barriers
Presented, critiqued, and contrasted three models of training evaluation (Kirkpatrick, COMA, and DBE)
Recognized that the Kirkpatrick model is the most frequently used, yet has limitations
Discussed the variables required for an evaluation as well as the methods and techniques required to measure them
Presented the main types of data collection designs
Discussed factors influencing the choice of data collection design