An overview of measurement
Measurement is central to the process of obtaining
data. How, and how well, the measurements in a
research project are made is critical in determining
whether the research project will be a success.
Learning
Dimensions (D) and elements (E):
- Understanding: answers questions correctly; gives an appropriate example
- Retention (recall): recalls material after some lapse of time
- Application: solves problems by applying concepts understood and recalled; integrates the material with other relevant material
Achievement motivation (concept C, dimensions D1–D5, elements E):
- D1 Driven by work: constantly working; persevering despite setbacks; thinks of work even at home
- D2 Unable to relax: very reluctant to take time off for anything; does not have any hobbies
- D3 Impatience with ineffectiveness: swears under one's breath when even small mistakes occur; does not like to work with slow or inefficient people
- D4 Seeks moderate challenge: opts to do a challenging rather than a routine job; opts to take moderate, rather than overwhelming, challenges
- D5 Seeks feedback: asks for feedback on how the job has been done; is impatient for immediate feedback
Operationalization
• Conceptualization is the refinement and
specification of abstract concepts.
• Operationalization is the development of
specific research procedures that will result in
empirical observations representing those
concepts in the real world.
1. Conceptual Definition
Concept - A generalized idea about a class of
objects, attributes, occurrences, or processes.
Examples: gender, age, education, brand
loyalty, satisfaction, attitude, market
orientation
Note: All statistics appropriate for lower-order scales (nominal being lowest) are appropriate for
higher-order scales (ratio being the highest)
Likert Scale
Strongly Disagree Disagree Neutral Agree Strongly Agree
1 2 3 4 5
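Once response labels are mapped to the numeric codes above, item scores can be tabulated directly. A minimal Python sketch (the label-to-code mapping follows the scale shown; the three-item answers are hypothetical):

```python
# Mapping of Likert response labels to numeric codes (1-5), per the scale above.
LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# One respondent's answers to a hypothetical three-item scale.
answers = ["Agree", "Strongly Agree", "Neutral"]
scores = [LIKERT[a] for a in answers]
print(scores, sum(scores))  # prints: [4, 5, 3] 12
```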
1. Reliability
2. Validity
3. Contextual sensitivity
RELIABILITY
Reliability
- Internal consistency
  - Splitting halves
  - Equivalent forms
Assessing Stability (Repeatability)
• Stability: the extent to which results obtained
with the measure can be reproduced.
1. Test-Retest Method
• Administering the same scale or measure to the same
respondents at two separate points in time to test for
stability.
2. Test-Retest Reliability Problems
• The pre-measure, or first measure, may sensitize the
respondents and subsequently influence the results of the
second measure.
• Time effects that produce changes in attitude or other
maturation of the subjects.
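In practice, test-retest stability is usually expressed as the correlation between scores from the two administrations. A minimal sketch, assuming hypothetical total scores for six respondents measured at two points in time (the data and the plain-Python Pearson correlation are illustrative, not from the source):

```python
# Hypothetical total scale scores for the same six respondents
# at two separate points in time.
time1 = [22, 10, 17, 24, 8, 21]
time2 = [21, 11, 16, 24, 9, 20]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A coefficient near 1 indicates stable (repeatable) measurement.
print(round(pearson(time1, time2), 3))  # prints: 0.994
```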
Assessing Internal Consistency
• Internal Consistency: the degree of homogeneity
among the items in a scale or measure.
1. Split-half Method
• Assessing internal consistency by checking the results of one-
half of a set of scaled items against the results from the other
half.
• Coefficient alpha (α)
– The most commonly applied estimate of a multiple item
scale’s reliability.
– Represents the average of all possible split-half reliabilities
for a construct.
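Coefficient alpha can be computed directly from an item-response matrix using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch with hypothetical data (six respondents, five Likert items; the numbers are illustrative only):

```python
# Hypothetical responses: rows = respondents, columns = five Likert items.
data = [
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 1],
    [3, 4, 3, 3, 4],
    [5, 5, 4, 5, 5],
    [1, 2, 2, 1, 2],
    [4, 4, 5, 4, 4],
]

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                              # number of items
    item_vars = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(r) for r in rows])  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(round(cronbach_alpha(data), 3))  # prints: 0.966
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency.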
2. Equivalent forms
• Assessing internal consistency by using two scales designed to
be as equivalent as possible.
VALIDITY
• The accuracy of a measure or the extent to which
a score truthfully represents a concept.
• The ability of a measure (scale) to measure what
it is intended to measure.
• Establishing validity involves answering the following questions:
– Is there a consensus that the scale measures what it is
supposed to measure?
– Does the measure correlate with other measures of
the same concept?
– Does the behavior expected from the measure predict
actual observed behavior?
Validity
- Concurrent
- Predictive
ASSESSING VALIDITY
1. Face or content validity: The subjective agreement
among professionals that a scale logically appears to
measure what it is intended to measure.
2. Criterion Validity: the degree of correlation of a
measure with other standard measures of the same
construct.
• Concurrent Validity: the new measure/scale is taken at
same time as criterion measure.
• Predictive Validity: new measure is able to predict a future
event / measure (the criterion measure).
3. Construct Validity: degree to which a measure/scale
confirms a network of related hypotheses generated
from theory based on the concepts.
• Convergent Validity.
• Discriminant Validity.
Relationship Between Reliability & Validity
NOTE:
Content validity is rarely represented by a numerical figure because it is a
logical process of comparing the components of a variable to the items of a
measure.
THE PROCESS OF ESTIMATING CONCURRENT VALIDITY
Planning stage:
- Exploratory research
- Formulation of hypotheses
- Information required
- Population of relevance
- Target group
- Method of data collection

Design stage:
- Order of topics
- Wording and instructions
- Type of question
- Layout
- Scales
- Probes and prompts

Pilot stage:
- Pilot testing
- Is the design efficient?
- Coding
- Time and cost

Final questionnaire
DESIGNING QUESTIONNAIRES OR INTERVIEW SCHEDULES
• Will the item yield data in the form required by the hypotheses or research
questions and the operational definitions?
• Will the item yield data at the level of measurement required for the selected
statistical analysis?
• Does the item avoid “leading” the respondent to a specific response?
• Is the item unbiased?
• Will most respondents have sufficient knowledge to answer the item?
• Will most respondents be willing to answer the item?
• Will most respondents answer the item truthfully?