VARIABLES: OPERATIONAL DEFINITION AND SCALES
PROBLEM STATEMENT
- Exploration
- Description
- Hypothesis testing

Unit of analysis (population to be studied): individuals, dyads, groups, organizations, machines, etc.

Types of investigation: establishing causal relationships, correlations, or group differences.

Sampling design: probability/nonprobability sampling; sample size (n).

MEASUREMENT
- Measurement and measures: operational definition, items (measures), scaling, categorizing, coding.
- Extent of researcher interference:
  - Minimal: studying events as they normally occur
  - Moderate: a minimum amount of interference
  - Maximal: a high degree of control and artificial settings
- Time horizon: one-shot (cross-sectional) or multishot (longitudinal) studies.
- Study setting: contrived or noncontrived.
- Data-collection methods: observation, interview, questionnaire, physical measurement, unobtrusive methods.

DATA ANALYSIS
1. Feel for the data
2. Goodness of the data
3. Hypothesis testing
OPERATIONAL DEFINITION
An operational definition reduces an abstract concept into measurable dimensions and elements. For example, if thirst is the concept, then drinking plenty of fluid is the dimension, and measuring the quantity of fluid that people drink to quench their thirst is the element.
Measurement assigns symbols to sample elements according to mapping rules, based on empirical observations. Two example mapping rules for characteristics of attendees:
- Gender: assign M if male, F if female. Symbols: (M, F)
- Styling characteristics: assign 5 if very desirable, 4 if desirable, 3 if neither, 2 if undesirable, 1 if very undesirable. Symbols: (1 through 5)
METHODS OF SCALING
A scale is a tool or mechanism by which individuals are distinguished as to how they differ from one another on the variables of interest to our study.
There are four basic types of scales: nominal, ordinal, interval, and ratio.
Nominal Scale
The lowest measurement level you can use, from a statistical point of view, is the nominal scale. As the name implies, it simply places data into categories, without any order or structure. The numbers serve only as labels, for example:
SEX
1. Male
2. Female
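Because nominal data carry no order or magnitude, the only meaningful statistics are frequencies and the mode. A minimal sketch in Python (the category labels are illustrative):

```python
from collections import Counter

# Nominal data: category labels only, with no order or magnitude
responses = ["Male", "Female", "Female", "Male", "Female"]

counts = Counter(responses)          # frequency of each category
mode = counts.most_common(1)[0][0]   # the modal (most frequent) category

print(counts)  # Counter({'Female': 3, 'Male': 2})
print(mode)    # Female
```

Computing a mean of nominal codes (e.g. 1 = Male, 2 = Female) would be meaningless, since the numbers are mere labels.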
Ordinal Scale
The use of an ordinal scale implies a statement of greater than or less than (an equality statement is also acceptable) without stating how much greater or less, for example, ranking television brands by preference: Onida, Samsung, LG, Sony, Sharp.
The median, the mode, and rank-order correlation tests can be used with ordinal data.
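For ordinal data, the median and a rank-order correlation such as Spearman's rho are the appropriate statistics. A sketch with hypothetical brand rankings from two respondents (1 = most preferred):

```python
from statistics import median

# Hypothetical preference ranks for five TV brands from two respondents
brands = ["Onida", "Samsung", "LG", "Sony", "Sharp"]
ranks_a = [5, 2, 3, 1, 4]   # respondent A
ranks_b = [4, 1, 3, 2, 5]   # respondent B

def spearman_rho(x, y):
    """Spearman's rank-order correlation for rankings with no ties."""
    n = len(x)
    d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(brands[ranks_a.index(1)])        # respondent A's top choice: Sony
print(median(ranks_a))                 # 3
print(spearman_rho(ranks_a, ranks_b))  # 0.8 -> strong agreement in rankings
```

The rho of 0.8 says the two respondents rank the brands similarly, without claiming anything about how far apart the preferences are.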
INTERVAL SCALE
An interval scale measures the distance between any two points on the scale. This helps us to compute the means and the standard deviations of the responses on the variables. In other words, the interval scale not only groups individuals, it also measures the magnitude of the differences in the preferences among them.
It is a more powerful scale than the nominal and ordinal scales, and has for its measure of central tendency the arithmetic mean. Its measures of dispersion are the range, the standard deviation, and the variance.
Example: indicate the extent to which you agree with the following statements as they relate to your job, by circling the appropriate number against each, using the scale given below.

Strongly Disagree = 1, Disagree = 2, Neither Agree Nor Disagree = 3, Agree = 4, Strongly Agree = 5

The following opportunities offered by the job are very important to me:
a. …
b. …
c. …
d. Serving others.
e. Working independently.
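With interval data such as these 1-to-5 ratings, the arithmetic mean, range, variance, and standard deviation are all meaningful. A sketch with made-up responses to one item:

```python
from statistics import mean, stdev, variance

# Hypothetical ratings of one item by seven respondents (1-5 interval scale)
ratings = [4, 5, 3, 4, 2, 5, 4]

print(mean(ratings))                # arithmetic mean
print(max(ratings) - min(ratings))  # range
print(variance(ratings))            # sample variance
print(stdev(ratings))               # sample standard deviation
```

These statistics are valid because equal distances on the scale are treated as equal, even though the scale's zero point is arbitrary.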
RATIO SCALE
The ratio scale overcomes the disadvantage of the arbitrary origin point of the interval scale, in that it has an absolute zero point, which is a meaningful measurement point. Thus the ratio scale not only measures the magnitude of the differences between points on the scale but also taps the proportions in the differences. It is the most powerful of the four scales because it has a unique zero origin (not an arbitrary origin) and subsumes all the properties of the other three scales.
The measure of central tendency of the ratio scale could be either the arithmetic or the geometric mean, and the measure of dispersion could be either the standard deviation, or the variance, or the coefficient of variation.
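Because a ratio scale has a true zero, ratios, the geometric mean, and the coefficient of variation all become meaningful. A sketch using hypothetical body weights in kilograms:

```python
from statistics import mean, stdev, geometric_mean

# Hypothetical body weights (kg): a ratio scale with a true zero point
weights = [50, 60, 72, 80, 90]

am = mean(weights)            # arithmetic mean
gm = geometric_mean(weights)  # geometric mean: only valid on ratio data
cv = stdev(weights) / am      # coefficient of variation: relative dispersion

print(round(am, 2), round(gm, 2), round(cv, 3))
```

A statement like "90 kg is 1.8 times 50 kg" is meaningful here, whereas on an interval scale (say, Celsius temperature) such a ratio would not be.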
TYPES OF SCALES
There are two main categories of attitudinal scales:
1. Rating scales: rating scales have several response categories and are used to elicit responses with regard to the object, event, or person studied.
2. Ranking scales: ranking scales make comparisons between or among objects, events, or persons and elicit the preferred choices and rankings among them.
Rating Scales
- Simple Attitude Scale
- Likert Scale
- Semantic Differential Scale
- Numerical/Multiple Rating List Scale
- Stapel Scale
- Constant-Sum Scale
- Graphic Rating Scale
The simple category scale (also called a dichotomous scale) offers two mutually exclusive response choices, such as Yes/No. A checklist, by contrast, allows several responses, for example:
Check any of the sources you consulted when designing your new home:
- Online
- Magazines
- Designer
- Architect
- Others
Likert Scale
The Likert scale, developed by Rensis Likert (pronounced Lick-ert), is the most frequently used variation of the summated scale. Summated rating scales consist of statements that express either a favourable or an unfavourable attitude towards the object of interest.
Typically, each scale item has five categories, with scale values ranging from -2 to +2 and 0 as the neutral response.
Response categories: Strongly Agree (+2), Agree (+1), Neutral (0), Disagree (-1), Strongly Disagree (-2).
Example items:
- Quality of the food
- Cleanliness of the hostel
- Amenities provided by the management
- Training time intervals
- Satisfaction with the present appraisal system
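A summated (Likert) score is obtained by mapping each response category to its scale value and summing across items. A minimal sketch with one respondent's hypothetical answers to the five items above:

```python
# Scale values for each response category
SCALE = {
    "Strongly Agree": 2,
    "Agree": 1,
    "Neutral": 0,
    "Disagree": -1,
    "Strongly Disagree": -2,
}

# One respondent's hypothetical answers to the five items
answers = {
    "Quality of the food": "Agree",
    "Cleanliness of the hostel": "Strongly Agree",
    "Amenities provided by the management": "Neutral",
    "Training time intervals": "Disagree",
    "Satisfaction with the present appraisal system": "Agree",
}

summated_score = sum(SCALE[a] for a in answers.values())
print(summated_score)  # 1 + 2 + 0 - 1 + 1 = 3
```

The summated score (here +3 out of a possible range of -10 to +10) expresses the respondent's overall favourable or unfavourable attitude.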
Semantic Differential Scales
Semantic differential scales ask respondents to rate an object on a set of bipolar adjective pairs, each on a seven-point continuum (Extremely, Quite, Slightly, Neither, Slightly, Quite, Extremely), for example:
Good / Bad
Important / Unimportant
High / Low
Strong / Weak
Active / Passive
Example: respondents rate investment instruments (savings account, loan, certificate of deposit, corporate common stocks, precious metals) on a continuum from Extremely Safe to Extremely Unsafe.
Ranking Scale
In ranking scales, the participants directly compare two or more objects and make choices among them. Common ranking scales are:
- Paired comparison
- Forced choice
- Comparative scale

Comparative Scale
The comparative scale provides a benchmark or a point of reference to assess attitudes toward the current object, event, or situation under study. Example response anchors: About the Same (2), Less Useful (4).
GOODNESS OF DATA
It is important to make sure that the instrument we develop to measure a particular concept is indeed accurately measuring the variable, and that, in fact, we are actually measuring the concept we set out to measure. This ensures that in operationally defining perceptual and attitudinal variables we have not overlooked some important dimensions and elements, or included some irrelevant ones.
Item analysis
Item analysis is carried out to see whether the items in the instrument belong there or not. Each item is examined for its ability to discriminate between those subjects whose total scores are high and those with low scores. Thereafter, tests for the reliability of the instrument are carried out and the validity of the measure is established.
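One simple form of item analysis compares each item's mean score in the high-total-score group with its mean in the low-total-score group; an item that does not discriminate between the two groups is a candidate for removal. A sketch under that assumption (the response data are made up):

```python
from statistics import mean

# Each row: one respondent's scores on four items (1-5 scale)
data = [
    [5, 4, 3, 5],
    [4, 5, 3, 4],
    [2, 1, 3, 2],
    [1, 2, 3, 1],
]

totals = [sum(row) for row in data]
cutoff = sorted(totals)[len(totals) // 2]  # median split on total score
high = [row for row, t in zip(data, totals) if t >= cutoff]
low = [row for row, t in zip(data, totals) if t < cutoff]

# Discrimination index: mean item score of high scorers minus low scorers
discrimination = []
for item in range(4):
    d = mean(r[item] for r in high) - mean(r[item] for r in low)
    discrimination.append(d)
    print(f"item {item + 1}: discrimination = {d:.2f}")
```

Here item 3 has a discrimination of 0.00 (everyone answers it the same way regardless of total score), flagging it as a weak item.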
Goodness of data
- Reliability (accuracy in measurement)
  - Stability: test-retest reliability; parallel-form reliability
  - Consistency: interitem consistency reliability; split-half reliability
- Validity (we are measuring the right thing)
  - Logical validity (content): face validity
  - Criterion-related validity: predictive; concurrent
  - Congruent validity (construct): convergent; discriminant
1. Content validity.
2. Criterion-related validity.
3. Construct validity
CONTENT VALIDITY
It ensures that the measure includes an adequate and representative set of items that tap the concept. The more the scale items represent the domain or universe of the concept being measured, the greater the content validity.
Face validity is considered by some as a basic and very minimum index of content validity. Face validity indicates that the items that are intended to measure a concept do, on the face of it, look like they measure the concept.
CRITERION-RELATED VALIDITY
It is established when the measure differentiates individuals on a criterion it is expected to predict, e.g. high scores on a need-for-achievement test predict competitive behavior in children (ring-toss game).
CONSTRUCT VALIDITY
It testifies to how well the results obtained from the use of the measure fit the theories around which the test is designed. This is assessed through convergent and discriminant validity.
By demonstrating strong convergent validity for each of two different constructs and then showing discriminant validity between the two constructs, you obtain strong construct validity for both.
Example: two constructs, aggressive behavior and active behavior, are each measured by teachers' ratings and by experimenters' observations. High convergent validity: the two measures of the same construct yield related scores. High discriminant (divergent) validity: measures of the two different constructs yield unrelated scores.
TYPES OF VALIDITY
- Content validity: judgmental
- Face validity: judgmental
- Criterion-related validity
  - Concurrent validity: correlation
  - Predictive validity: correlation
- Construct validity
  - Convergent validity: correlation of the proposed test with an established one; convergent-discriminant techniques; factor analysis
  - Discriminant validity: multitrait-multimethod analysis
RELIABILITY
The reliability of a measure indicates the extent to which it is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument. In other words, the reliability of a measure is an indication of the stability and consistency with which the instrument measures the concept, and helps to assess the goodness of a measure.
TEST-RETEST RELIABILITY
The reliability coefficient obtained with a repetition of the same measure
on a second occasion is called test-retest reliability. That is, when a
questionnaire containing some items that are supposed to measure a
concept is administered to a set of respondents now, and again to the same
respondents, say several weeks to 6 months later, the correlation between
the scores obtained at the two different times from one and the same set of
respondents is called the test-retest reliability.
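Test-retest reliability is simply the correlation between the two sets of scores. A sketch computing the Pearson correlation by hand, with hypothetical scores from six respondents at the two administrations:

```python
import math

# Hypothetical scores of six respondents at two points in time
time1 = [12, 15, 9, 20, 17, 11]
time2 = [13, 14, 10, 19, 18, 10]

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(time1, time2)
print(round(r, 3))  # close to 1 -> the measure is stable over time
```

A coefficient near 1 indicates a stable measure; a low coefficient would suggest the instrument, or the concept itself, is unstable across time.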
PARALLEL-FORM RELIABILITY
When responses on two comparable sets of measures tapping the same construct are highly correlated, we have parallel-form reliability.
SPLIT-HALF RELIABILITY
Split-half reliability reflects the correlation between two halves of an instrument. The estimates will vary depending on how the items in the measure are split into two halves.
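Split-half reliability correlates scores on one half of the items with scores on the other half (an odd-even split is common), then adjusts the result upward with the Spearman-Brown formula to estimate the reliability of the full-length instrument. A sketch with hypothetical item scores:

```python
import math

# Hypothetical scores of five respondents on six items (rows = respondents)
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 2],
    [5, 5, 4, 5, 5, 5],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
]

# Odd-even split: total of even-indexed items vs total of odd-indexed items
half_a = [sum(row[0::2]) for row in scores]
half_b = [sum(row[1::2]) for row in scores]

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r_half = pearson(half_a, half_b)
# Spearman-Brown correction for the full-length instrument
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 3), round(r_full, 3))
```

The correction is needed because the half-test correlation understates the reliability of the full instrument, which has twice as many items.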
THANK YOU FOR YOUR ATTENTION