
Agenda

RBUS2900 Business Research Methods
Lecture 8. Part A: Measurement. Part B: Correlation Analysis

What are the steps in developing measurement?
How can we assess what is good measurement?
How do we interpret a correlation analysis?

Measurement
Measurement is the process of assigning numbers or labels to objects, persons, states, or events in accordance with specific rules to represent quantities or qualities of attributes.

Measurement
Valid and reliable measurement is the foundation of all good research. Measurement is a fundamental prerequisite for testing research hypotheses, and indirectly testing theories. Researchers predict from a hypothesis that certain conditions should exist (e.g., there is a positive relationship between X and Y). Taking measures on the variables allows a hypothesis test.

Measurement
1. Identification of the concept of interest
2. Developing a construct
3. Providing the conceptual and operational definitions
4. Developing a measurement scale
5. Evaluating the reliability, validity and sensitivity of the measures

Measurement
1. Identification of the concept of interest

Measurement begins by identifying a concept of interest for study. A concept is an abstract idea generalized from particular facts.

Measurement
2. Developing a construct

Constructs are specific types of concepts that exist at higher levels of abstraction. Concepts are typically measured with multiple variables.

Measurement
3. Define the concept: conceptual definition

A conceptual definition defines a concept with other related concepts and constructs, establishing boundaries for the construct under study; it states the central idea or concept under study. Review the literature for appropriate conceptual definitions.

Measurement
3. Define the concept: operational definition

How will you measure the concept? An operational definition defines which observable characteristics will be measured and the process for assigning a value to the concept. An operational definition serves as a bridge between a theoretical concept and real-world events or factors.


Measurement - Examples
Patients were asked: How would you rate . . . (1) the attention the provider paid to you, (2) this provider's thoroughness and competence, and (3) your opportunity to ask questions of this provider (1 = very poor to 5 = excellent). The three items were combined to create a composite patient satisfaction variable.
Hekman et al. (2010, p. 244)

Measurement - Examples
Store loyalty can be operationalised as the average of respondents' ratings on the three items included in the questionnaire for measuring the construct:
Store X would be my first choice.
Store X would be my preferred choice.
I would not buy an item from any other store, if the same item is available from store X. (Reverse coded)
(1 = strongly disagree to 7 = strongly agree)
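As an illustration only (not from the lecture), here is a minimal Python/pandas sketch with hypothetical responses and column names showing how such a composite could be computed; the reverse-coded item is flipped on the 1-7 scale before the three items are averaged.

```python
# Hypothetical data: three loyalty items on a 1-7 scale, item 3 reverse coded.
import pandas as pd

df = pd.DataFrame({
    "loyalty_1": [7, 2, 5],        # "Store X would be my first choice"
    "loyalty_2": [6, 3, 5],        # "Store X would be my preferred choice"
    "loyalty_3_rev": [1, 6, 3],    # reverse-coded item as collected
})

# Flip the reverse-coded item (on a 1-7 scale, 1 -> 7 and 7 -> 1).
df["loyalty_3"] = 8 - df["loyalty_3_rev"]

# Store loyalty = average of the three items.
df["store_loyalty"] = df[["loyalty_1", "loyalty_2", "loyalty_3"]].mean(axis=1)
print(df["store_loyalty"])
```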

Measurement - Examples
Firm size can be operationalised as the number of employees working at the firm. Strategic focus can be operationalised as the number of business segments in which a firm operates. Repurchase behaviour can be operationalised as the number of repurchase visits and the amount of purchase spending during the last year.

Measurement
4. Developing a measurement scale

Researchers can either select an existing scale or design a new scale, and then rely on the scale as an indicator of the measured concept. The empirical measurement should link to the theoretical concept through a set of rules. Common formats include Likert-type scales, semantic differential scales and best-worst scaling.

Measurement
What is scaling? Scaling is defined as a procedure for assigning numbers to the measurement of a concept. More precisely, researchers assign numbers to indicators of the concept under study. The purpose of scaling is to project the quantitative information from the scale onto the underlying measure.

Measurement
What is scaling? A widely accepted scale classification system is based on the following typology:
Non-metric scales: nominal scales, ordinal scales
Metric scales: interval scales, ratio scales


Measurement
Nominal Scales
Useful for identification and classification
X Relative position cannot be indicated
X Magnitude of differences cannot be compared
X Ratios of scale values cannot be compared; no fixed zero point (zero point is arbitrary)

Measurement
Ordinal Scales
Useful for identification and classification
Relative position can be indicated
X Magnitude of differences cannot be compared
X Ratios of scale values cannot be compared; no fixed zero point (zero point is arbitrary)

Measurement
Interval Scales
Useful for identification and classification
Relative position can be indicated
Magnitude of differences can be compared
X Ratios of scale values cannot be compared; no fixed zero point (zero point is arbitrary)

Measurement
Ratio Scales
Useful for identification and classification
Relative position can be indicated
Magnitude of differences can be compared
Ratios of scale values can be compared; absolute zero point
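As a rough illustration (hypothetical variables, not from the lecture), the four scale types might appear in a dataset as sketched below; only the metric (interval and ratio) columns support means and other arithmetic summaries.

```python
# Hypothetical survey columns illustrating the four scale types.
import pandas as pd

df = pd.DataFrame({
    "store_id": ["A", "B", "C"],            # nominal: labels for identification only
    "service_rank": [2, 1, 3],              # ordinal: order, but gaps are not comparable
    "satisfaction_1_7": [5, 6, 3],          # interval (as commonly treated): equal gaps, arbitrary zero
    "annual_spend": [1200.0, 450.5, 0.0],   # ratio: true zero, ratios are meaningful
})

# Means are only meaningful for the metric (interval/ratio) columns.
print(df[["satisfaction_1_7", "annual_spend"]].mean())
```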

Measurement
5. Assess validity, reliability and sensitivity

Measurement
(i) Validity refers to the extent to which a test measures what we actually wish to measure. (ii) Reliability has to do with the consistency of a measurement procedure. (iii) Sensitivity has to do with the ability of the measurement instrument to measure variability.

Validity and reliability are characteristics of sound measurement.


[Diagram: good measurement rests on reliability, validity and sensitivity]

Measurement
Validity
Validity is the extent to which differences found with a measuring tool reflect true differences among those being tested. The difficulty in meeting this test is that usually one does not know what the true differences are. For example, is annual income a valid proxy for disposable income? Is height a valid proxy for store size?

Measurement
Assessing Validity
Applications of validity commonly considered in measurement in the social sciences include:
Face validity
Content validity
Convergent validity
Discriminant validity
Criterion validity
Construct validity

Measurement
Face Validity
A scale's content logically appears to reflect what it was intended to measure. Face validity is easier to establish for direct questions (e.g., number of children) than for measures of abstract concepts (e.g., brand loyalty).

Measurement

Content Validity
The content validity of a measuring instrument is the extent to which it provides adequate coverage of the topic under study. If the measure(s) contain a representative sample of the universe of subject matter of interest, content validity is achieved. This approach is based on the notion of domain sampling. Use the literature and/or expert judgments to check content validity.

Measurement
Convergent Validity
Convergent validity is the extent to which a measure correlates positively with other measures of the same construct (i.e., a set of items group together).

Measurement
Discriminant Validity Discriminant validity is the extent to which a measure does not correlate with other measures from which it is supposed to differ (i.e., groups of items are different).


Measurement
Criterion Validity The ability of a measure to correlate with other standard measures of similar constructs or established criteria. This form of validity can reflect the success of measures used for some empirical estimation purpose (i.e., for prediction). A researcher or manager may want to predict some outcome (predictive validity) or estimate the existence of some current behaviour or condition (concurrent validity).

Measurement
Construct Validity Exists when a measure reliably measures and truthfully represents a unique concept. Construct validity is demonstrated through aggregate evidence in all the areas of validity we have discussed.


Measurement
Reliability A measure is reliable to the extent that it provides consistent results. Reliability is a contributor to validity and is a necessary but not sufficient condition for validity. Reliability is concerned with estimates of the degree to which a measurement is free of random error.

Measurement
Assessing Reliability
Reliable instruments are robust; that is, they work well at different times and under different conditions. Frequently used perspectives on reliability include:
Internal consistency
Test-retest reliability


Measurement
Internal Consistency
Represents a measure's homogeneity, or the extent to which each indicator of a concept converges on a common meaning. Internal consistency is assessed by:
Split-half method
Coefficient alpha

Measurement

Split-Half Method
A method for assessing internal consistency by checking the results of one half of the set of scaled items against the other half. The two scale halves should produce similar scores and be highly correlated.
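A minimal Python sketch of the idea, using hypothetical ratings: split the items into two halves, correlate the half scores, and (as an assumed refinement not covered above) apply the Spearman-Brown correction to estimate full-scale reliability.

```python
# Hypothetical data: rows = respondents, columns = six items of one scale (1-7).
import numpy as np

items = np.array([
    [5, 6, 5, 6, 5, 6],
    [3, 2, 3, 2, 2, 3],
    [7, 7, 6, 7, 7, 6],
    [4, 4, 5, 4, 4, 5],
])

half_a = items[:, ::2].mean(axis=1)    # odd-numbered items
half_b = items[:, 1::2].mean(axis=1)   # even-numbered items

r_halves = np.corrcoef(half_a, half_b)[0, 1]   # correlation between the two halves
split_half = 2 * r_halves / (1 + r_halves)     # Spearman-Brown step-up (assumed refinement)
print(round(split_half, 3))
```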


Measurement
Coefficient Alpha (α)
The most commonly applied estimate of a multiple-item scale's reliability. It represents the average of all possible split-half reliabilities for a construct and demonstrates whether or not the different items converge. Generally we want coefficient alpha to be above 0.7.
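A minimal Python sketch (hypothetical ratings) of how coefficient alpha can be computed directly from item data, using the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of the total score).

```python
# Hypothetical data: rows = respondents, columns = three items of one scale (1-7).
import numpy as np

items = np.array([
    [6, 7, 6],
    [2, 3, 2],
    [5, 5, 6],
    [4, 4, 3],
    [7, 6, 7],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))   # values above 0.7 are generally considered acceptable
```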

Measurement
Test-Retest Reliability Administering the same scale or measure to the same respondents at two separate points in time to test for stability. If repeated measurements of an individual's attitude are taken, a reliable measure will produce the same results each time the measurement is taken.


Measurement
Sensitivity

A measurement instrument's ability to accurately measure variability in stimuli or responses. More response categories increase sensitivity. More items increase sensitivity. You need to think about how sensitive your measurement needs to be to answer your research questions.

Measurement
Measurement Error
Measurement data consists of accurate information and errors. There are many potential sources of measurement error, including the following:
the respondent
situational factors
the interviewer
the instrument

Measurement
Respondent Error The respondent may be reluctant to express strong positive or negative feelings, or may have little knowledge but be reluctant to admit ignorance. Respondents may also suffer from temporary factors like fatigue, boredom, and anxiety.


Measurement
Situational Factors
Any condition that places a strain on the interview can have serious effects on interviewer-respondent rapport. If another person is present, for example, that person can distort responses by joining in, by distracting, or merely by being present.

Measurement
Interviewer Error The interviewer can distort responses by rewording, paraphrasing, or reordering questions. Inflections of voice and conscious or unconscious prompting with smiles, nods, and so forth may encourage or discourage replies. (Hence the importance of training interviewers!)


Measurement
Instrument Error
A defective instrument can distort in two major ways:
First, it can be too confusing and ambiguous; the use of words beyond respondent comprehension is typical.
Second, the instrument may sample the universe of items of concern poorly.

Correlation Analysis

Correlation Analysis
A general term that refers to a number of bivariate statistical techniques used to measure the strength of the relationship between two variables. When you have two variables measured on interval or ratio scales, you can use Pearson's correlation coefficient or bivariate regression.

Correlation Analysis
Correlation analysis measures the degree to which changes in one variable are associated with changes in another. Correlation does not imply causation (the rooster's crowing does not cause the sun to rise). Correlation analysis requires two continuous variables measured on an interval or ratio scale. The coefficient does not distinguish between independent and dependent variables; it treats the variables symmetrically.

Correlation Analysis
Assumptions of Correlation Analysis
Normality: the bivariate distribution is normal.
Metric data: both variables (IV and DV) are measured on at least an interval scale.
Linearity: the association between the variables is linear.

Correlation Analysis

Covariance
The extent to which two variables are associated systematically with each other.

The Pearson correlation coefficient:

$$ r_{xy} = r_{yx} = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^{2} \, \sum_{i=1}^{n}(Y_i - \bar{Y})^{2}}} $$
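A minimal Python sketch (hypothetical numbers) computing r both directly from the formula above and with SciPy, which also returns a p-value.

```python
# Hypothetical data, e.g. advertising spend (x) and sales (y).
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
y = np.array([30.0, 45.0, 50.0, 70.0, 85.0])

# Pearson r from the definition above.
r_manual = ((x - x.mean()) * (y - y.mean())).sum() / np.sqrt(
    ((x - x.mean()) ** 2).sum() * ((y - y.mean()) ** 2).sum()
)

# Same coefficient via SciPy, with a significance test.
r_scipy, p_value = stats.pearsonr(x, y)
print(round(r_manual, 3), round(r_scipy, 3), round(p_value, 4))
```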


Correlation Analysis
The Pearson product-moment correlation coefficient r varies over a range of +1 through 0 to -1, i.e. -1 ≤ r ≤ +1. The designation r symbolises the coefficient's estimate of linear association based on sample data; the Greek letter ρ (rho) represents the population correlation.

Correlation Analysis
The correlation coefficient ranges from +1 to -1:
Perfect positive linear relationship = +1
Perfect negative (inverse) linear relationship = -1
No correlation = 0


Correlation Analysis
The magnitude is the degree to which variables move in unison or opposition.

Correlation Analysis
Check for Linearity Scatterplots are essential for understanding the relationship between variables (linear?). They provide a means for visual inspection of data that a list of values for two variables cannot. Both the direction and shape of a relationship are conveyed in a plot. It is also possible to detect outliers.
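A minimal Python/matplotlib sketch (simulated data) of the kind of scatterplot used to inspect direction, shape and outliers before interpreting a correlation.

```python
# Simulated, roughly linear data for a scatterplot check.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 100)
y = 3 + 2 * x + rng.normal(0, 2, 100)

plt.scatter(x, y, alpha=0.6)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Scatterplot: check direction, shape and outliers")
plt.show()
```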

The sign (plus or minus) signifies the direction of the relationship: a plus sign indicates a positive relationship and a minus sign a negative (inverse) relationship.

[Figure: scatterplots illustrating no association, perfect association, and positive and negative relationships]

Correlation Analysis
Non-linear (Curvilinear) Association

[Figure: example of a curvilinear association between two variables]


Correlation Analysis
A correlation matrix is a table used to display coefficients for more than two variables. It is conventional for a symmetrical matrix to report findings in the triangle below the diagonal. Correlation matrices have utility beyond bivariate correlation studies.
Rule of thumb for the magnitude of r: below 0.3 = weak; 0.3 to 0.6 = moderate; above 0.6 = strong.
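A minimal Python/pandas sketch (hypothetical survey variables) showing how a correlation matrix for several variables can be produced and rounded for reporting.

```python
# Hypothetical importance ratings for three vehicle attributes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "price_importance": rng.integers(1, 8, 50),
    "safety_importance": rng.integers(1, 8, 50),
    "styling_importance": rng.integers(1, 8, 50),
})

# Symmetric matrix of Pearson coefficients; conventionally report the lower triangle.
print(df.corr(method="pearson").round(2))
```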


Correlation Analysis
Interpretation It is important to stress that a correlation coefficient of any magnitude or sign, regardless of statistical significance, does not imply causation. Several alternative explanations may be provided for correlation results.

Correlation Analysis
Interpretation
1. X causes Y (e.g. advertising leads to sales)
2. Y causes X (e.g. increased sales lead to increased advertising)
3. X and Y are activated by Z (e.g. sales and advertising are both a result of management orientation)
4. X and Y influence each other reciprocally (e.g. sales and advertising each influence the other)

Correlation Analysis
Coefficient of Determination (R²)
A measure obtained by squaring the correlation coefficient; the proportion of the total variance of one variable accounted for by the value of another variable. It measures the part of the total variance of Y that is accounted for by knowing the value of X:

$$ R^{2} = \frac{\text{Explained variance}}{\text{Total variance}} $$

For example, a correlation of r = 0.60 gives R² = 0.36, so 36 per cent of the variance in Y is accounted for by knowing X.

Correlation Analysis Case Study
A large car manufacturer has conducted a survey to understand what people believe is important when purchasing a new vehicle. The manufacturer would like to understand whether these factors are associated with each other, and will use this information to inform advertising and to train its sales staff.
