17.1 Introduction
In this chapter we employ Regression Analysis
to examine the relationship among quantitative
variables.
The technique is used to predict the value of one
variable (the dependent variable, y) based on
the values of other variables (the independent
variables x1, x2, ..., xk).
17.2 The Model
The first order linear model:

$y = \beta_0 + \beta_1 x + \varepsilon$

y = dependent variable
x = independent variable
$\beta_0$ = y-intercept
$\beta_1$ = slope of the line (rise/run)
$\varepsilon$ = error variable

$\beta_0$ and $\beta_1$ are unknown and, therefore, are estimated from the data.
17.3 Estimating the Coefficients
The estimates are determined by
drawing a sample from the population of interest,
calculating sample statistics,
and producing a straight line that cuts through the data.
The question is: which straight line fits best?

[Figure: scatter of data points with several candidate straight lines]
The best line is the one that minimizes
the sum of squared vertical differences
between the points and the line.
Let us compare two lines through the data points (1, 2), (2, 4), (3, 1.5), and (4, 3.2). The second line is horizontal (y = 2.5).

Sum of squared differences for the first line = (2 − 1)² + (4 − 2)² + (1.5 − 3)² + (3.2 − 4)² = 7.89
Sum of squared differences for the horizontal line = (2 − 2.5)² + (4 − 2.5)² + (1.5 − 2.5)² + (3.2 − 2.5)² = 3.99

The smaller the sum of squared differences, the better the fit of the line to the data.
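The comparison is easy to verify in a few lines of Python; this is a minimal sketch, assuming the first line is y = x (which reproduces the squared terms above) and the second is the horizontal line y = 2.5:

```python
# Data points from the comparison above.
points = [(1, 2), (2, 4), (3, 1.5), (4, 3.2)]

def sum_sq_diff(points, predict):
    """Sum of squared vertical differences between points and a line."""
    return sum((y - predict(x)) ** 2 for x, y in points)

print(sum_sq_diff(points, lambda x: x))    # first line, y = x -> 7.89
print(sum_sq_diff(points, lambda x: 2.5))  # horizontal line   -> 3.99
```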
To calculate the estimates of the coefficients that minimize the sum of squared differences between the data points and the line, use the formulas:

$b_1 = \frac{\mathrm{cov}(X, Y)}{s_x^2}, \qquad b_0 = \bar{y} - b_1 \bar{x}$

The regression equation that estimates the equation of the first order linear model is:

$\hat{y} = b_0 + b_1 x$
Example 17.1: The relationship between odometer reading and a used car's selling price.

A car dealer wants to find the relationship between the odometer reading and the selling price of used cars. A random sample of 100 cars is selected, and the data recorded. Find the regression line.

Car  Odometer  Price
1    37388     5318
2    44758     5061
3    45833     5008
4    30862     5795
5    31705     5784
6    34010     5359
...  ...       ...

Independent variable: x = odometer reading. Dependent variable: y = selling price.
Solution

Solving by hand
To calculate b0 and b1 we need to calculate several statistics first:

$\bar{x} = 36{,}009.45; \qquad s_x^2 = \frac{\sum (x_i - \bar{x})^2}{n - 1} = 43{,}528{,}688$

$\bar{y} = 5{,}411.41; \qquad \mathrm{cov}(X, Y) = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{n - 1} = -1{,}356{,}256$

where n = 100.

$b_1 = \frac{\mathrm{cov}(X, Y)}{s_x^2} = \frac{-1{,}356{,}256}{43{,}528{,}688} = -.0312$

$b_0 = \bar{y} - b_1 \bar{x} = 5{,}411.41 - (-.0312)(36{,}009.45) = 6{,}533$

$\hat{y} = b_0 + b_1 x = 6{,}533 - .0312x$
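The same hand calculation is easy to script; a minimal Python/NumPy sketch, where the arrays hold only the six listed cars standing in for the full 100-car sample:

```python
import numpy as np

odometer = np.array([37388, 44758, 45833, 30862, 31705, 34010], dtype=float)
price = np.array([5318, 5061, 5008, 5795, 5784, 5359], dtype=float)

# Sample statistics with the n - 1 divisor used in the formulas above.
s2_x = odometer.var(ddof=1)
cov_xy = np.cov(odometer, price)[0, 1]   # n - 1 divisor by default

# Least squares estimates of the coefficients.
b1 = cov_xy / s2_x
b0 = price.mean() - b1 * odometer.mean()
print(f"y-hat = {b0:.0f} + ({b1:.4f})x")
```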
Using the computer (see file Xm17-01.xls)
Tools > Data analysis > Regression > [Shade the y range and the x range] > OK
SUMMARY OUTPUT

Regression Statistics
Multiple R         0.806308
R Square           0.650132
Adjusted R Square  0.646562
Standard Error     151.5688
Observations       100

ANOVA
            df   SS       MS        F         Significance F
Regression  1    4183528  4183528   182.1056  4.4435E-24
Residual    98   2251362  22973.09
Total       99   6434890

[Figure: scatter plot of Price against Odometer (19,000 to 49,000) with the fitted line $\hat{y} = 6{,}533 - .0312x$]

[Figure: the same fitted line drawn from x = 0; no data exist below an odometer reading of about 19,000]
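Outside Excel, the same summary can be produced in Python; a sketch assuming the statsmodels package, with the small illustrative arrays from the previous snippet:

```python
import numpy as np
import statsmodels.api as sm

odometer = np.array([37388, 44758, 45833, 30862, 31705, 34010], dtype=float)
price = np.array([5318, 5061, 5008, 5795, 5784, 5359], dtype=float)

X = sm.add_constant(odometer)      # adds the intercept column
results = sm.OLS(price, X).fit()   # ordinary least squares
print(results.summary())           # R Square, standard error, ANOVA F, ...
```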
17.4 Error Variable: Required Conditions
The error variable ε must meet the required conditions:
the error variable is normally distributed with a mean of zero,
the error variance is constant for all values of x,
and the errors are independent of each other.

Under these conditions, y is normally distributed at each value of x, with mean $\mu_1 = \beta_0 + \beta_1 x_1$ at $x = x_1$.

[Figure: normal distributions of y centered on the regression line at x1, x2, and x3]
17.5 Assessing the Model
The least squares method will produce a
regression line whether or not there is a linear
relationship between x and y.
Consequently, it is important to assess how well
the linear model fits the data.
Several methods are used to assess the model:
Testing and/or estimating the coefficients.
Using descriptive measurements.
Sum of squares for errors (SSE)
This is the sum of squared differences between the points and the regression line. It can serve as a measure of how well the line fits the data:

$SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$

A shortcut formula:

$SSE = (n - 1)\left(s_Y^2 - \frac{[\mathrm{cov}(X, Y)]^2}{s_x^2}\right)$

where $s_Y^2 = \frac{\sum (y_i - \bar{y})^2}{n - 1} = \frac{6{,}434{,}890}{99} = 64{,}999$ (calculated before).

$SSE = 99\left(64{,}999 - \frac{(-1{,}356{,}256)^2}{43{,}528{,}688}\right) = 2{,}251{,}363$

The standard error of estimate is

$s_\varepsilon = \sqrt{\frac{SSE}{n - 2}} = \sqrt{\frac{2{,}251{,}363}{98}} = 151.6$

It is hard to assess the model based on $s_\varepsilon$ alone, even when it is compared with the mean value of y: $s_\varepsilon = 151.6$, $\bar{y} = 5{,}411.4$.
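A short sketch of the shortcut formula and the standard error of estimate, using the summary statistics of Example 17.1 computed above:

```python
import math

n, s2_y, s2_x, cov_xy = 100, 64_999, 43_528_688, -1_356_256

# Shortcut formula: SSE = (n - 1)(s_y^2 - cov(X, Y)^2 / s_x^2)
sse = (n - 1) * (s2_y - cov_xy ** 2 / s2_x)

# Standard error of estimate, with n - 2 degrees of freedom.
s_e = math.sqrt(sse / (n - 2))
print(round(sse), round(s_e, 1))   # SSE about 2,251,000; s about 151.6
```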
Testing the slope
When no linear relationship exists between two
variables, the regression line should be horizontal.
[Figure: a scatter with no linear pattern; the fitted regression line is horizontal]

To test the slope we use the statistic

$t = \frac{b_1 - \beta_1}{s_{b_1}} \quad\text{where}\quad s_{b_1} = \frac{s_\varepsilon}{\sqrt{(n - 1)s_x^2}}$

is the standard error of $b_1$. Under H0: $\beta_1 = 0$ the statistic is Student t distributed with d.f. = n − 2.
Coefficient of determination

To measure the strength of the linear relationship we use the coefficient of determination:

$R^2 = \frac{[\mathrm{cov}(X, Y)]^2}{s_x^2 s_y^2} \quad\text{or}\quad R^2 = 1 - \frac{SSE}{\sum (y_i - \bar{y})^2}$
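Both the slope test and the coefficient of determination can be computed from the Example 17.1 summary numbers; a sketch assuming SciPy for the p-value:

```python
import math
from scipy import stats

n, b1, s_e = 100, -0.0312, 151.6
s2_x, sse, sst = 43_528_688, 2_251_363, 6_434_890

# Standard error of b1 and the t statistic for H0: beta_1 = 0.
s_b1 = s_e / math.sqrt((n - 1) * s2_x)
t = b1 / s_b1                               # about -13.5
p_value = 2 * stats.t.sf(abs(t), df=n - 2)  # two-tailed p-value

r_squared = 1 - sse / sst                   # about 0.650
```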
To understand the significance of this coefficient, note how the total variation in y separates into an explained part and an error part.
Two data points (x1, y1) and (x2, y2) of a certain sample are shown.

[Figure: the two points, the regression line, and the mean $\bar{y}$]

Total variation in y = variation explained by the regression line + unexplained variation (error):

$(y_1 - \bar{y})^2 + (y_2 - \bar{y})^2 = (\hat{y}_1 - \bar{y})^2 + (\hat{y}_2 - \bar{y})^2 + (y_1 - \hat{y}_1)^2 + (y_2 - \hat{y}_2)^2$
Variation in y = SSR + SSE

$R^2 = 1 - \frac{SSE}{\sum (y_i - \bar{y})^2} = \frac{\sum (y_i - \bar{y})^2 - SSE}{\sum (y_i - \bar{y})^2} = \frac{SSR}{\sum (y_i - \bar{y})^2}$
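For Example 17.1, the ANOVA values above give the same figure as the Excel output:

$R^2 = \frac{SSR}{\sum (y_i - \bar{y})^2} = \frac{4{,}183{,}528}{6{,}434{,}890} = .650$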
17.7 Using the Regression Equation
Before using the regression model, we need to
assess how well it fits the data.
If we are satisfied with how well the model fits
the data, we can use it to make predictions for y.
Illustration
Predict the selling price of a three-year-old Taurus
with 40,000 miles on the odometer (Example 17.1).
$\hat{y} = 6533 - .0312x = 6533 - .0312(40{,}000) = 5{,}285$
Prediction interval and confidence interval

Two intervals can be used to discover how closely the predicted value will match the true value of y:
a prediction interval, for a particular value of y, and
a confidence interval, for the expected value of y.

The prediction interval:
$\hat{y} \pm t_{\alpha/2}\, s_\varepsilon \sqrt{1 + \frac{1}{n} + \frac{(x_g - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$

The confidence interval:
$\hat{y} \pm t_{\alpha/2}\, s_\varepsilon \sqrt{\frac{1}{n} + \frac{(x_g - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$

The prediction interval for the Taurus with 40,000 miles:
$[6533 - .0312(40{,}000)] \pm 1.984(151.6)\sqrt{1 + \frac{1}{100} + \frac{(40{,}000 - 36{,}009)^2}{4{,}309{,}340{,}160}} = 5{,}285 \pm 303$
The car dealer wants to bid on a lot of 250 Ford
Tauruses, where each car has been driven for about
40,000 miles.
Solution
The dealer needs to estimate the mean price per car.
The confidence interval (95%):

$\hat{y} \pm t_{\alpha/2}\, s_\varepsilon \sqrt{\frac{1}{n} + \frac{(x_g - \bar{x})^2}{\sum (x_i - \bar{x})^2}} = [6533 - .0312(40{,}000)] \pm 1.984(151.6)\sqrt{\frac{1}{100} + \frac{(40{,}000 - 36{,}009)^2}{4{,}309{,}340{,}160}} = 5{,}285 \pm 35$
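A sketch computing both 95% intervals at $x_g$ = 40,000, assuming SciPy and the Example 17.1 statistics:

```python
import math
from scipy import stats

n, x_bar, s_e = 100, 36_009.45, 151.6
ssx = 4_309_340_160                # sum of (x_i - x_bar)^2
x_g = 40_000
y_hat = 6533 - 0.0312 * x_g        # point estimate: 5,285

t = stats.t.ppf(0.975, df=n - 2)   # about 1.984
d2 = (x_g - x_bar) ** 2 / ssx

pred_half = t * s_e * math.sqrt(1 + 1 / n + d2)  # about 303
conf_half = t * s_e * math.sqrt(1 / n + d2)      # about 35
print(f"{y_hat:.0f} +/- {pred_half:.0f} (prediction), "
      f"{y_hat:.0f} +/- {conf_half:.0f} (confidence)")
```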
The effect of the given value of x on the interval

As $x_g$ moves away from $\bar{x}$ the interval becomes longer. That is, the shortest interval is found at $x_g = \bar{x}$.

With $\hat{y} = b_0 + b_1 x_g$, the confidence interval when $x_g = \bar{x}$ is
$\hat{y} \pm t_{\alpha/2}\, s_\varepsilon \sqrt{\tfrac{1}{n}}$

The confidence interval when $x_g = \bar{x} \pm 1$ is
$\hat{y} \pm t_{\alpha/2}\, s_\varepsilon \sqrt{\tfrac{1}{n} + \tfrac{1^2}{\sum (x_i - \bar{x})^2}}$

The confidence interval when $x_g = \bar{x} \pm 2$ is
$\hat{y} \pm t_{\alpha/2}\, s_\varepsilon \sqrt{\tfrac{1}{n} + \tfrac{2^2}{\sum (x_i - \bar{x})^2}}$
17.8 Coefficient of correlation
The coefficient of correlation is used to measure the
strength of association between two variables.
The coefficient values range between -1 and 1.
If r = -1 (negative association) or r = +1 (positive
association) every point falls on the regression line.
If r = 0 there is no linear pattern.
The coefficient can be used to test for a linear relationship between two variables.
Testing the coefficient of correlation

When there is no linear relationship between two variables, ρ = 0.

The hypotheses are:
H0: ρ = 0
H1: ρ ≠ 0

The test statistic is:

$t = r\sqrt{\frac{n - 2}{1 - r^2}}$

The statistic is Student t distributed with d.f. = n − 2, provided the variables are bivariate normally distributed.
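A minimal helper for this test; SciPy is assumed, and correlation_t_test is a hypothetical name, not from the text:

```python
import numpy as np
from scipy import stats

def correlation_t_test(x, y):
    """t statistic and two-tailed p-value for H0: rho = 0."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return r, t, p
```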
Spearman rank correlation coefficient

The hypotheses are:
H0: ρs = 0
H1: ρs ≠ 0

The test statistic is

$r_s = \frac{\mathrm{cov}(a, b)}{s_a s_b}$

where a and b are the ranks of the data.

For a large sample (n > 30), $r_s$ is approximately normally distributed, and

$z = r_s \sqrt{n - 1}$
Example 17.8
Solution

The problem objective is to analyze the relationship between two variables: aptitude test scores (which range from 0 to 100) and performance ratings (which range from 1 to 5). Performance rating is ordinal, so it is ranked.

Employee  Aptitude test  Performance rating
1         59             3
2         47             2
3         58             4
4         66             3
5         77             2
...       ...            ...
The hypotheses are:
H0: ρs = 0
H1: ρs ≠ 0

The test statistic is $r_s$, and the rejection region is $|r_s| > r_{critical}$ (taken from the Spearman rank correlation table).
Employee  Aptitude test  Rank(a)  Performance rating  Rank(b)
1         59             9        3                   10.5
2         47             3        2                   3.5
3         58             8        4                   17
4         66             14       3                   10.5
5         77             20       2                   3.5
...       ...            ...      ...                 ...

Ties are broken by averaging the ranks.

Solving by hand
Rank each variable separately.
Calculate $s_a$ = 5.92, $s_b$ = 5.50, cov(a, b) = 12.34.
Thus $r_s = \mathrm{cov}(a, b)/(s_a s_b) = .379$.
The critical value for α = .05 and n = 20 is .450.

Conclusion: Do not reject the null hypothesis. At the 5% significance level there is insufficient evidence to infer that the two variables are related to one another.
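SciPy can reproduce the by-hand ranking; a sketch, with the arrays holding only the five listed employees rather than all 20:

```python
import numpy as np
from scipy import stats

aptitude = np.array([59, 47, 58, 66, 77])
performance = np.array([3, 2, 4, 3, 2])

# spearmanr ranks each variable (averaging tied ranks, as above)
# and correlates the ranks.
r_s, p_value = stats.spearmanr(aptitude, performance)
print(r_s, p_value)
```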
17.9 Regression Diagnostics - I
The three conditions required for the validity of
the regression analysis are:
The error variable is normally distributed.
The error variance is constant for all values of x.
The errors are independent of each other.
How can we diagnose violations of these
conditions?
Residual Analysis
RESIDUAL OUTPUT (a partial list)

Observation  Residuals      Standard Residuals
1            -50.45749927   -0.334595895
2            -77.82496482   -0.516076186
3            -97.33039568   -0.645421421
4            223.2070978    1.480140312
5            238.4730715    1.58137268

For each residual we calculate the standard deviation as follows:

$s_{r_i} = s_\varepsilon \sqrt{1 - h_i} \quad\text{where}\quad h_i = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum (x_j - \bar{x})^2}$

Standardized residual i = residual i / standard deviation of residual i.

[Figure: histogram of the standardized residuals, spanning roughly −2.5 to +2.5]

To check normality we can also apply the Lilliefors test or the χ² test of normality.
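A sketch of the standardized-residual calculation; standardized_residuals is a hypothetical helper, NumPy assumed:

```python
import numpy as np

def standardized_residuals(x, y, b0, b1, s_e):
    """Residuals divided by s * sqrt(1 - h_i), as defined above."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    residuals = y - (b0 + b1 * x)
    h = 1 / len(x) + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
    return residuals / (s_e * np.sqrt(1 - h))
```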
Heteroscedasticity

When the requirement of a constant variance is violated, we have heteroscedasticity.

[Figure: residuals plotted against the predicted values $\hat{y}$]
When the requirement of a constant variance is not violated, we have homoscedasticity.

[Figure: residuals plotted against $\hat{y}$ with an even spread; the spread of the data points does not change much]

As far as the even spread goes, this is a much better situation.
Nonindependence of error variables
Patterns in the appearance of the residuals over time indicate that autocorrelation exists.

[Figure: two residual-versus-time plots. Left: note the runs of positive residuals, replaced by runs of negative residuals. Right: note the oscillating behavior of the residuals around zero.]
Outliers
An outlier is an observation that is unusually small or
large.
Several possibilities need to be investigated when an
outlier is observed:
There was an error in recording the value.
The point does not belong in the sample.
The observation is valid.
Identify outliers from the scatter diagram.
It is customary to suspect an observation is an outlier if
its |standard residual| > 2.
[Figure: two scatter plots, "An outlier" and "An influential observation"]

But some outliers may be very influential.
Procedure for regression diagnostics