We have been introduced to the notion that a categorical variable could depend on different
levels of another variable when we discussed contingency tables.
We’ll extend this idea to the case of predicting a continuous response variable from different
levels of another variable. We say Y is the response variable, which depends on an
explanatory (predictor) variable, X.
There are many examples in the life sciences of such a situation: height to predict weight,
dose of an algaecide to predict algae growth, skinfold measurements to predict total body fat,
etc. Oftentimes, several predictors are used to predict one variable (e.g., height,
weight, age, smoking status, and gender can all be used to predict blood pressure).
We focus on the special case of using one predictor variable for a response, where the
relationship is linear.
Example 12.3 In a study of a free-living population of the snake Vipera bertis, researchers
caught and measured nine adult females. The goal is to predict weight (Y) from length (X).
The data and a scatterplot of the data are below.
Notice these data come in pairs. For example, (x1, y1) = (60, 136).
First, we look at a scatterplot of the data. We’d like to fit a (straight) line to the data. Why
linear?
Does fitting a (straight line) seem reasonable?
Simple Linear Model (Regression Equation)
The simple linear model relating Y and X is
Y = b0 + b1X
b0 is the intercept, the point where the line crosses the Y axis.
b1 is the slope, the change in Y over the change in X (rise over run).
Definition: A predicted value (or fitted value), ŷi, is the value of Y predicted for a given xi by
the regression equation, b0 + b1xi.
Notation: ŷi = b0 + b1xi
A residual is the departure of an observed value from its fitted value.
Notation: residi = yi − ŷi
Which line do we fit?
We will fit a line that goes through the data in the best way possible, based on the least
squares criterion.
Definition: The residual sum of squares (a.k.a. SS(resid) or SSE) is

SS(resid) = SSE = ∑_{i=1}^{n} (yi − ŷi)²
The least squares criterion states that the optimal fit of a model to data occurs when the
SS(resid) is as small as possible. Note that under our model
SS(resid) = SSE = ∑_{i=1}^{n} (yi − ŷi)² = ∑_{i=1}^{n} (yi − (b0 + b1xi))²
Refer to the applet at http://standards.nctm.org/document/eexamples/chap7/7.4/
Regression and Correlation Page 2
Using calculus to minimize the SSE, we find the coefficients for the regression equation.
b1 = ∑_{i=1}^{n} (xi − x̄)(yi − ȳ) / ∑_{i=1}^{n} (xi − x̄)²

b0 = ȳ − b1x̄
TI‐83/84
Enter the data into two lists.
STAT ‐> TESTS ‐> LinRegTTest We’ll go over the options in class.
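For readers who prefer to check these formulas by computer, here is a minimal Python sketch. It uses the fertility-enhancer data from the example later in these notes (the snake measurements are not reproduced here); any small data set would do:

```python
# Least-squares coefficients computed directly from the formulas:
# b1 = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2),  b0 = ybar - b1*xbar.
# Data: fertility-enhancer counts (X) and Oldenburg population in 1000s (Y).
x = [140, 148, 175, 195, 245, 250, 250]
y = [55.5, 55.5, 64.9, 67.5, 69, 72, 75.5]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
sxx = sum((xi - xbar) ** 2 for xi in x)

b1 = sxy / sxx          # slope
b0 = ybar - b1 * xbar   # intercept

print(round(b1, 4), round(b0, 2))  # → 0.1507 35.49
```

The calculator's LinRegTTest performs exactly this computation (plus the inference output discussed later).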
Example 12.3
Find the linear regression of weight (Y) on Length (X).
[Scatterplot of Weight (g) vs Length (cm) with fitted regression line]
Interpret the slope (b1) in the context of the setting.
Can we interpret the meaning of the Y intercept (b0) in this setting?
Definition: An extrapolation occurs when one uses the model to predict a y value
corresponding to an x value which is not within the range of the observed x’s.
A Measure of Variability – sY|X
Once we fit a line to our data and use it to make predictions, it is natural to ask the question of
how far off our predictions are in general.
Definition: The residual standard deviation is

sY|X = √( ∑_{i=1}^{n} (yi − ŷi)² / (n − 2) ) = √( SS(resid) / (n − 2) )

Caution: This is not to be confused with sY! Recall,

sY = √( ∑_{i=1}^{n} (yi − ȳ)² / (n − 1) )
[Scatterplot of Weight (g) vs Length (cm), 55.0–70.0 cm, with fitted regression line]
Determine and interpret sY|X for the regression of female Vipera bertis weight on length.
The Linear Statistical Model
Definition: A conditional mean is the expected value of a variable conditional on another
variable.
Notation: μY|X
Definition: A conditional standard deviation is the standard deviation of a variable
conditional on another variable.
Notation: σY|X
The linear regression model of Y on X assumes
Y = μY|X + ε
where the conditional mean is linear with
μY|X = β0 + β1X
and με = 0 and σε = σY|X
We use ______ to estimate β0, ______ to estimate β1, and __________ to estimate σY|X.
Then, we can estimate (or predict) μY|X=x at any X = x with
μ̂Y|X=x = b0 + b1x
Assuming the linear model is appropriate here, find estimates of the mean and standard
deviation of female Vipera bertis weight at a length of 65 cm.
Should we estimate female Vipera bertis weight at a length of 75 cm? Why or why not?
Inference on β1
Normal Error Model
In our discussion on the linear statistical model, we stated that the linear regression model of
Y on X assumes a linear conditional mean, with the errors having mean 0 and standard
deviation σY|X. To make inference on β1, we need to update the conditions on this model to
include a normal distribution on the errors.
Y = μY|X + ε
μY|X = β0 + β1X,  ε ~ N(0, σ²Y|X)
Assumptions to check
εi must be independent
εi must be normally distributed
εi must have equal variance
εi must have mean zero
How to Check Assumptions
εi independent
εi normally distributed
εi must have equal variance
εi must have mean zero
Looking at Residuals vs. Predicted (or X) Plot
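As a numerical sketch in Python (again using the fertility-enhancer data from the later example): the mean-zero condition holds automatically for least-squares residuals, and the (fitted, residual) pairs are what a residuals-vs-predicted plot would display.

```python
# Residuals from a least-squares fit always average to zero;
# plotting them against the fitted values is how the equal-variance
# condition is checked visually.
x = [140, 148, 175, 195, 245, 250, 250]
y = [55.5, 55.5, 64.9, 67.5, 69, 72, 75.5]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sum((a - xbar) ** 2 for a in x)
b0 = ybar - b1 * xbar

fitted = [b0 + b1 * a for a in x]
resid = [b - f for b, f in zip(y, fitted)]

print(sum(resid) / n)  # ≈ 0 (up to floating-point rounding)
```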
Check the assumptions (that can be checked) for the female Vipera bertis regression using
the plots below.
Confidence Interval for β1
Under the normal error model, b1 is unbiased for β1 with
SE(b1) = sY|X / √( ∑_{i=1}^{n} (xi − x̄)² )
This confidence interval uses a t critical point: tα/2,df=n‐2
The TI‐84 and 89 calculators have a menu option to compute this interval called LinRegTInt
found in STAT ‐> TESTS. For the TI‐83, use the LinRegTTest option and a t critical point. From
LinRegTTest, we can get the standard error of b1.
Since ts = b1 / SE(b1), we have SE(b1) = b1 / ts.
Note: The test statistic, ts, returned by your calculator is the test statistic for a hypothesis test – not a critical
point! You’ll have to compute the critical point on your own or be given one. Then the CI is b1 ± tα/2,df=n‐2 · SE(b1)
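A sketch of this interval in Python for the fertility-enhancer data (n = 7, so df = 5 and t.025,df=5 = 2.571 from a t table):

```python
# 95% CI for the slope: b1 ± t * SE(b1), with SE(b1) = s_{Y|X} / sqrt(Sxx).
import math

x = [140, 148, 175, 195, 245, 250, 250]
y = [55.5, 55.5, 64.9, 67.5, 69, 72, 75.5]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((a - xbar) ** 2 for a in x)
b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

sse = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
s_yx = math.sqrt(sse / (n - 2))   # residual standard deviation
se_b1 = s_yx / math.sqrt(sxx)     # standard error of b1

t_crit = 2.571                    # t_{.025, df=5} from a t table
ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)
print(round(ci[0], 3), round(ci[1], 3))  # → 0.088 0.213
```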
Compute and interpret a 95% confidence interval for β1 in the female Vipera bertis regression.
(Use t.025,df=7 = 2.365)
Hypothesis Test for β1
Similar to the development of the confidence interval for β1, we can use the t distribution to
conduct a hypothesis test for β1 with
ts = b1 / SE(b1)
Under HO, ts ~ tdf=n‐2
We’ll test HO: _________________________________________________________________
HA: _________________________________________________________________
Use the LinRegTTest option in your calculator to conduct a hypothesis test, at the α = 0.05
significance level, of whether mean female Vipera bertis weight tends to increase with
female Vipera bertis length.
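The same kind of test can be sketched in Python with scipy. Since the snake measurements are not reproduced in these notes, the sketch below uses the fertility-enhancer data from the later example; note that linregress reports a two-sided p-value, so a one-sided test would halve it:

```python
# Slope test via scipy.stats.linregress: H0: beta1 = 0 vs HA: beta1 != 0.
from scipy.stats import linregress

x = [140, 148, 175, 195, 245, 250, 250]
y = [55.5, 55.5, 64.9, 67.5, 69, 72, 75.5]

res = linregress(x, y)
ts = res.slope / res.stderr   # the t statistic, df = n - 2
print(round(res.slope, 4), round(ts, 2), round(res.pvalue, 3))  # → 0.1507 6.22 0.002
```

The p-value agrees with the Minitab output shown later for these data (P = 0.002).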
Coefficient of Determination (r2)
Recall SS(resid) is a measure of the unexplained variability in Y (the variation in Y not
explained by X through the regression model) and is given by

SS(resid) = SSE = ∑_{i=1}^{n} (yi − ŷi)²

Definition: SS(total) measures the total variability in Y and is given by

SS(total) = SST = ∑_{i=1}^{n} (yi − ȳ)²

Definition: SS(reg) measures the variability in Y that is explained by X through the regression
model and is given by

SS(reg) = SSR = ∑_{i=1}^{n} (ŷi − ȳ)²

[Scatterplot of Weight (g) vs Length (cm), 55.0–70.0 cm, with fitted regression line]
Then, SS(total) = SS(reg) + SS(resid) or the total variability in Y is explained by the regression
model plus the unexplained residual variation.
Definition: The coefficient of determination is the ratio between the SS(reg) and SS(total) and
is given by
r² = SS(reg) / SS(total) = [∑_{i=1}^{n} (xi − x̄)(yi − ȳ)]² / ( ∑_{i=1}^{n} (xi − x̄)² · ∑_{i=1}^{n} (yi − ȳ)² )
and is interpreted as the proportion (or percentage) of variability in Y that is explained by the
linear regression of Y on X.
Find and interpret r2 for the regression of female Vipera bertis weight on length.
Reading Regression Output from Standard Statistical Software
Example Fertility Enhancer Data
The data below concern the prevalence of a so-called fertility enhancer and the population of
Oldenburg, Germany, in thousands of people, between 1930 and 1936.
The original data set can be found in: Ornithologische Monatsberichte, 44, No. 2, Jahrgang 1936, Berlin, and 48, No. 1,
Jahrgang 1940, Berlin; and Statistisches Jahrbuch Deutscher Gemeinden, 27–33, Jahrgang 1932–1938, Gustav Fischer, Jena.
The actual data below is estimated from the graph in Box, Hunter, and Hunter’s Statistics for Experimenters.
X Pop (1000’s)
140 55.5
148 55.5
175 64.9
195 67.5
245 69
250 72
250 75.5
Minitab Output DoStat Output
Regression Analysis: Population versus X
Analysis of Variance
Source DF SS MS F P
Regression 1 317.58 317.58 38.73 0.002
Residual Error 5 41.00 8.20
Total 6 358.58
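As a check, the analysis-of-variance table above can be reproduced by hand; a Python sketch:

```python
# Reproduce the Minitab ANOVA table: SST = SSR + SSE,
# MS = SS / df, F = MS(reg) / MS(resid).
x = [140, 148, 175, 195, 245, 250, 250]
y = [55.5, 55.5, 64.9, 67.5, 69, 72, 75.5]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sum((a - xbar) ** 2 for a in x)
b0 = ybar - b1 * xbar
fitted = [b0 + b1 * a for a in x]

sst = sum((b - ybar) ** 2 for b in y)            # total, df = n - 1
ssr = sum((f - ybar) ** 2 for f in fitted)       # regression, df = 1
sse = sum((b - f) ** 2 for b, f in zip(y, fitted))  # residual, df = n - 2

f_stat = (ssr / 1) / (sse / (n - 2))
print(round(sst, 2), round(ssr, 2), round(sse, 2), round(f_stat, 2))
# → 358.58 317.58 41.0 38.73
```

These match the SS, MS, and F columns of the output above, and r² = SSR/SST ≈ 0.886.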
Correlation
The linear regression model assumes the X’s are measured with negligible error.
Think about the snake data here…the researcher measured length to predict weight!
Why not the other way around? I mean, if we go out to collect snake measurements,
I am not volunteering to get the length – I’m volunteering to hook the snake and
throw it in a bag to weigh it. But, if we tried to use the weight to predict length, the
variability in weight due to eating, pregnancy, etc. could lead to bad predictions of
length. For instance, a snake in our data set that just ate a mouse (little snake + food = big
snake) would have a shorter length than what would be predicted for a snake that actually
weighed that much.
In other words, μY|X is the mean of Y given X. We use this type of model to make predictions
of Y, based on our model for a given value of X.
For the situation where we’d like to make statements about the joint relationship of X and Y,
we’ll need for X and Y to both be random. When we’re interested in examining the joint
relationship of two random variables, we are interested in their joint distribution (the joint
distribution of two random variables is called a bivariate distribution).
Definition
The bivariate random sampling model views the pairs (Xi, Yi) as joint random variables, with
population means, μX, μY, population standard deviations, σX, σY, and a correlation parameter,
ρ.
In this model, ρ measures the level of dependence between two random variables, X and Y.
• ‐1 ≤ ρ ≤ 1
• ρ → ±1 ⇔ X & Y become more correlated
• ρ → 0 ⇔ X & Y become uncorrelated
We’ll measure the sample correlation coefficient, called r.
r = ∑_{i=1}^{n} (xi − x̄)(yi − ȳ) / √( ∑_{i=1}^{n} (xi − x̄)² · ∑_{i=1}^{n} (yi − ȳ)² )
Notice what is hiding inside of r.
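A Python sketch of the formula, again on the fertility-enhancer data, which also verifies the identity b1 = r · sY/sX stated in the properties below:

```python
# Sample correlation coefficient r, and a check that b1 = r * sY / sX.
import math

x = [140, 148, 175, 195, 245, 250, 250]
y = [55.5, 55.5, 64.9, 67.5, 69, 72, 75.5]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
sxx = sum((a - xbar) ** 2 for a in x)
syy = sum((b - ybar) ** 2 for b in y)

r = sxy / math.sqrt(sxx * syy)
b1 = sxy / sxx
sx = math.sqrt(sxx / (n - 1))   # sample sd of X
sy = math.sqrt(syy / (n - 1))   # sample sd of Y

print(round(r, 4))                     # → 0.9411
print(abs(b1 - r * sy / sx) < 1e-12)   # → True: the slope identity holds
```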
Properties of r
• r = ±√(r²), with the sign of the slope
• as n → ∞, E[r] → ρ (r estimates ρ)
• related to the LS regression coefficients: b1 = r · (sY / sX)
• the test of HO: β1 = 0 is numerically equivalent to the test of HO: ρ = 0, since

ts = b1 / SE(b1) = r · √( (n − 2) / (1 − r²) )
Figure 12.14 Blood pressure and platelet calcium for 38 persons with normal blood pressure
Example 12.19
Is calcium in blood related to blood pressure?
Y = calcium concentration in blood platelets
X = blood pressure (average of systolic and diastolic)
What do you think r is for these data?
Plots Depicting the Sensitivity of r to Outliers
Confidence Interval for ρ
We can build a (1 ‐ α) confidence interval on ρ if we extend the bivariate model to include
bivariate normality.
We’ll assume X ~ N(μX,σX2), Y ~ N(μY,σY2), with Corr(X, Y) = ρ. The following figures depict
bivariate standard normal distributions with different correlations.
[Bivariate standard normal densities with ρ = 0.0 (left) and ρ = 0.8 (right)]
Unfortunately, there is no easy way to build good intervals directly on ρ. Instead we transform
between different scales for ρ.
Definition

The Fisher Z transform is defined as

Z(r) = (1/2) ln( (1 + r) / (1 − r) )

and, under the bivariate normal model, Z(r) is approximately N( Z(ρ), 1/(n − 3) ).

The inverse function is

r = (e^{2Z} − 1) / (e^{2Z} + 1)

Compute a confidence interval for Z(ρ) and then invert, using the inverse formula, to get a
(1 − α) confidence interval on ρ.

CI on Z(ρ):  Z(r) ± zα/2 · √( 1/(n − 3) )

CI on ρ:  apply the inverse formula to each endpoint of the interval for Z(ρ).
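A Python sketch of the whole procedure. The values r = 0.5 and n = 38 are illustrative only (n = 38 is the sample size from Figure 12.14, but the actual sample correlation for those data is not reproduced here):

```python
# 95% CI for rho via the Fisher Z transform:
# Z(r) = 0.5*ln((1+r)/(1-r)) = atanh(r), SD = 1/sqrt(n-3),
# then invert each endpoint with tanh (the inverse of atanh).
import math

r = 0.5        # illustrative value; not the actual sample correlation
n = 38
z_crit = 1.96  # z_{.025}

z = math.atanh(r)                    # Fisher Z transform of r
half = z_crit / math.sqrt(n - 3)     # half-width on the Z scale
lo, hi = math.tanh(z - half), math.tanh(z + half)
print(round(lo, 3), round(hi, 3))    # → 0.215 0.707
```

Note the interval is not symmetric about r; the transform stretches the scale near ±1.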
Compute a 95% interval for ρ, the true correlation of platelet calcium and blood pressure.
Some Final Notes on Regression and Correlation
• use (conditional) regression analysis when prediction of Y from X is desired
o random sampling from the conditional distribution of Y|X is required if b0, b1, and
sY|X are to be viewed as estimates of the parameters β0, β1, and σY|X
o Y must be random and X need not be random
• use correlation analysis when association between X and Y is under study
o bivariate random sampling model is required if r is to be viewed as an estimate of
the population parameter ρ
o X and Y both must be random
• Always plot the data! Why? Because
• r is very sensitive to extreme observations and outliers, so BE CAREFUL!
• r is also known as the Pearson Product‐Moment Correlation Coefficient
• a distribution free version of r exists, known as Spearman’s Rank Correlation Coefficient