
The Simple Regression Model

V c Hong V
University of Economics HCMC

June 2015


Some Terminology

In the simple linear regression model, where y = \beta_0 + \beta_1 x + u,


we typically refer to y as the
dependent variable, or
left-hand side variable, or
explained variable, or
regressand


Some Terminology (cont)

In the simple linear regression of y on x, we typically refer to x as


the
independent variable, or
right-hand side variable, or
explanatory variable, or
regressor, or
covariate, or
control variable.


A Simple Assumption
The average value of u, the error term, in the population is 0.
That is,
E(u) = 0
This is not a restrictive assumption, since we can always use \beta_0
to normalize E(u) to 0.
We need to make a crucial assumption about how u and x are
related.
We want it to be the case that knowing something about x does
not give us any information about u, so that they are completely
unrelated. That is,
E(u|x) = E(u) = 0, which implies
E(y|x) = \beta_0 + \beta_1 x.

E(y|x) as a linear function of x, where for any x the distribution of y is centered about E(y|x).


Ordinary Least Squares

The basic idea of regression is to estimate the population parameters
from a sample.
Let {(x_i, y_i): i = 1, . . . , n} denote a random sample of size n from
the population.
For each observation in this sample, it will be the case that
y_i = \beta_0 + \beta_1 x_i + u_i
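As a concrete illustration, the following Stata sketch simulates such a random sample (the parameter values \beta_0 = 1 and \beta_1 = 0.5, and the normal distributions used, are purely illustrative assumptions):

* A minimal sketch: draw a random sample of size n = 200 from an assumed
* population model with beta0 = 1, beta1 = 0.5, and E(u|x) = 0.
clear
set seed 12345
set obs 200
generate x = rnormal(5, 2)        // explanatory variable
generate u = rnormal(0, 1)        // error term, mean zero and independent of x
generate y = 1 + 0.5*x + u        // y_i = beta0 + beta1*x_i + u_i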


Population regression line, sample data points, and the associated error terms.


Deriving OLS Estimates


To derive the OLS estimates we need to realize that our main
assumption of E(u|x) = E(u) = 0 also implies that
Cov(x, u) = E(xu) = 0
Why? Remember from basic probability that
Cov(X, Y) = E(XY) - E(X)E(Y)
We can write our two restrictions just in terms of
x, y, \beta_0, and \beta_1, since u = y - \beta_0 - \beta_1 x:
E(y - \beta_0 - \beta_1 x) = 0, and
E[x(y - \beta_0 - \beta_1 x)] = 0
These are called moment restrictions.


Deriving OLS using M.O.M


The method of moments approach to estimation implies
imposing the population moment restrictions on the sample
moments.
What does this mean? Recall that for E (X ), the mean of a
population distribution, a sample estimator of E (X ) is simply
the arithmetic mean of the sample.
We want to choose values of the parameters that will ensure
that the sample versions of our moment restrictions are true:
\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0
\frac{1}{n} \sum_{i=1}^{n} x_i (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0

More Derivation of OLS


Given the definition of a sample mean, and properties of
summation, we can rewrite the first condition as follows:
\bar{y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{x}, or
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}
Plugging this into the second condition gives
\sum_{i=1}^{n} x_i (y_i - (\bar{y} - \hat{\beta}_1 \bar{x}) - \hat{\beta}_1 x_i) = 0
which rearranges to
\sum_{i=1}^{n} x_i (y_i - \bar{y}) = \hat{\beta}_1 \sum_{i=1}^{n} x_i (x_i - \bar{x})
or
\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) = \hat{\beta}_1 \sum_{i=1}^{n} (x_i - \bar{x})^2

So the OLS estimated slope is
\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}
provided that
\sum_{i=1}^{n} (x_i - \bar{x})^2 > 0
and the OLS estimated intercept is
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}
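A minimal Stata sketch of these formulas, using the auto dataset that ships with Stata (the choice of price and mpg is purely illustrative):

* Compute the OLS slope and intercept from the formulas above and
* compare them with the coefficients reported by -regress-.
sysuse auto, clear
quietly summarize mpg
scalar xbar = r(mean)
quietly summarize price
scalar ybar = r(mean)
generate double dxy = (mpg - xbar)*(price - ybar)
generate double dxx = (mpg - xbar)^2
quietly summarize dxy
scalar Sxy = r(sum)
quietly summarize dxx
scalar Sxx = r(sum)
display "slope     = " Sxy/Sxx
display "intercept = " ybar - (Sxy/Sxx)*xbar
regress price mpg        // the coefficient on mpg and _cons should match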


Summary of OLS slope estimate


The slope estimate is the sample covariance between x and y
divided by the sample variance of x.
If x and y are positively correlated, the slope will be positive.
If x and y are negatively correlated, the slope will be negative.
We only need x to vary in our sample.
Intuitively, OLS is fitting a line through the sample points such
that the sum of squared residuals is as small as possible, hence
the term least squares.
The residual, \hat{u}_i, is an estimate of the error term, u_i, and is the
difference between the fitted line (sample regression function)
and the sample point.


Sample regression line, sample data points, and the associated estimated error terms.


Alternative approach to derivation


Given the intuitive idea of fitting a line, we can set up a formal
minimization problem.
That is, we want to choose our parameters such that we minimize
the following:
\sum_{i=1}^{n} \hat{u}_i^2 = \sum_{i=1}^{n} (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)^2
If one uses calculus to solve the minimization problem for the
two parameters, you obtain the following first order conditions,
which are the same as we obtained before, multiplied by n:
\sum_{i=1}^{n} (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0
\sum_{i=1}^{n} x_i (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0

Algebraic Properties of OLS


The sum of the OLS residuals is zero
Thus, the sample average of the OLS residuals is zero as well
The sample covariance between the regressors and the OLS
residuals is zero
The OLS regression line always goes through the mean of the
sample.
\sum_{i=1}^{n} \hat{u}_i = 0, and thus \frac{1}{n} \sum_{i=1}^{n} \hat{u}_i = 0
\sum_{i=1}^{n} x_i \hat{u}_i = 0
\bar{y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{x}
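These properties are easy to check numerically; here is a minimal Stata sketch (auto data, variables chosen purely for illustration):

* The OLS residuals average to zero, are uncorrelated with the regressor,
* and the fitted line passes through the point of sample means.
sysuse auto, clear
quietly regress price mpg
predict double uhat, residuals
summarize uhat                         // mean is zero up to rounding error
correlate mpg uhat                     // zero sample correlation/covariance
quietly summarize mpg
display _b[_cons] + _b[mpg]*r(mean)    // equals the sample mean of price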


More terminology

We can think of each observation as being made up of an
explained part and an unexplained part, y_i = \hat{y}_i + \hat{u}_i. We then
define the following:
\sum_{i=1}^{n} (y_i - \bar{y})^2 is the total sum of squares (SST)
\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 is the explained sum of squares (SSE)
\sum_{i=1}^{n} \hat{u}_i^2 is the residual sum of squares (SSR)
Then SST = SSE + SSR


Proof that SST = SSE + SSR

\sum (y_i - \bar{y})^2 = \sum [(y_i - \hat{y}_i) + (\hat{y}_i - \bar{y})]^2
= \sum [\hat{u}_i + (\hat{y}_i - \bar{y})]^2
= \sum \hat{u}_i^2 + 2 \sum \hat{u}_i (\hat{y}_i - \bar{y}) + \sum (\hat{y}_i - \bar{y})^2
= SSR + 2 \sum \hat{u}_i (\hat{y}_i - \bar{y}) + SSE
and we know that \sum \hat{u}_i (\hat{y}_i - \bar{y}) = 0, because the residuals
sum to zero and have zero sample covariance with the x_i (and hence with
the fitted values \hat{y}_i).

Goodness-of-fit

How do we think about how well our sample regression line fits
our sample data?
We can compute the fraction of the total sum of squares (SST) that
is explained by the model; call this the R-squared of the regression:
R^2 = \frac{SSE}{SST} = 1 - \frac{SSR}{SST}
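A minimal Stata sketch that builds SST, SSE, and SSR from their definitions and checks the R-squared (auto data, purely for illustration):

* SST = SSE + SSR, and R-squared = SSE/SST = 1 - SSR/SST should match e(r2).
sysuse auto, clear
quietly regress price mpg
predict double yhat, xb
predict double uhat, residuals
quietly summarize price
scalar ybar = r(mean)
generate double tss_i = (price - ybar)^2
generate double ess_i = (yhat - ybar)^2
generate double rss_i = uhat^2
quietly summarize tss_i
scalar SST = r(sum)
quietly summarize ess_i
scalar SSE = r(sum)
quietly summarize rss_i
scalar SSR = r(sum)
display "SST - (SSE + SSR) = " SST - (SSE + SSR)      // approximately zero
display "R-squared = " SSE/SST " = " 1 - SSR/SST " (regress reports " e(r2) ")"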


Unbiasedness of OLS
Assume the population model is linear in parameters as
y = \beta_0 + \beta_1 x + u
Assume we can use a random sample of size n,
{(x_i, y_i): i = 1, 2, . . . , n}, from the population model. Thus we
can write the sample model y_i = \beta_0 + \beta_1 x_i + u_i
Assume E(u|x) = 0 and thus E(u_i|x_i) = 0
Assume there is variation in the x_i
In order to think about unbiasedness, we need to rewrite our
estimator in terms of the population parameters.
Start with a simple rewrite of the formula as
\hat{\beta}_1 = \frac{\sum (x_i - \bar{x}) y_i}{s_x^2}, where
s_x^2 = \sum (x_i - \bar{x})^2

Unbiasedness of the OLS

\sum (x_i - \bar{x}) y_i = \sum (x_i - \bar{x})(\beta_0 + \beta_1 x_i + u_i)
= \sum (x_i - \bar{x})\beta_0 + \sum (x_i - \bar{x})\beta_1 x_i + \sum (x_i - \bar{x}) u_i
= \beta_0 \sum (x_i - \bar{x}) + \beta_1 \sum (x_i - \bar{x}) x_i + \sum (x_i - \bar{x}) u_i
where
\sum (x_i - \bar{x}) = 0, and
\sum (x_i - \bar{x}) x_i = \sum (x_i - \bar{x})^2 = s_x^2


Unbiasedness of OLS (cont)

so the numerator can be written as \beta_1 s_x^2 + \sum (x_i - \bar{x}) u_i, and
thus
\hat{\beta}_1 = \beta_1 + \frac{\sum (x_i - \bar{x}) u_i}{s_x^2}
Let d_i = (x_i - \bar{x}), so that
\hat{\beta}_1 = \beta_1 + \left(\frac{1}{s_x^2}\right) \sum d_i u_i, and then
E(\hat{\beta}_1) = \beta_1 + \left(\frac{1}{s_x^2}\right) \sum d_i E(u_i) = \beta_1

Unbiasedness Summary

The OLS estimates of \beta_1 and \beta_0 are unbiased.
Proof of unbiasedness depends on our four assumptions - if any
assumption fails, then OLS is not necessarily unbiased.
Remember unbiasedness is a description of the estimator - in a
given sample we may be "near" or "far" from the true
parameter.
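Unbiasedness is a statement about the average behaviour of the estimator across repeated samples, which can be illustrated with a small Monte Carlo experiment. The sketch below uses a hypothetical helper program (olssim) and assumed true values \beta_0 = 1 and \beta_1 = 0.5:

* Each replication draws a fresh sample from a known population model and
* records the OLS slope; its average over many replications should be close
* to the true beta1 = 0.5 (unbiasedness), even though any single estimate
* can be "near" or "far" from it.
capture program drop olssim
program define olssim, rclass
    drop _all
    set obs 100
    generate x = rnormal(0, 1)
    generate u = rnormal(0, 1)          // E(u|x) = 0 by construction
    generate y = 1 + 0.5*x + u
    quietly regress y x
    return scalar b1 = _b[x]
end
simulate b1 = r(b1), reps(500) seed(2015) nodots: olssim
summarize b1                            // mean of b1 should be close to 0.5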


Variance of the OLS Estimators

Now we know that the sampling distribution of our estimate is
centered around the true parameter.
We want to think about how spread out this distribution is.
It is much easier to think about this variance under an additional
assumption, so
Assume Var(u|x) = \sigma^2 (homoskedasticity)


Variance of OLS (cont)

Var(u|x) = E(u^2|x) - [E(u|x)]^2
E(u|x) = 0, so \sigma^2 = E(u^2|x) = E(u^2) = Var(u)
Thus \sigma^2 is also the unconditional variance, called the error
variance; \sigma, the square root of the error variance, is called the
standard deviation of the error.
We can say: E(y|x) = \beta_0 + \beta_1 x and Var(y|x) = \sigma^2


Homoskedastic Case


Heteroskedastic Case


Variance of OLS (cont)

Var(\hat{\beta}_1) = Var\left(\beta_1 + \frac{1}{s_x^2} \sum d_i u_i\right)
= \left(\frac{1}{s_x^2}\right)^2 Var\left(\sum d_i u_i\right)
= \left(\frac{1}{s_x^2}\right)^2 \sum d_i^2 Var(u_i)
= \left(\frac{1}{s_x^2}\right)^2 \sum d_i^2 \sigma^2 = \sigma^2 \left(\frac{1}{s_x^2}\right)^2 \sum d_i^2
= \sigma^2 \left(\frac{1}{s_x^2}\right)^2 s_x^2 = \frac{\sigma^2}{s_x^2} = Var(\hat{\beta}_1)


Variance of OLS Summary

The larger the error variance, \sigma^2, the larger the variance of the
slope estimate.
The larger the variability in the x_i, the smaller the variance of the
slope estimate.
As a result, a larger sample size should decrease the variance of
the slope estimate.
A problem is that the error variance, \sigma^2, is unknown.


Estimating the Error Variance

We don't know what the error variance, \sigma^2, is, because we don't
observe the errors, u_i.
What we observe are the residuals, \hat{u}_i.
We can use the residuals to form an estimate of the error
variance.


Error Variance Estimate (cont)

\hat{u}_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i
= (\beta_0 + \beta_1 x_i + u_i) - \hat{\beta}_0 - \hat{\beta}_1 x_i
= u_i - (\hat{\beta}_0 - \beta_0) - (\hat{\beta}_1 - \beta_1) x_i
Then an unbiased estimator of \sigma^2 is
\hat{\sigma}^2 = \frac{1}{n-2} \sum \hat{u}_i^2 = SSR/(n - 2)
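After regress, this estimate can be read off the stored results; a minimal Stata sketch (auto data, purely illustrative):

* sigma-hat squared = SSR/(n-2); its square root is what -regress-
* reports as "Root MSE".
sysuse auto, clear
quietly regress price mpg
display "sigma2-hat = " e(rss)/e(df_r)
display "sigma-hat  = " sqrt(e(rss)/e(df_r)) "   (Root MSE: " e(rmse) ")"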


Error Variance Estimate (cont)

\hat{\sigma} = \sqrt{\hat{\sigma}^2} is called the standard error of the regression.
Recall that sd(\hat{\beta}_1) = \sigma / \sqrt{\sum (x_i - \bar{x})^2};
if we substitute \hat{\sigma} for \sigma, then we have the standard error of \hat{\beta}_1:
se(\hat{\beta}_1) = \hat{\sigma} / \sqrt{\sum (x_i - \bar{x})^2}
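The standard error of the slope can likewise be recomputed from this formula; a minimal Stata sketch (auto data, again purely illustrative):

* se(beta1-hat) = sigma-hat / sqrt(sum of squared deviations of x),
* which should match the standard error reported by -regress-.
sysuse auto, clear
quietly regress price mpg
quietly summarize mpg
generate double dx2 = (mpg - r(mean))^2
quietly summarize dx2
display "se(b1) from formula = " e(rmse)/sqrt(r(sum))
display "se(b1) reported     = " _se[mpg]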


The Gauss-Markov Assumptions (GM) for Simple Regression
Assumption SLR. 1

Linear in Parameters

In the population model, the dependent variable, y , is related to


the independent variable, x, and the error (or disturbance), u, as
y = \beta_0 + \beta_1 x + u
where \beta_0 and \beta_1 are the population intercept and slope parameters, respectively.


The GM Assumption (cont)

Assumption SLR. 2

Random Sampling

We have a random sample of size n, {(x_i, y_i): i = 1, 2, . . . , n}, following the population model in Assumption SLR. 1.


The GM Assumption (cont)

Assumption SLR. 3

Sample Variation in the Explanatory Variable

The sample outcomes on x, namely, {xi , i = 1, 2, . . . , n}, are


not all the same value.


The GM Assumption (cont)

Assumption SLR. 4

Zero Conditional Mean

The error u has an expected value of zero given any value of the
explanatory variable. In other words,
E(u|x) = 0.


The GM Assumption (cont)

Assumption SLR. 5

Homoskedasticity

The error u has the same variance given any value of the
explanatory variable. In other words,
Var(u|x) = \sigma^2.


Stata code

reg lwage exp wks occ

      Source |       SS       df       MS              Number of obs =    4165
-------------+------------------------------           F(  3,  4161) =  266.12
       Model |  142.774178     3  47.5913928           Prob > F      =  0.0000
    Residual |  744.130723  4161  .178834589           R-squared     =  0.1610
-------------+------------------------------           Adj R-squared =  0.1604
       Total |  886.904902  4164  .212993492           Root MSE      =  .42289

------------------------------------------------------------------------------
       lwage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         exp |   .0100694      .0006    16.78   0.000     .0088931    .0112456
         wks |   .0058775   .0012784     4.60   0.000     .0033711    .0083839
         occ |   -.311163   .0131532   -23.66   0.000    -.3369502   -.2853758
       _cons |   6.360351    .062024   102.55   0.000     6.238751    6.481951
------------------------------------------------------------------------------

