
To see whether the estimate differs significantly from the value 1, an upper one-sided t-test
was undertaken. Under the null hypothesis H0: beta = 1, the value of the test statistic is t = 0.44, which is
smaller than the critical value at the 5% significance level
(t_crit = 1.67). Hence, the null hypothesis cannot be rejected. In conclusion, the data do not speak
against the hypothesis that the GM stock earns an excess return equal to that of the market
portfolio.
Example 2:
In the theoretical CAPM model, the excess return of the GM stock is directly proportional to the
excess return of the market portfolio. A test of the null hypothesis
H0: alpha = 0 in the regression model er_GM_t = alpha + beta * er_M_t + u_t is therefore a test of the validity of the CAPM model. OLS
estimation yields an estimate alpha_hat = 0.85 and a t-ratio
t = 0.71 with p-value p = 0.48 (see OLS output in Table 1).
Since p > 0.1 and p/2 > 0.1, the null hypothesis cannot be rejected with either a two-sided or a lower
one-sided t-test at the 10% significance level. Hence, the data support the CAPM model.
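The two-sided and one-sided p-values used above can be computed from the t-ratio alone. The t-ratio 0.71 is from the text; the degrees of freedom are a hypothetical stand-in, since T and k for this example are not reported in the excerpt:

```python
from scipy.stats import t as t_dist

# t-ratio from the text; df = 60 is an illustrative assumption
t_ratio, df = 0.71, 60

# Two-sided test of H0: alpha = 0: p = 2 * P(T > |t|)
p_two_sided = 2 * t_dist.sf(abs(t_ratio), df)    # roughly 0.48, as in the text

# Lower one-sided test (H1: alpha < 0): p = P(T <= t)
p_lower = t_dist.cdf(t_ratio, df)

# Neither p-value is below 0.10, so H0 is not rejected at the 10% level
print(p_two_sided > 0.10 and p_lower > 0.10)     # True
```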
Frequent misunderstandings:
The exercise of computing the average of the OLS residuals was just meant to illustrate a
property of OLS estimation: OLS estimation ensures that the sum of the residuals equals zero if a
constant is included in the regression equation.
The result is not relevant in a paper.
Don't mix up errors and residuals! The true errors are unobservable; the residuals are the
estimated errors. Each residual at time point t is computed as the difference between the actual value and the
fitted value of the dependent variable at time point t.
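Both points can be seen in a few lines of code. The data below are simulated purely for illustration; the sum-to-zero property holds for any sample as long as the regression includes a constant:

```python
import numpy as np

# Illustrative simulated data; the property below holds for any data set
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(size=50)

# OLS with a constant: regress y on [1, x]
X = np.column_stack([np.ones_like(x), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# residual = actual value minus fitted value
residuals = y - X @ beta_hat
print(residuals.sum())   # numerically zero, by the OLS normal equations
```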
The unadjusted coefficient of determination (R^2) measures the percentage of variation in the
actual values y_t that can be explained by the variation in the fitted values y_hat_t.
Note: a high R^2 does not mean that the model is correctly specified, nor that the independent
variable is causal for the dependent variable. Don't over-interpret this value!
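The definition of the unadjusted R^2 can be sketched as follows, again on simulated data. With a constant in the regression, R^2 equals the squared correlation between actual and fitted values, which is exactly the "variation explained by the fitted values" reading:

```python
import numpy as np

# Simulated data for illustration only
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)

X = np.column_stack([np.ones_like(x), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta_hat
resid = y - fitted

# Unadjusted R^2 = 1 - SSR / SST
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# With a constant included, this equals the squared correlation
# between actual and fitted values
r2_corr = np.corrcoef(y, fitted)[0, 1] ** 2
print(abs(r2 - r2_corr) < 1e-10)   # True
```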
You may assume that the reader is familiar with the construction of an F-test or t-test and with
the OLS procedure.
A statistically significant estimate usually means that the estimate is significantly different
from zero.

The standard error of the regression is the square root of the degrees-of-freedom-corrected
sample variance of the residuals, SSR/(T - k). It is measured in the same units as the dependent
variable; hence the size of the standard error of the regression is not normalized.
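This can be checked directly against the gretl output for Model 1 below (SSR = 1448.181, T = 169, k = 4 coefficients):

```python
import math

# Values reported in the Model 1 output below
ssr, T, k = 1448.181, 169, 4

# S.E. of regression = sqrt(SSR / (T - k))
ser = math.sqrt(ssr / (T - k))
print(round(ser, 4))   # approximately 2.9626, matching the table
```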

Model 1: OLS, using observations 2000:07-2014:07 (T = 169)
Dependent variable: er_Telcm

              Coefficient   Std. Error   t-ratio    p-value
  const       -0.0311218    0.233589     -0.1332    0.89417
  Mkt_RF       1.06076      0.0535971    19.7913   <0.00001  ***
  SMB         -0.30616      0.0950321    -3.2217    0.00154  ***
  HML         -0.159014     0.0762263    -2.0861    0.03851  **

Mean dependent var    0.141657   S.D. dependent var   5.547536
Sum squared resid     1448.181   S.E. of regression   2.962576
R-squared             0.719900   Adjusted R-squared   0.714807
F(3, 165)             141.3583   P-value(F)           2.22e-45
Log-likelihood       -421.3206   Akaike criterion     850.6411
Schwarz criterion     863.1607   Hannan-Quinn         855.7218
rho                   0.006802   Durbin-Watson        1.970185
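The summary statistics in the table are internally consistent, which is a useful sanity check when reading OLS output. As a sketch, the adjusted R-squared and the information criteria can be recomputed from the reported R-squared and log-likelihood (the formulas below are the standard ones gretl uses):

```python
import math

# Reported values for Model 1 (T = 169 observations, k = 4 coefficients)
T, k = 169, 4
r2, loglik = 0.719900, -421.3206

# Adjusted R^2 penalizes R^2 for the number of regressors
adj_r2 = 1.0 - (1.0 - r2) * (T - 1) / (T - k)   # ~0.714807, as reported

# Information criteria: -2*logL plus a model-size penalty
aic = -2.0 * loglik + 2 * k                     # ~850.64, Akaike criterion
bic = -2.0 * loglik + k * math.log(T)           # ~863.16, Schwarz criterion
```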

1. Based on our observations, we don't see clear autocorrelation in the residuals of the linear
regression estimated by OLS.
2. Done.
3. Added a series of lags of the residuals.
4. Based on the scatter plot of the residual against its first lag, we don't see any positive or
negative autocorrelation.
5. The Durbin-Watson statistic is close to 2, so there is little evidence of autocorrelation.
6. Durbin-Watson statistic = 1.97018
p-value = 0.415674
Regression model for the errors: u_t = rho * u_{t-1} + e_t
H0: rho = 0 (no first-order autocorrelation)
H1: rho > 0
Based on the p-value, we cannot reject the null hypothesis, and there is little evidence of
autocorrelation in the residuals.
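The checks in steps 4-6 can be sketched as below. The residuals here are simulated stand-ins; in the exercise they are the OLS residuals of the model above:

```python
import numpy as np

# Simulated stand-in for the residual series uhat_telcm (T = 169)
rng = np.random.default_rng(2)
u = rng.normal(size=169)

# Slope from regressing u_t on u_{t-1} without a constant: the sample rho
rho_hat = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])

# Durbin-Watson statistic: sum of squared changes over sum of squares.
# DW is approximately 2 * (1 - rho), so DW near 2 means little evidence
# of first-order autocorrelation.
dw = np.sum(np.diff(u) ** 2) / np.sum(u ** 2)
print(rho_hat, dw)
```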
7. Breusch-Godfrey test:
8. Model 3: OLS, using observations 2001:07-2014:07 (T = 157)
9. Dependent variable: uhat_telcm

                   Coefficient   Std. Error   t-ratio   p-value
  const             0.0325065    0.210956      0.1541   0.87776
  Mkt_RF            0.0550356    0.0542615     1.0143   0.31219
  SMB              -0.0603617    0.0958493    -0.6298   0.52987
  HML               0.092755     0.0897204     1.0338   0.30299
  uhat_telcm_1      0.049873     0.0837775     0.5953   0.55259
  uhat_telcm_2      0.0318388    0.0838669     0.3796   0.70479
  uhat_telcm_3      0.0821667    0.0807825     1.0171   0.31083
  uhat_telcm_4     -0.157559     0.079874     -1.9726   0.05050  *
  uhat_telcm_5     -0.0454532    0.0810266    -0.5610   0.57571
  uhat_telcm_6      0.141285     0.0767059     1.8419   0.06759  *
  uhat_telcm_7     -0.0669618    0.0745709    -0.8980   0.37074
  uhat_telcm_8      0.0334666    0.0758154     0.4414   0.65958
  uhat_telcm_9     -0.15107      0.075228     -2.0082   0.04654  **
  uhat_telcm_10    -0.0104565    0.0760187    -0.1376   0.89079
  uhat_telcm_11    -0.00600845   0.0730837    -0.0822   0.93459
  uhat_telcm_12    -0.14354      0.0714096    -2.0101   0.04633  **

Mean dependent var    0.061613   S.D. dependent var   2.657898
Sum squared resid     948.4303   S.E. of regression   2.593541
R-squared             0.139394   Adjusted R-squared   0.047841
F(15, 141)            1.522540   P-value(F)           0.104671
Log-likelihood       -363.9605   Akaike criterion     759.9210
Schwarz criterion     808.8209   Hannan-Quinn         779.7810
rho                   0.013341   Durbin-Watson        2.023987
Breusch-Godfrey test for first-order autocorrelation
OLS, using observations 2000:07-2014:07 (T = 169)
Dependent variable: uhat

             coefficient   std. error   t-ratio    p-value
  ---------------------------------------------------------
  const      0.000208841   0.234307     0.0008913  0.9993
  Mkt_RF     0.000378678   0.0539329    0.007021   0.9944
  SMB        0.000725982   0.0956795    0.007588   0.9940
  HML        0.000104583   0.0764659    0.001368   0.9989
  uhat_1     0.00687387    0.0785463    0.08751    0.9304

Unadjusted R-squared = 0.000047

Test statistic: LMF = 0.007659,
with p-value = P(F(1, 164) > 0.00765862) = 0.93
Alternative statistic: TR^2 = 0.007892,
with p-value = P(Chi-square(1) > 0.00789175) = 0.929
Ljung-Box Q' = 0.0079386,
with p-value = P(Chi-square(1) > 0.0079386) = 0.929
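The Breusch-Godfrey test above can be sketched by hand: regress the OLS residuals on the original regressors plus the lagged residual, then use LM = T * R^2 of that auxiliary regression, which is chi-square(1) under H0 of no first-order autocorrelation. The data below are simulated stand-ins for the Telcm series:

```python
import numpy as np
from scipy.stats import chi2

# Simulated stand-in for the data set: a constant plus three factors
rng = np.random.default_rng(3)
T = 169
X = np.column_stack([np.ones(T), rng.normal(size=(T, 3))])
y = X @ np.array([0.0, 1.0, -0.3, -0.15]) + rng.normal(size=T)

# Step 1: OLS residuals of the original regression
u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: auxiliary regression of u_t on X_t and u_{t-1}
# (drop the first observation to align the lag)
Z = np.column_stack([X[1:], u[:-1]])
e = u[1:] - Z @ np.linalg.lstsq(Z, u[1:], rcond=None)[0]
r2_aux = 1.0 - (e @ e) / ((u[1:] - u[1:].mean()) @ (u[1:] - u[1:].mean()))

# Step 3: LM statistic and its chi-square(1) p-value; a large p-value,
# as in the gretl output above, means no evidence of autocorrelation
lm = (T - 1) * r2_aux
p_value = chi2.sf(lm, df=1)
print(lm, p_value)
```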
