
Granger Causality

It is a statistical hypothesis test for determining whether one time series is useful in forecasting another, taking the lead-lag effect into account. Regression reflects mere correlation, but Clive Granger (Nobel Prize winner) argued that a particular set of tests can be interpreted as revealing something about causality (i.e. a cause-and-effect relation). The test is run in both directions, so it examines the cause-effect relation from both sides. If X Granger-causes Y, then past values of X should contain information that helps predict Y above and beyond the information contained in past values of Y alone.

Assumptions:
o Mean and variance of the data do not change over time (the data is stationary).
o The data can be explained by a linear model.

Limitation: if both X and Y are driven by a common third factor, one might still accept the alternative hypothesis of Granger causality.
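As an illustration, the sketch below runs the test on simulated data with the statsmodels library; the series, coefficient and lag choice are all hypothetical.

```python
# A minimal sketch of a Granger causality test using statsmodels on simulated data.
# All series, coefficients and lag choices here are hypothetical.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.normal(size=200)                          # candidate "cause" series X
y = 0.6 * np.roll(x, 1) + rng.normal(size=200)    # Y depends on lagged X plus noise

# Column order matters: the test asks whether the 2nd column (X) Granger-causes the 1st (Y).
data = np.column_stack([y, x])[1:]                # drop the first row distorted by np.roll
results = grangercausalitytests(data, maxlag=2)

# Small p-values on the F-test reject H0 ("X does not Granger-cause Y").
print(results[1][0]["ssr_ftest"][1])              # p-value at lag 1
```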

Schwarz Criterion
Also known as the Bayesian Information Criterion (BIC). It is a criterion for model selection among a finite set of models and is based on the likelihood function. When fitting a model, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. The BIC resolves this problem by introducing a penalty term for the number of parameters in the model. It was developed by Gideon E. Schwarz. The lower it is, the better.
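For reference, the usual form of the criterion is BIC = k*ln(n) - 2*ln(L), where k is the number of parameters, n the number of observations and L the maximised likelihood. The sketch below computes it for two hypothetical fits; the log-likelihood values are illustrative, not from any real model.

```python
# A minimal sketch of BIC = k*ln(n) - 2*ln(L). The log-likelihoods below are
# illustrative, not taken from any real fit; only the ordering across models matters.
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Schwarz / Bayesian Information Criterion; lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Model B adds a parameter for a tiny likelihood gain, so its BIC is worse (higher).
print(bic(log_likelihood=-120.0, n_params=3, n_obs=100))  # model A
print(bic(log_likelihood=-119.5, n_params=4, n_obs=100))  # model B
```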

Akaike Information Criterion (AIC)


It is a measure of the relative goodness of fit of a statistical model, developed by Hirotugu Akaike. It provides a means for model selection but says nothing about how well a model fits the data in an absolute sense. Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value.
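The standard formula is AIC = 2k - 2*ln(L); the sketch below applies it to the same illustrative log-likelihoods used in the BIC example.

```python
# A minimal sketch of AIC = 2k - 2*ln(L), using illustrative log-likelihood values.
def aic(log_likelihood, n_params):
    """Akaike Information Criterion; the candidate with the minimum AIC is preferred."""
    return 2.0 * n_params - 2.0 * log_likelihood

print(aic(log_likelihood=-120.0, n_params=3))  # 246.0 (model A)
print(aic(log_likelihood=-119.5, n_params=4))  # 247.0 (model B): the extra parameter is not worth it
```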

Adjusted R Square
It is the proportion of variance in the dependent variable explained by the independent variables, after adjusting for the degrees of freedom associated with the sums of squares. It increases only if the new variables improve the model more than would be expected by chance. If the adjusted R square is significantly lower than the R square, this normally means that some explanatory variables are missing.

In our model, the difference between the R square and the adjusted R square is decreasing, signifying an improvement in the model.
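For reference, a common form of the adjustment is adjusted R^2 = 1 - (1 - R^2)*(n - 1)/(n - k - 1), where n is the number of observations and k the number of regressors. The sketch below uses illustrative values only.

```python
# A minimal sketch of adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
# where n is the number of observations and k the number of regressors (illustrative values).
def adjusted_r2(r2, n_obs, n_regressors):
    return 1.0 - (1.0 - r2) * (n_obs - 1) / (n_obs - n_regressors - 1)

print(adjusted_r2(r2=0.80, n_obs=50, n_regressors=3))   # ~0.787
print(adjusted_r2(r2=0.81, n_obs=50, n_regressors=10))  # ~0.761: extra regressors barely help
```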

Hannan-Quinn Criterion
It is a criterion for model selection, used as an alternative to the BIC and AIC. The lower it is, the better.
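For reference, a common form is HQC = -2*ln(L) + 2*k*ln(ln(n)); the sketch below reuses the illustrative inputs from the BIC example.

```python
# A minimal sketch of the Hannan-Quinn criterion HQC = -2*ln(L) + 2*k*ln(ln(n)),
# with the same illustrative inputs used for the BIC example.
import numpy as np

def hqc(log_likelihood, n_params, n_obs):
    """Penalty grows like ln(ln(n)), between AIC's constant 2 and BIC's ln(n)."""
    return -2.0 * log_likelihood + 2.0 * n_params * np.log(np.log(n_obs))

print(hqc(log_likelihood=-120.0, n_params=3, n_obs=100))  # model A
print(hqc(log_likelihood=-119.5, n_params=4, n_obs=100))  # model B
```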

Durbin-Watson Statistics
It is a test statistic used to detect the presence of autocorrelation (a relationship between values separated from each other by a given time lag) in the residuals from a regression analysis. It was developed by James Durbin and Geoffrey Watson.

H0: the errors are serially independent (not autocorrelated).
H1: the errors follow a first-order autoregressive process.

If D is the Durbin-Watson score:
o D = 2: no autocorrelation.
o D < 2: there is evidence of positive serial correlation.
o D always lies between 0 and 4.

As a rough rule of thumb, if D is less than 1 there may be cause for alarm. Small values of D indicate that successive error terms are close in value to one another. The test assumes that the error terms are stationary and normally distributed with mean zero.
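The statistic itself is D = sum((e_t - e_{t-1})^2) / sum(e_t^2) over the residuals e_t. The sketch below computes it on simulated residuals and cross-checks against the statsmodels helper; the residuals are hypothetical.

```python
# A minimal sketch of D = sum((e_t - e_{t-1})^2) / sum(e_t^2) on simulated residuals,
# cross-checked against the statsmodels helper. The residuals here are hypothetical.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
resid = rng.normal(size=200)               # residuals from some fitted regression

d_manual = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(d_manual, durbin_watson(resid))      # both close to 2 for uncorrelated errors
```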

Root Mean Squared Error


It is a measure of the difference between the values predicted by a model or an estimator and the values actually observed, and is a good measure of accuracy. The individual differences between estimated and actual values are the residuals, and the RMSE serves to aggregate them into a single measure of predictive power.
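A short sketch of RMSE = sqrt(mean((predicted - observed)^2)) on made-up values:

```python
# A minimal sketch of RMSE = sqrt(mean((predicted - observed)^2)) on made-up values.
import numpy as np

observed = np.array([3.0, 5.0, 2.5, 7.0])
predicted = np.array([2.8, 5.4, 2.9, 6.5])

rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(rmse)  # a single aggregate error, in the same units as the data
```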

Theil Inequality Coefficient


Also known as Theil's U, it provides a measure of how well a time series of estimated values compares to a corresponding time series of observed values. This is useful for comparing different forecast methods. The closer the value of U is to zero, the better the forecast method. A value of 1 means the forecast is no better than a naive guess.
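One common form of the coefficient (often labelled U1) is the RMSE divided by the sum of the root-mean-square values of the forecasts and the observations, which is bounded between 0 and 1, with 0 a perfect forecast. The sketch below reuses the made-up series from the RMSE example.

```python
# A minimal sketch of one common form of the coefficient (often labelled U1):
# U = RMSE / (sqrt(mean(predicted^2)) + sqrt(mean(observed^2))), bounded between 0 and 1.
# Values reuse the made-up series from the RMSE example.
import numpy as np

observed = np.array([3.0, 5.0, 2.5, 7.0])
predicted = np.array([2.8, 5.4, 2.9, 6.5])

rmse = np.sqrt(np.mean((predicted - observed) ** 2))
u = rmse / (np.sqrt(np.mean(predicted ** 2)) + np.sqrt(np.mean(observed ** 2)))
print(u)  # 0 would be a perfect forecast
```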
