Tatjana Stadnitski
Universität Ulm
Original Article
Stationarity
A process is said to be stationary if its mean, variance, and
covariance do not change over time. If that is not the case,
we deal with a nonstationary process. If the goal of analysis
consists in measuring the effects of an intervention, as in an
interrupted time-series experiment, or in forecasting future
values of the series, stationarity (i.e., stability) of the data
under consideration is required.
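These defining properties can be checked numerically. The following Python sketch (not part of the original article; the process parameters are illustrative) simulates many replications of a stationary AR(1) process and of a random walk and compares their variances across replications at an early and a late time point: the AR(1) variance settles near a constant, while the random-walk variance keeps growing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rep, T, phi = 5000, 200, 0.5

u = rng.standard_normal((n_rep, T))
ar1 = np.zeros((n_rep, T))       # stationary AR(1): Y_t = 0.5*Y_{t-1} + u_t
walk = np.cumsum(u, axis=1)      # random walk: Y_t = Y_{t-1} + u_t
for t in range(1, T):
    ar1[:, t] = phi * ar1[:, t - 1] + u[:, t]

# Cross-sectional variance across replications at t = 50 and t = 199:
# the AR(1) variance stays near 1/(1 - phi^2), the walk variance grows with t
print(ar1[:, 50].var(), ar1[:, 199].var())
print(walk[:, 50].var(), walk[:, 199].var())
```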
Many psychological time series have a time-varying mean, a time-varying variance, or both. For further analysis, such time series must be transformed to make them stationary. The transformation method depends on the cause of nonstationarity. The consequences of a false treatment can be rather serious; unfortunately, this is not emphasized in the time-series textbooks used among psychologists. Some descriptions even suggest that two popular methods for stabilizing nonstationary series, differencing and ordinary least squares regression, are interchangeable and that the choice of transformation technique is simply a matter of the researcher's preference (see, e.g., Warner, 1998, p. 39). One of the objectives of this paper is to emphasize the importance of proper stationarity transformation for empirical time-series research.
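To see why the two techniques are not interchangeable, it helps to write them out. A minimal Python sketch (illustrative, with an assumed trend-stationary series) applies both transformations to the same data; they produce different series with different properties, so the choice cannot be a matter of mere preference.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
y = 0.1 * t + rng.standard_normal(100)  # assumed trend-stationary series

# Transformation 1: first differencing, dY_t = Y_t - Y_{t-1}
diffed = np.diff(y)

# Transformation 2: OLS detrending, residuals from regressing Y_t on t
slope, intercept = np.polyfit(t, y, 1)
detrended = y - (slope * t + intercept)

# The two "stationarized" series are different objects:
print(diffed.mean())     # the trend slope survives as a nonzero mean
print(detrended.mean())  # zero mean by construction of OLS residuals
```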
Methodology 2010; Vol. 6(2):83–92
DOI: 10.1027/1614-2241/a000009
[Figure 1. Three simulated nonstationary processes (T = 100) with their ACFs and PACFs (lags 1–10): (a) pure random walk, Yt = Yt−1 + ut; (b) random walk with drift, Yt = 2.0 + Yt−1 + ut; (c) trend-stationary process, Yt = 0.1t + 0.2Yt−1 + at with at = 0.5at−1 + ut; in all panels ut ~ IID(0, σ²).]
Nonstationary Processes
Figure 1 shows the three most common nonstationary processes and their autocorrelation and partial autocorrelation functions (ACF and PACF, respectively). The process

Yt = Yt−1 + ut, with ut ~ IIDN(0, σ²),   (1)

is called a pure random walk. The mean of this process is equal to its initial value, but its variance (tσ²) increases indefinitely over time. A pure random walk process can also be represented as the sum of random shocks,

Yt = u1 + u2 + … + ut.   (2)

As a result, the impact of a particular shock does not dissipate, and the random walk remembers the shock forever. That is why a random walk is said to have an infinite memory. If a constant term is present in the equation,

Yt = α + Yt−1 + ut,

the process is called a random walk with drift. Here ΔYt = Yt − Yt−1 denotes the first difference. In terms of the lag operator L (LYt = Yt−1, L²Yt = Yt−2, and so on), a random walk satisfies (1 − L)Yt = ut; setting (1 − L) = 0 yields L = 1, hence the name unit root.
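The equivalence of the recursive form (1) and the shock-sum form (2) is easy to confirm numerically; the following Python sketch (illustrative, not from the article) builds the same random walk both ways.

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(100)

# Recursive form (1): Y_t = Y_{t-1} + u_t, starting from Y_0 = u_0
y_rec = np.empty(100)
y_rec[0] = u[0]
for t in range(1, 100):
    y_rec[t] = y_rec[t - 1] + u[t]

# Shock-sum form (2): Y_t = u_1 + ... + u_t, so every shock persists forever
y_sum = np.cumsum(u)

print(np.allclose(y_rec, y_sum))  # True: the two forms describe the same process
```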
Consequences of Inappropriate Transformations
Chan, Hayya, and Ord (1977) analyzed the effects of incorrect transformation on the autocorrelation and the power spectral density functions. These authors showed that the ACF of residuals from a linear regression of a random walk series on time is not stationary, and that the residuals tend to exhibit cycles of increasing length and amplitude around the fitted trend line as sample size gets larger. That is why erroneous detrending of DS series is also called underdifferencing. The residuals of inappropriately differenced TS series follow a noninvertible moving-average process. This is known as overdifferencing. There has been some debate in the literature arguing that overdifferencing is a less serious error than underdifferencing (see Maddala & Kim, 1998, for an overview).
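The overdifferencing effect is easy to reproduce. In this Python sketch (illustrative parameters, not from the article), a trend-stationary series with white-noise errors is differenced; the differenced noise et − et−1 is an MA(1) process with a unit root in its moving-average polynomial, and its lag-1 autocorrelation is close to the theoretical value −0.5.

```python
import numpy as np

rng = np.random.default_rng(3)
e = rng.standard_normal(20000)
y = 0.05 * np.arange(20000) + e   # TS series: linear trend plus white noise

d = np.diff(y)                    # overdifferencing: d_t = 0.05 + e_t - e_{t-1}
d = d - d.mean()
r1 = (d[1:] * d[:-1]).sum() / (d * d).sum()  # sample lag-1 autocorrelation
print(r1)  # near -0.5, the signature of a noninvertible MA(1)
```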
Nelson and Kang (1981, 1984) detected artificial periodicity in inappropriately detrended time series.
© 2010 Hogrefe Publishing
data series;
  /* random walk with MA(1) shocks: y(t) = y(t-1) + a(t), a(t) = u(t) - 0.6*u(t-1) */
  y=0; y1=0; a=0; u=0; a1=0; teta=0.6;
  keep t y;
  do t=-50 to 100;           /* t = -50 to 0 serves as burn-in */
    u=rannor(59837);
    a=u-teta*a1; a1=u;       /* MA(1) shock */
    y=y1+a; y1=y;            /* accumulate the random walk */
    if t gt 0 then output;   /* keep only t = 1 to 100 */
  end;
run;
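For readers without SAS, the same data-generating process can be sketched in Python (a hypothetical port of the data step: a random walk yt = yt−1 + at driven by MA(1) shocks at = ut − 0.6ut−1, with an initial stretch of draws discarded as burn-in).

```python
import numpy as np

rng = np.random.default_rng(59837)  # seed chosen only to echo the SAS example
theta = 0.6
y1 = a1 = 0.0
ys = []
for t in range(-50, 101):           # burn-in for t <= 0, keep t = 1..100
    u = rng.standard_normal()
    a = u - theta * a1              # MA(1) shock: a_t = u_t - 0.6*u_{t-1}
    a1 = u
    y = y1 + a                      # random walk driven by the MA(1) shocks
    y1 = y
    if t > 0:
        ys.append(y)

series = np.array(ys)
print(len(series))  # 100
```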
When testing for unit roots, it is crucial to specify the null and alternative hypotheses appropriately. For example, if the data are not growing, the hypotheses should reflect this. Therefore, the first step of the analysis is to examine whether the observed series exhibits an increasing or decreasing trend. Figure 2a shows that there is no apparent positive or negative trend in the generated series. Slow decay of the ACF suggests that the process may be nonstationary, so we have to decide between H0: yt is I(1) without drift and H1: yt is I(0) with nonzero mean. Hence the test regression is ΔYt = α + δYt−1 + at.
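This test regression can be sketched in Python (illustrative re-implementation; it produces the τ statistic only, compared against the approximate 5% Dickey-Fuller critical value of about −2.89 for the single-mean case, rather than exact p-values).

```python
import numpy as np

def df_tau(y):
    """t-ratio of delta in the regression  dY_t = alpha + delta*Y_{t-1} + a_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(4)
walk = np.cumsum(rng.standard_normal(100))   # I(1) without drift: H0 true
ar1 = np.zeros(100)
for t in range(1, 100):                      # I(0) with nonzero mean: H1 true
    ar1[t] = 1.0 + 0.5 * ar1[t - 1] + rng.standard_normal()

print(df_tau(walk))  # under H0, tau usually lies above -2.89
print(df_tau(ar1))   # stationary AR(1): tau tends to be strongly negative
```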
An important practical issue for the correct implementation of the ADF test is the specification of the order of serial correlation in the test regression.
[Figure 2. (a) The simulated series (T = 100) with its ACF and PACF (lags 1–12); (b) the differenced series Y(1) with its ACF and PACF (lags 1–12).]
Table 2. SAS output for the ADF unit root test for the simulated yt = yt−1 + ut − 0.6ut−1 series

Type           Tau      Pr < Tau
Single mean    −1.89    .3368

Lag-selection statistics:
|t|     Pr > |t|   AIC     BIC
0.56    .5781      9.75    14.19
0.85    .3954      8.28    12.20
1.8     .0747      5.99    9.48
1.4     .3025      7.17    10.28
Table 3. Percentage of significant decisions of the ADF test at the nominal 5% level of significance for DS series with DGP: Yt = α + Yt−1 + at; T = 100; 1,000 replications

             Model II: H0: δ = 0 (ρ = 1), α ≠ 0      Model III: H0: δ = 0 (ρ = 1), α ≠ 0, β ≠ 0
at\Lag        0     1     2     3     4     5          0     1     2     3     4     5
α = .2
(0, 0)      2.6   2.4   2.0   2.3   3.0   2.7        4.5   5.7   5.0   4.6   4.5   5.3
(1, 0)      4.2   4.0   4.2   4.7   5.1   4.7        5.5   5.6   5.5   4.5   5.1   5.5
(0, 1)      0.2   0.8   0.7   0.6   0.8   0.9       29.0  12.0   7.8   6.2   6.1   4.7
α = .5
(0, 0)      0.6   0.2   0.4   0.6   0.8   0.8        4.5   5.7   5.0   4.6   4.5   5.3
(1, 0)      1.9   1.6   1.7   2.1   2.5   2.3        5.5   5.6   5.5   4.5   5.1   5.5
(0, 1)      0     0.4   0.5   0.3   0.5   0.2       29.0  12.0   7.8   6.2   6.1   4.7
Table 4. Percentage of significant decisions of the ADF test at the nominal 5% level of significance for TS series with DGP: Yt = 0.1t + ρYt−1 + at; T = 100; 1,000 replications

             Model II: H0: δ = 0 (ρ = 1), α ≠ 0      Model III: H0: δ = 0 (ρ = 1), α ≠ 0, β ≠ 0
at\Lag        0     1     2     3     4     5          0     1     2     3     4     5
ρ = 0.0
(0, 0)      0     0     0     0     0     0         100   100    99.1  96.4  86.4  69.0
(1, 0)      0     0     0     0     0     0          99.9  97.2  89.3  75.7  62.6  47.3
(0, 1)      0     0     0     0     0     0          99.9  99.0  98.0  89.9  79.1  62.7
ρ = 0.2
(0, 0)      0     0     0     0     0     0         100   100    98.4  92.1  80.4  63.7
(1, 0)      0     0     0     0     0     0          99.5  94.5  84.6  69.6  57.4  43.4
(0, 1)      0.2   0     0     0     0     0          99.9  98.2  96.6  84.4  72.8  56.9
ρ = 0.5
(0, 0)      0     0     0     0     0     0          99.9  97.2  89.3  75.7  62.6  47.3
(1, 0)      1.9   0.2   0.3   0.1   0     0          92.0  78.9  68.2  52.1  43.6  32.6
(0, 1)      2.8   0     0     0     0     0          99.7  87.1  86.4  65.9  56.8  44.8
ρ = 0.8
(0, 0)      0     0     0     0     0     0          56.8  41.8  35.6  29.8  24.0  18.0
(1, 0)     11.5   7.9   6.0   4.9   4.5   3.5        38.6  30.7  27.2  21.4  18.6  15.7
(0, 1)     11.9   2.2   2.8   1.9   2.7   1.3        76.2  28.0  39.5  24.8  25.8  20.9
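A reduced-scale Monte Carlo in the same spirit as Tables 3 and 4 can be sketched in Python (illustrative assumptions: 500 replications instead of 1,000, white-noise errors, no augmentation lags, and the approximate single-mean 5% critical value of −2.89 in place of exact SAS values). For a DS series with drift, the Model II test tends to reject less often than the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_rep, crit = 100, 500, -2.89   # crit: approx. 5% DF value, single-mean case

def tau(y):
    """t-ratio on delta in  dY_t = alpha + delta*Y_{t-1} + e_t (Model II, lag 0)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    return beta[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

rejections = 0
for _ in range(n_rep):
    y = np.cumsum(0.2 + rng.standard_normal(T))  # DS series with drift alpha = .2
    if tau(y) < crit:
        rejections += 1

print(100 * rejections / n_rep)   # empirical rejection rate in percent
```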
Empirical Demonstration

To illustrate the ADF test procedure for trend cases, time-series raw data of traffic fatalities for New York State from January 1951 to April 1960 were employed.
[Figure 3. (a) The New York traffic fatalities series with its ACF and PACF (lags 1–12); (b) the differenced series Y(1) with its ACF and PACF; (c) the detrending residuals (Res) with their ACF and PACF.]
Table 5. Lag-selection statistics for the New York traffic fatalities series

Lag (p)   |t|     AIC        BIC        RMSE
6         1.52    −111.00    −107.10    0.526
5         0.53    −112.78    −109.30    0.527
4         1.2     −116.68    −113.58    0.522
3         0.51    −119.11    −116.31    0.522
2         1.39    −123.02    −120.49    0.517
1         0.9     −124.49    −122.15    0.519
Table 5 summarizes the results from the Ng-Perron algorithm for lags 6–1. The results indicate that there is no need for lagged differences in the test regression. (The absolute value of the t statistic is < 1.6 for all lags tested. The model with p = 1 has the smallest AIC and BIC. Note that it does not matter that the AIC and BIC values were all negative in this example. The values are simply supposed to be compared with each other, meaning that we should pick the model with the smallest actual (not absolute) value of AIC or BIC.)
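The comparison rule in the parenthetical note above is worth making explicit, since negative information criteria invite the absolute-value mistake. A tiny Python sketch (with illustrative AIC values for lags 6 through 1):

```python
# AIC values for candidate lag orders p (illustrative numbers)
aic = {6: -111.00, 5: -112.78, 4: -116.68, 3: -119.11, 2: -123.02, 1: -124.49}

# Correct rule: smallest actual (signed) value, NOT smallest absolute value
best = min(aic, key=aic.get)                  # -> 1
wrong = min(aic, key=lambda p: abs(aic[p]))   # -> 6, the absolute-value mistake
print(best, wrong)
```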
The following SAS statements perform the ADF test for lags 0–2.

proc arima data=newyork;
  identify var=y stationarity=(adf=2);
run;
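The mechanics behind this SAS call can be sketched without SAS. The following Python function (a hypothetical re-implementation; it produces the τ statistic only, not SAS's tabulated p-values) runs the augmented test regression ΔYt = α + βt + δYt−1 + Σ φiΔYt−i for a chosen number of lagged differences, as in the trend case of the ADF test.

```python
import numpy as np

def adf_tau_trend(y, p):
    """tau for  dY_t = a + b*t + delta*Y_{t-1} + sum_i phi_i*dY_{t-i} + e_t."""
    dy = np.diff(y)
    n = len(dy)
    resp = dy[p:]
    cols = [np.ones(n - p), np.arange(p, n, dtype=float), y[p:-1]]
    for i in range(1, p + 1):                 # p lagged differences
        cols.append(dy[p - i:n - i])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    resid = resp - X @ beta
    s2 = resid @ resid / (len(resp) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
    return beta[2] / se                       # t-ratio on delta (tau statistic)

# Illustrative trend-stationary series: the unit root should be clearly rejected
rng = np.random.default_rng(6)
t = np.arange(150, dtype=float)
y = 0.1 * t + rng.standard_normal(150)
for p in range(3):                            # lags 0-2, as in the SAS call
    print(p, adf_tau_trend(y, p))
```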
Table 6. SAS output for the ADF unit root test for the New York traffic fatalities series

Type          Lags    Tau       Pr < Tau
Single mean   0       −6.07     < .0001
              1       −3.39     .0137
              2       −3.04     .0346
Trend         0       −10.16    < .0001
              1       −6.32     < .0001
              2       −6.09     < .0001
References

Ashley, R., & Verbrugge, R. J. (2004). To difference or not to difference: A Monte Carlo investigation of inference in vector autoregressive models. VPI Economic Department Working Paper. Retrieved from http://ashleymac.econ.vt.edu/working_papers/9915.pdf
Ashley, R., & Verbrugge, R. J. (2006). Comments on "A critical investigation on detrending procedures for non-linear processes". Journal of Macroeconomics, 28, 192–194.
Ayat, L., & Burridge, P. (2000). Unit root tests in the presence of uncertainty about the non-stochastic trend. Journal of Econometrics, 95, 71–96.
Box, G. E. P., & Pierce, D. A. (1970). Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American Statistical Association, 65, 1509–1526.
Chan, K. H., Hayya, J. C., & Ord, J. K. (1977). A note on trend removal methods: The case of polynomial regression versus variate differencing. Econometrica, 45, 737–744.
Dagum, E. B., & Giannerini, S. (2006). A critical investigation on detrending procedures for non-linear processes. Journal of Macroeconomics, 28, 175–191.
DeJong, D. N., Nankervis, J. C., Savin, N. E., & Whiteman, C. H. (1992). The power problems of unit root tests in time series with autoregressive errors. Journal of Econometrics, 53, 323–343.
Dickey, D. A. (1984). Power of unit root tests. Proceedings of the Business and Economic Statistics Section of the ASA, 489–493.
Dickey, D. A., & Fuller, W. A. (1981). Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica, 49, 1057–1072.
Diebold, F. X., & Kilian, L. (2000). Unit root tests are useful for selecting forecasting models. Journal of Business and Economic Statistics, 18, 265–273.
Diebold, F. X., & Senhadji, A. S. (1996). The uncertain unit root in real GNP: Comment. American Economic Review, 86, 1291–1298.
Elder, J., & Kennedy, P. E. (2001a). F versus t tests for unit roots. Economics Bulletin, 3, 1–6.
Elder, J., & Kennedy, P. E. (2001b). Testing for unit roots: What should students be taught? Journal of Economic Education, 32, 137–146.
Tetiana Stadnytska
Department of Psychology
University of Heidelberg
Hauptstrasse 47-51
69117 Heidelberg
Germany
Tel. +49 6221 547345
E-mail Tetiana.Stadnytska@psychologie.uni-heidelberg.de