corr(Yt) = R0 = D⁻¹Γ0D⁻¹
where D is the (n × n) diagonal matrix with jth diagonal element (γ0,jj)^(1/2) = var(yjt)^(1/2).
The parameters μ, Γ0 and R0 are estimated from data
(Y1, . . . , YT ) using the sample moments
Ȳ = (1/T) Σ_{t=1}^T Yt →p E[Yt] = μ
Γ̂0 = (1/T) Σ_{t=1}^T (Yt − Ȳ)(Yt − Ȳ)′ →p var(Yt) = Γ0
R̂0 = D̂⁻¹Γ̂0D̂⁻¹ →p corr(Yt) = R0
where D̂ is the (n×n) diagonal matrix with the sample
standard deviations of yjt along the diagonal. The
Ergodic Theorem justifies convergence of the sample
moments to their population counterparts.
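As a quick numerical sketch of these sample moments (NumPy, with simulated stand-in data; all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 500, 3
Y = rng.standard_normal((T, n))        # stand-in for T observations of an n-vector

Y_bar = Y.mean(axis=0)                 # sample mean, estimates mu
Yc = Y - Y_bar                         # demeaned observations
Gamma0_hat = Yc.T @ Yc / T             # sample covariance matrix, estimates Gamma_0
D_hat_inv = np.diag(1.0 / np.sqrt(np.diag(Gamma0_hat)))
R0_hat = D_hat_inv @ Gamma0_hat @ D_hat_inv  # sample correlation matrix, estimates R_0
```

By construction `R0_hat` has a unit diagonal and inherits the symmetry of `Gamma0_hat`.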
Cross Covariance and Correlation Matrices
• If γ_{ij}^{−k} ≠ 0 for some k > 0, then yit is said to lead yjt.
• It is possible that yit leads yjt and vice-versa. In
this case, there is said to be feedback between
the two series.
All of the lag k cross covariances and correlations are
summarized in the (n × n) lag k cross covariance and
lag k cross correlation matrices
Γ̂k = (1/T) Σ_{t=k+1}^T (Yt − Ȳ)(Yt−k − Ȳ)′
R̂k = D̂⁻¹Γ̂kD̂⁻¹
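The lag-k sample matrices can be computed the same way (a sketch; `cross_cov` and `cross_corr` are illustrative helper names, not from the notes):

```python
import numpy as np

def cross_cov(Y, k):
    """Lag-k sample cross-covariance matrix: sum over t = k+1..T of
    (Y_t - Ybar)(Y_{t-k} - Ybar)', divided by T.  Y is T x n."""
    T = Y.shape[0]
    Yc = Y - Y.mean(axis=0)
    return Yc[k:].T @ Yc[:T - k] / T

def cross_corr(Y, k):
    """Lag-k sample cross-correlation matrix R_k_hat."""
    d_inv = np.diag(1.0 / np.sqrt(np.diag(cross_cov(Y, 0))))
    return d_inv @ cross_cov(Y, k) @ d_inv
```

A nonzero (i, j) entry of `cross_corr(Y, k)` for k > 0 is sample evidence that past yjt is correlated with current yit, i.e. that yjt leads yit.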
Multivariate Wold Representation
The Bartlett kernel weights used in nonparametric (Newey–West) estimation of the long-run variance are
w_{j,T}^{Bartlett} = 1 − j/(MT + 1)
where MT is the truncation lag.
Vector Autoregression Models
Π(L)Yt = c + εt
Π(L) = In − Π1L − ... − ΠpLp
The VAR(p) is stable if the roots of
det(In − Π1z − ··· − Πpz^p) = 0
lie outside the complex unit circle. For example, the bivariate VAR(1), Yt = ΠYt−1 + εt, is
[y1t]   [π11 π12] [y1,t−1]   [ε1t]
[y2t] = [π21 π22] [y2,t−1] + [ε2t]
and is stable provided the eigenvalues of Π have modulus less than one.
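A stability check along these lines can be sketched with the companion matrix, whose eigenvalues are the inverses of the roots above, so stability means every eigenvalue has modulus below one (function names `companion` and `is_stable` are assumptions for illustration):

```python
import numpy as np

def companion(Pi_list):
    """Stack VAR(p) coefficient matrices into the (np x np) companion matrix."""
    n, p = Pi_list[0].shape[0], len(Pi_list)
    F = np.zeros((n * p, n * p))
    F[:n, :] = np.hstack(Pi_list)      # [Pi_1 Pi_2 ... Pi_p] in the top block row
    F[n:, :-n] = np.eye(n * (p - 1))   # identity blocks shift the lags down
    return F

def is_stable(Pi_list):
    """Stable iff all companion eigenvalues have modulus < 1."""
    return bool(np.all(np.abs(np.linalg.eigvals(companion(Pi_list))) < 1))

Pi1 = np.array([[0.5, 0.1],
                [0.2, 0.3]])
print(is_stable([Pi1]))   # True: both eigenvalues are well inside the unit circle
```

For p = 1 the companion matrix is just Π itself, so this reduces to the eigenvalue condition stated above.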
If the VAR(p) is stable, Yt is stationary with mean
μ = (In − Π1 − ··· − Πp)⁻¹c
The mean-adjusted form of the VAR(p) is then
Yt − μ = Π1(Yt−1 − μ) + Π2(Yt−2 − μ) + ··· + Πp(Yt−p − μ) + εt
The basic VAR(p) model may be too restrictive to represent the main characteristics of the data adequately. The general form of the VAR(p) model with deterministic terms and exogenous variables is given by
Yt = ΦDt + Π1Yt−1 + ··· + ΠpYt−p + GXt + εt
where Dt is a vector of deterministic terms (constant, time trend, seasonal dummies) and Xt is a vector of exogenous variables.
Returning to the basic VAR(p) in lag-operator form,
Π(L)Yt = c + εt
Π(L) = In − Π1L − ··· − ΠpL^p
Since Yt is stationary, Π(L)⁻¹ exists, so that
Yt = Π(L)⁻¹c + Π(L)⁻¹εt = μ + Σ_{k=0}^∞ Ψk εt−k
with Ψ0 = In and lim_{k→∞} Ψk = 0. Note that
Π(L)⁻¹ = Ψ(L) = Σ_{k=0}^∞ Ψk L^k
The Wold coefficients Ψk may be determined from
the VAR coefficients Πk by solving
Π(L)Ψ(L) = In
which implies
Ψ1 = Π1
Ψ2 = Π1Ψ1 + Π2
⋮
Ψs = Π1Ψs−1 + Π2Ψs−2 + ··· + ΠpΨs−p
where Ψj = 0 for j < 0.
Since Π(L)⁻¹ = Ψ(L), the long-run variance for Yt has the form
LRV_VAR(Yt) = Ψ(1)ΣΨ(1)′
            = Π(1)⁻¹Σ(Π(1)⁻¹)′
            = (In − Π1 − ··· − Πp)⁻¹ Σ ((In − Π1 − ··· − Πp)⁻¹)′
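The Ψ recursion and the long-run variance formula can be sketched as follows (NumPy; the function names are illustrative):

```python
import numpy as np

def wold_psi(Pi_list, n_terms):
    """Solve Pi(L)Psi(L) = I_n recursively:
    Psi_0 = I_n, Psi_s = Pi_1 Psi_{s-1} + ... + Pi_p Psi_{s-p} (Psi_j = 0, j < 0)."""
    n, p = Pi_list[0].shape[0], len(Pi_list)
    Psi = [np.eye(n)]
    for s in range(1, n_terms):
        acc = np.zeros((n, n))
        for j in range(1, min(s, p) + 1):   # terms with negative index drop out
            acc += Pi_list[j - 1] @ Psi[s - j]
        Psi.append(acc)
    return Psi

def long_run_var(Pi_list, Sigma):
    """LRV = (I - Pi_1 - ... - Pi_p)^{-1} Sigma ((I - Pi_1 - ... - Pi_p)^{-1})'."""
    A_inv = np.linalg.inv(np.eye(Pi_list[0].shape[0]) - sum(Pi_list))
    return A_inv @ Sigma @ A_inv.T
```

For a stable VAR, summing enough Ψk approximates Ψ(1), so `sum(wold_psi(Pi_list, K)) @ Sigma @ sum(wold_psi(Pi_list, K)).T` matches `long_run_var` for large K, which is a handy consistency check.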
Estimation
Each equation of the VAR(p) can be estimated separately by least squares:
yi = Zπi + ei,  i = 1, . . . , n
where yi stacks the observations on the ith variable and Z is the common regressor matrix, with
• k = np + 1 regressors per equation (p lags of each of the n variables, plus a constant).
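Equation-by-equation least squares can be sketched as follows (a minimal NumPy version; `var_ols` is an illustrative name, not the notes' own code):

```python
import numpy as np

def var_ols(Y, p):
    """Least-squares VAR(p) fit with a constant.  Y is T x n; returns the
    (n*p + 1) x n matrix of stacked coefficients [c'; Pi_1'; ...; Pi_p']."""
    T, n = Y.shape
    # Z: one row per usable observation t = p..T-1, [1, Y_{t-1}, ..., Y_{t-p}]
    Z = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - j:T - j] for j in range(1, p + 1)])
    # pi_i = (Z'Z)^{-1} Z'y_i for every equation i, solved jointly
    coef, *_ = np.linalg.lstsq(Z, Y[p:], rcond=None)
    return coef
```

Row 0 of the result holds the estimated intercepts; the next n rows are Π̂1 transposed, and so on for higher lags.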
Forecasts are constructed from the Wold representation
Yt = μ + εt + Ψ1εt−1 + Ψ2εt−2 + ···
Yt+h = μ + εt+h + Ψ1εt+h−1 + ··· + Ψh−1εt+1 + Ψhεt + ···
εt ∼ WN(0, Σ)
Note that
E[Yt] = μ
var(Yt) = E[(Yt − μ)(Yt − μ)′]
        = E[(Σ_{k=0}^∞ Ψkεt−k)(Σ_{k=0}^∞ Ψkεt−k)′]
        = Σ_{k=0}^∞ ΨkΣΨk′
(using Ψ0 = In and E[εtεs′] = 0 for s ≠ t).
The minimum MSE linear forecast of Yt+h based on It is
Yt+h|t = μ + Ψhεt + Ψh+1εt−1 + ···
with forecast error
εt+h|t = Yt+h − Yt+h|t = εt+h + Ψ1εt+h−1 + ··· + Ψh−1εt+1
so that
MSE(εt+h|t) = E[εt+h|tεt+h|t′]
            = Σ + Ψ1ΣΨ1′ + ··· + Ψh−1ΣΨh−1′
            = Σ_{s=0}^{h−1} ΨsΣΨs′
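The h-step MSE follows directly from the Ψ matrices (a sketch; `forecast_mse` is an illustrative name):

```python
import numpy as np

def forecast_mse(Psi, Sigma, h):
    """MSE(eps_{t+h|t}) = sum_{s=0}^{h-1} Psi_s Sigma Psi_s'.
    Psi is a list of MA matrices with Psi[0] = I_n."""
    return sum(Psi[s] @ Sigma @ Psi[s].T for s in range(h))
```

With h = 1 this returns Σ itself (since Ψ0 = In), and for a stable VAR the MSE grows with h toward var(Yt).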
Chain-rule of Forecasting
Writing the VAR(p) in companion form ξt = Fξt−1 + vt, multi-step forecasts follow by iterating the one-step forecast:
ξt+1|t = Fξt
ξt+2|t = Fξt+1|t = F²ξt
⋮
ξt+h|t = Fξt+h−1|t = F^h ξt
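The chain rule amounts to repeated multiplication by F (a sketch; `chain_forecast` is an illustrative name):

```python
import numpy as np

def chain_forecast(F, xi_t, h):
    """xi_{t+h|t} = F xi_{t+h-1|t} = ... = F^h xi_t."""
    xi = xi_t
    for _ in range(h):          # apply the one-step forecast map h times
        xi = F @ xi
    return xi
```

Equivalently `np.linalg.matrix_power(F, h) @ xi_t`; in a mean-adjusted companion form, the first n entries of ξt+h|t give the forecast of Yt+h − μ.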