
Local Level Model

Siem Jan Koopman

http://sjkoopman.net
s.j.koopman@vu.nl
Department of Econometrics, Vrije Universiteit Amsterdam
CREATES, Aarhus University

February 2017, Week 1


Local Level Model

Program :
Introduction
Local level model
Statistical dynamic properties
Filtering, smoothing and forecasting.
Signal plus noise models
Literature : J. Durbin and S.J. Koopman (2012), Time
Series Analysis by State Space Methods, Second Edition,
Oxford: Oxford University Press. Chapter 2.
Exercises and Assignments.

2 / 59
Time Series
A time series is a set of observations y_t, each one recorded at a
specific time t.
The observations are ordered over time.
We assume we have n observations, t = 1, ..., n.
Examples of time series are:
Number of cars sold each year
Gross Domestic Product of a country
Stock prices during one day
Number of firm defaults
Our purpose is to identify and to model the serial or dynamic
correlation structure in the time series.
Time series analysis is relevant for a wide variety of tasks, including
economic policy, financial decision making and forecasting.
3 / 59
The US Economy
[Figure: four panels of US macroeconomic time series: GDP growth and Inflation (1960-2000s), Industrial Production growth (1920-2000s) and the Interest Rate on the 10-year T-Bill (1970-2010).]

4 / 59
US Inflation, based on CPI, all products
[Figure: US CPI inflation over two sample periods (1960-2010 and 1990-2010), with the corresponding sample autocorrelation functions up to lag 40.]

5 / 59
US Gross Domestic Product (GDP), percentage growth
[Figure: US GDP percentage growth, 1950-2010, with its sample autocorrelation function (lags 0-40) and its spectral density.]

6 / 59
US Industrial Production, levels and growth

[Figure: US Industrial Production, 1920-2010, in levels (top) and in growth rates (bottom).]

7 / 59
US Industrial Production, growth
[Figure: autocorrelograms for IP growth, 1919-2015 (top) and 1985-2015 (bottom), lags 0-40.]

8 / 59
US Treasury Bill Rate, 10 years

[Figure: US Treasury Bill rate, 10 years, 1965-2015.]

9 / 59
Sources of time series data

Data sources :
US economics :
http://research.stlouisfed.org/fred2/
DK book data :
http://www.ssfpack.com/files/DK-data.zip
Financial data : Datastream, Wharton, Yahoo Finance
Time Series Data Library of Rob Hyndman :
http://datamarket.com/data/list/?q=provider:tsdl

10 / 59
Example: Nile data, yearly volumes
[Figure: Nile data, yearly volumes, 1870-1970.]

11 / 59
Time Series
A time series for a single entity is typically denoted by

y_1, ..., y_n    or    y_t, t = 1, ..., n,

where t is the time index and n is the time series length.

The current value is y_t.
The first lagged value, or first lag, is y_{t-1}.
The τ-th lagged value, or τ-th lag, is y_{t-τ} for τ = 1, 2, 3, ...
The change between period t-1 and period t is y_t - y_{t-1}.
This is called the first difference, denoted by Δy_t = y_t - y_{t-1}.
In economic time series, we often take the first difference of the
logarithm, or the log-difference, that is

Δ log y_t = log y_t - log y_{t-1} = log(y_t / y_{t-1}),

which is a proxy for proportional change, see Appendix.

The percentage change is then 100 Δ log y_t.
12 / 59
Time Series Models: many
Autoregressive models, unit roots
Autoregressive moving average models
Long memory models, fractional integration
... unobserved components time series models ...
Dynamic regression models, error correction models
Vector autoregressive models, cointegration, vector error
correction models
... state space models ...
Regime-switching, Markov-switching, threshold autoregression,
smooth transition models
Generalized autoregressive conditional heteroskedasticity
(GARCH) models
Autoregressive conditional duration models and related models
... stochastic volatility models ...
13 / 59
Autoregressive model: AR(1)

The AR(1) model is given by

y_t = c + φ y_{t-1} + ε_t,    ε_t ∼ NID(0, σ_ε²),

with three parameter coefficients c, φ and σ_ε², with 0 < σ_ε² < ∞.

Stationarity condition: |φ| < 1.
Statistical dynamic properties:
Mean: E(y_t) = c / (1 - φ); in case c = 0, E(y_t) = 0;
Variance: Var(y_t) = σ_ε² / (1 - φ²);
Autocovariance at lag 1: Cov(y_t, y_{t-1}) = φ σ_ε² / (1 - φ²);
and at lag τ = 2, 3, 4, ...: Cov(y_t, y_{t-τ}) = φ^τ σ_ε² / (1 - φ²);
Autocorrelation at lag τ = 1, 2, 3, ...: Corr(y_t, y_{t-τ}) = φ^τ.
14 / 59
Moving Average model: MA(1)

The MA(1) model is given by

y_t = μ + θ ε_{t-1} + ε_t,    ε_t ∼ NID(0, σ_ε²),

with three parameter coefficients μ, θ and σ_ε², with 0 < σ_ε² < ∞.

Invertibility condition: |θ| < 1.
Statistical dynamic properties:
Mean: E(y_t) = μ; in case μ = 0, E(y_t) = 0;
Variance: Var(y_t) = σ_ε² (1 + θ²);
Autocovariance at lag 1: Cov(y_t, y_{t-1}) = θ σ_ε²;
... at lag τ = 2, 3, 4, ...: Cov(y_t, y_{t-τ}) = 0;
Autocorrelation at lag 1: Corr(y_t, y_{t-1}) = θ / (1 + θ²).
15 / 59
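As a sanity check on these moment formulas, a minimal Python sketch (not part of the original slides; all parameter values are arbitrary illustrative choices) simulates long AR(1) and MA(1) series and compares sample moments with the theoretical values:

```python
# Numerical check of the AR(1) and MA(1) moments stated above (numpy only).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
c, phi, mu, theta, sig2 = 0.5, 0.8, 0.5, -0.4, 1.0  # arbitrary illustrative values

eps = rng.normal(0.0, np.sqrt(sig2), n)

# AR(1): y_t = c + phi * y_{t-1} + eps_t, started at the unconditional mean
y_ar = np.empty(n)
y_ar[0] = c / (1 - phi)
for t in range(1, n):
    y_ar[t] = c + phi * y_ar[t - 1] + eps[t]
print("AR(1) mean :", y_ar.mean(), " theory:", c / (1 - phi))
print("AR(1) var  :", y_ar.var(), " theory:", sig2 / (1 - phi**2))
print("AR(1) rho_1:", np.corrcoef(y_ar[1:], y_ar[:-1])[0, 1], " theory:", phi)

# MA(1): y_t = mu + theta * eps_{t-1} + eps_t
y_ma = mu + eps
y_ma[1:] += theta * eps[:-1]
print("MA(1) var  :", y_ma.var(), " theory:", sig2 * (1 + theta**2))
print("MA(1) rho_1:", np.corrcoef(y_ma[1:], y_ma[:-1])[0, 1],
      " theory:", theta / (1 + theta**2))
print("MA(1) rho_2:", np.corrcoef(y_ma[2:], y_ma[:-2])[0, 1], " theory: 0")
```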
Example: Nile in levels and Nile in differences
[Figure: Nile data in levels (top left) and in first differences (bottom left), with the corresponding sample autocorrelation functions up to lag 20 (right).]

16 / 59
Classical Decomposition
A basic model for representing a time series is the additive model

y_t = μ_t + γ_t + ψ_t + ε_t,    t = 1, ..., n,

also known as the Classical Decomposition:

y_t = observation,
μ_t = slowly changing component (trend),
γ_t = periodic component (seasonal),
ψ_t = stationary component (cycle, ARMA),
ε_t = irregular component (disturbance).

It is an Unobserved Components time series model when
the components are modelled as dynamic stochastic processes.

17 / 59
Local Level Model

A component is a stochastic or deterministic function of time:

Deterministic, eg: y_t = μ(t) + ε_t with ε_t ∼ NID(0, σ_ε²);
Stochastic, eg the Local level model :

y_t = μ_t + ε_t,    ε_t ∼ NID(0, σ_ε²),
μ_{t+1} = μ_t + η_t,    η_t ∼ NID(0, σ_η²).

The disturbances ε_t, η_s are independent for all s, t;
The model is incomplete without an initial specification for μ_1;
The time series processes for μ_t and y_t are nonstationary.

18 / 59
Local Level Model

The local level model or random walk plus noise model :

y_t = μ_t + ε_t,    ε_t ∼ NID(0, σ_ε²),
μ_{t+1} = μ_t + η_t,    η_t ∼ NID(0, σ_η²).

The level μ_t and irregular ε_t are unobserved;
Parameters σ_ε² and σ_η² are unknown;
We still need to define μ_1;
Trivial special cases:
σ_η² = 0 ⟹ y_t ∼ NID(μ_1, σ_ε²) (IID, constant level);
σ_ε² = 0 ⟹ y_{t+1} = y_t + η_t (random walk);
The Local Level model is the basic illustration of a state space model.

19 / 59
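The next slides show simulated local level series for three variance pairs. A minimal simulation sketch in Python (my illustration; in particular, the assignment of each number in a pair to η and ε is an assumption, since the slide labels only show the values):

```python
# Minimal simulation of local level data, as shown on the next slides.
import numpy as np

def simulate_local_level(n, sig2_eps, sig2_eta, mu1=0.0, seed=0):
    """Draw y_t = mu_t + eps_t with mu_{t+1} = mu_t + eta_t."""
    rng = np.random.default_rng(seed)
    eta = rng.normal(0.0, np.sqrt(sig2_eta), n)
    eps = rng.normal(0.0, np.sqrt(sig2_eps), n)
    mu = mu1 + np.concatenate(([0.0], np.cumsum(eta[:-1])))  # mu_1, ..., mu_n
    return mu + eps, mu

# One path per variance pair shown on the slides (assumed order: eta, eps)
paths = {pair: simulate_local_level(100, sig2_eps=pair[1], sig2_eta=pair[0])[0]
         for pair in [(0.1, 1.0), (1.0, 1.0), (1.0, 0.1)]}
```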
Simulated Local Level data
[Figure: simulated local level data (n = 100) with variance pair (0.1, 1).]

20 / 59
Simulated Local Level data
[Figure: simulated local level data (n = 100) with variance pair (1, 1).]

21 / 59
Simulated Local Level data
[Figure: simulated local level data (n = 100) with variance pair (1, 0.1).]

22 / 59
Simulated Local Level data
[Figure: the three simulated local level series above shown together: variance pairs (0.1, 1), (1, 1) and (1, 0.1).]

23 / 59
Properties of Local Level model

y_t = μ_t + ε_t,    ε_t ∼ NID(0, σ_ε²),
μ_{t+1} = μ_t + η_t,    η_t ∼ NID(0, σ_η²).

The first difference is stationary:

Δy_t = Δμ_t + Δε_t = η_{t-1} + ε_t - ε_{t-1}.

Dynamic properties of Δy_t:

E(Δy_t) = 0,
γ_0 = E(Δy_t Δy_t) = σ_η² + 2 σ_ε²,
γ_1 = E(Δy_t Δy_{t-1}) = -σ_ε²,
γ_τ = E(Δy_t Δy_{t-τ}) = 0 for τ ≥ 2.
24 / 59
Properties of Local Level model
Define q as the signal-to-noise ratio : q = σ_η² / σ_ε².
The theoretical ACF of Δy_t is

ρ_1 = -σ_ε² / (σ_η² + 2 σ_ε²) = -1 / (q + 2),    ρ_τ = 0 for τ ≥ 2.

It implies that

-1/2 ≤ ρ_1 ≤ 0.

The local level model implies that Δy_t ∼ MA(1).
Hence y_t is ARIMA(0,1,1). We have

Δy_t = ξ_t + θ ξ_{t-1},    ξ_t ∼ NID(0, σ_ξ²).

This implied MA(1) has ACF ρ_1 = θ / (1 + θ²), and hence a
restricted parameter space for θ: -1 < θ < 0.
To express θ as a function of q, solve the equality between the ρ_1's:

θ = (1/2) ( √(q² + 4q) - 2 - q ).
25 / 59
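A short sketch (my addition) that evaluates this mapping numerically and confirms that the implied MA(1) autocorrelation θ/(1 + θ²) coincides with -1/(q + 2):

```python
# Check the mapping from signal-to-noise ratio q to the implied MA(1) theta.
import numpy as np

def theta_from_q(q):
    return 0.5 * (np.sqrt(q**2 + 4.0 * q) - 2.0 - q)

for q in [0.001, 0.1, 1.0, 10.0]:
    th = theta_from_q(q)
    rho1_ma = th / (1.0 + th**2)   # lag-1 ACF of the implied MA(1)
    rho1_ll = -1.0 / (q + 2.0)     # lag-1 ACF implied by the LL model
    print(f"q={q:7.3f}  theta={th:9.5f}  rho1(MA)={rho1_ma:9.5f}  rho1(LL)={rho1_ll:9.5f}")
```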
Local Level Model
The Local Level model is given by

y_t = μ_t + ε_t,    μ_{t+1} = μ_t + η_t,    t = 1, ..., n.

The parameters σ_ε² and σ_η² are unknown and need to be
estimated, typically via maximum likelihood estimation;
MLE for this class of models is discussed in Week 2.
When we treat the parameters σ_ε² and σ_η² as known, how do we
estimate the unobserved series μ_1, ..., μ_n?
This estimation is referred to as signal extraction.
We base this estimation on conditional expectations.
Signal extraction is the recursive evaluation of the conditional
means and variances of the unobserved μ_t for t = 1, ..., n.
It is known as the Kalman filter;
Below we provide the derivation only for the Local Level model.
In Week 2 we discuss the Kalman filter for the general linear
state space model.
26 / 59
Normal density

Consider a random variable x that is normally distributed,

x ∼ N(μ_x, σ_x²).

The log-density function for x is given by

log f(x) = -(1/2) log 2π - (1/2) log σ_x² - (1/2) Q_x,    Q_x = (x - μ_x)² / σ_x².

The density function for x is given by

f(x) = c exp( -(1/2) Q_x ),    c⁻¹ = σ_x √(2π).
27 / 59
Bivariate normal distribution

Consider two random variables x and y that are normally distributed,

x ∼ N(μ_x, σ_x²),    y ∼ N(μ_y, σ_y²),    Cov(x, y) = σ_xy,

where we let σ_xy = ρ σ_x σ_y with -1 ≤ ρ ≤ 1.

Results and properties can be derived easily when we define

y = μ_y + σ_y z_y,    x = μ_x + σ_x [ ρ z_y + √(1 - ρ²) z_x ],

where z_x, z_y ∼ N(0, 1) are independently distributed such that

f(z_x, z_y) = (1 / 2π) exp( -(1/2) (z_x² + z_y²) ).
28 / 59
Signal extraction: conditional expectation
Consider two random variables x and y that are normally distributed,

x ∼ N(μ_x, σ_x²),    y ∼ N(μ_y, σ_y²),    Cov(x, y) = σ_xy.

Assume that we do not know anything about x but we have
collected an observation for y.
The conditional expectation and variance are given by

E(x|y) = μ_x + σ_xy (y - μ_y) / σ_y²,    Var(x|y) = σ_x² - σ_xy² / σ_y².

Verify these results and make sure you can derive them
from basic principles. We have

x|y ∼ f(x|y) ≡ N(μ_{x|y}, σ²_{x|y}),

where μ_{x|y} ≡ E(x|y) and σ²_{x|y} ≡ Var(x|y).
Notice that μ_{x|y} is a function of y but σ²_{x|y} is not.
Notice that when σ_xy = 0, E(x|y) = μ_x and Var(x|y) = σ_x².
29 / 59
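These formulas are easy to verify by simulation. A minimal Monte Carlo sketch (my addition; the parameter values and the conditioning value y0 are arbitrary) estimates E(x|y) and Var(x|y) from draws with y in a narrow band around y0:

```python
# Monte Carlo check of E(x|y) and Var(x|y) for the bivariate normal.
import numpy as np

rng = np.random.default_rng(1)
mx, my, sx, sy, rho = 1.0, 2.0, 1.5, 0.5, 0.6   # arbitrary illustrative values
sxy = rho * sx * sy

zx, zy = rng.normal(size=(2, 1_000_000))        # independent N(0,1) draws
y = my + sy * zy
x = mx + sx * (rho * zy + np.sqrt(1 - rho**2) * zx)

y0 = 2.4                                        # condition on y close to y0
sel = np.abs(y - y0) < 0.01
print("E(x|y) sim  :", x[sel].mean(), " theory:", mx + sxy * (y0 - my) / sy**2)
print("Var(x|y) sim:", x[sel].var(), " theory:", sx**2 - sxy**2 / sy**2)
```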
Conditional expectation, derivation for bivariate case

Given that

y = μ_y + σ_y z_y,    x = μ_x + σ_x [ ρ z_y + √(1 - ρ²) z_x ],

we have

E(x|y) = E( μ_x + σ_x ( ρ z_y + √(1 - ρ²) z_x ) | y )
       = μ_x + σ_x E( ρ (y - μ_y) / σ_y + √(1 - ρ²) z_x | y )
       = μ_x + ρ σ_x (y - μ_y) / σ_y + σ_x √(1 - ρ²) E(z_x | z_y)
       = μ_x + σ_xy (y - μ_y) / σ_y²,

since E(z_x | z_y) = E(z_x) = 0 by independence.
30 / 59
Conditional variance, derivation for bivariate case
Given that

y = μ_y + σ_y z_y,    x = μ_x + σ_x [ ρ z_y + √(1 - ρ²) z_x ],

we have

Var(x|y) = Var( μ_x + σ_x ( ρ z_y + √(1 - ρ²) z_x ) | y )
         = Var( μ_x + ρ σ_x (y - μ_y) / σ_y + σ_x √(1 - ρ²) z_x | y )
         = Var( σ_x √(1 - ρ²) z_x | z_y )
         = σ_x² (1 - ρ²)
         = σ_x² - σ_xy² / σ_y²,

using that z_x is independent of z_y.

The derivation below for the local level model is based on direct
algebraic arguments such as completing the square.
31 / 59
Local Level Model: signal extraction
Local Level model :

y_t = μ_t + ε_t,   ε_t ∼ N(0, σ_ε²),    μ_{t+1} = μ_t + η_t,   η_t ∼ N(0, σ_η²).

Assume we have collected the observations y_1, ..., y_{t-1} and that
the conditional density f(μ_t | y_1, ..., y_{t-1}) is normal with known
mean a_t and known variance p_t; we have

μ_t | y_1, ..., y_{t-1} ∼ f(μ_t | y_1, ..., y_{t-1}) ≡ N(a_t, p_t).

Next we collect an observation for y_t; the conditional densities of
interest are

f(μ_t | y_1, ..., y_t),    f(μ_{t+1} | y_1, ..., y_t).

These conditional densities turn out to be normal as well:

f(μ_t | y_1, ..., y_t) ≡ N(a_{t|t}, p_{t|t}),    f(μ_{t+1} | y_1, ..., y_t) ≡ N(a_{t+1}, p_{t+1}).

Can we express (a_{t|t}, p_{t|t}) in terms of (a_t, p_t) ? Also (a_{t+1}, p_{t+1}) ?
32 / 59
Local Level Model: signal extraction
Local Level model :

y_t = μ_t + ε_t,   ε_t ∼ N(0, σ_ε²),    μ_{t+1} = μ_t + η_t,   η_t ∼ N(0, σ_η²).

Notation: Y_s = {y_1, ..., y_s}, for s = t-1, s = t and s = n.

Define the prediction error v_t = y_t - a_t with a_t = E(μ_t | Y_{t-1}), with
properties such as

E(v_t | Y_{t-1}) = E(μ_t + ε_t - a_t | Y_{t-1}) = a_t - a_t = 0,
Var(v_t | Y_{t-1}) = Var(μ_t - a_t + ε_t | Y_{t-1}) = p_t + σ_ε²,
E(v_t | μ_t, Y_{t-1}) = μ_t - a_t,
Var(v_t | μ_t, Y_{t-1}) = σ_ε².

We have E(v_t) = 0, but verify that E(v_t | Y_{t-1}) = 0 too.
When y_t is observed, it becomes fixed, just as y_1, ..., y_{t-1}.
But then v_t is also fixed; it is non-stochastic !!
33 / 59
Local Level Model: signal extraction
Next, we aim to obtain an expression for f(μ_t | y_1, ..., y_t), with an
eye on updating. Local Level model :

y_t = μ_t + ε_t,   ε_t ∼ N(0, σ_ε²),    μ_{t+1} = μ_t + η_t,   η_t ∼ N(0, σ_η²).

Consider the filtered estimate f(μ_t | y_1, ..., y_t) ≡ f(μ_t | v_t, Y_{t-1}), since
v_t = y_t - a_t, where a_t = E(μ_t | Y_{t-1}), are all fixed. We have

f(μ_t | v_t, Y_{t-1}) = f(μ_t, v_t | Y_{t-1}) / f(v_t | Y_{t-1})
                      = f(μ_t | Y_{t-1}) f(v_t | μ_t, Y_{t-1}) / f(v_t | Y_{t-1}),

where the f(·)'s are normal densities and f(μ_t | Y_t) = const · exp( -(1/2) Q_t ) with

Q_t = (μ_t - a_t)² / p_t + (v_t - μ_t + a_t)² / σ_ε² - v_t² / (p_t + σ_ε²).

After some algebra (completing the square), we have

Q_t = ( (p_t + σ_ε²) / (p_t σ_ε²) ) · ( μ_t - a_t - (p_t / (p_t + σ_ε²)) v_t )².
34 / 59
Local Level Model: signal extraction
Next we consolidate these results for the Local Level model :

y_t = μ_t + ε_t,   ε_t ∼ N(0, σ_ε²),    μ_{t+1} = μ_t + η_t,   η_t ∼ N(0, σ_η²).

We are interested in the filtered signal density

f(μ_t | Y_t) = const · exp( -(1/2) Q_t ),

with

Q_t = ( (p_t + σ_ε²) / (p_t σ_ε²) ) · ( μ_t - a_t - (p_t / (p_t + σ_ε²)) v_t )².

It implies that

f(μ_t | Y_t) ≡ N(a_{t|t}, p_{t|t}),

with

a_{t|t} = a_t + k_t v_t,    p_{t|t} = k_t σ_ε²,    k_t = p_t / (p_t + σ_ε²).
35 / 59
Local Level Model: signal extraction
Local Level model :

y_t = μ_t + ε_t,   ε_t ∼ N(0, σ_ε²),    μ_{t+1} = μ_t + η_t,   η_t ∼ N(0, σ_η²).

In addition, we are typically interested in the predicted signal
density

f(μ_{t+1} | Y_t) ≡ N(a_{t+1}, p_{t+1}),

where

a_{t+1} = E(μ_{t+1} | Y_t) = E(μ_t + η_t | Y_t) = a_{t|t},
p_{t+1} = Var(μ_t + η_t | Y_t) = p_{t|t} + σ_η².

We have obtained the updating equations

a_{t+1} = a_t + k_t v_t,    p_{t+1} = k_t σ_ε² + σ_η²,    k_t = p_t / (p_t + σ_ε²).

This is the celebrated Kalman filter for the Local Level model.
36 / 59
Kalman filter for the Local Level Model
Local Level model :

y_t = μ_t + ε_t,   ε_t ∼ N(0, σ_ε²),    μ_{t+1} = μ_t + η_t,   η_t ∼ N(0, σ_η²).

The Kalman filter equations are given by

v_t = y_t - a_t,    Var(v_t) = p_t + σ_ε²,
k_t = p_t / (p_t + σ_ε²),
a_{t|t} = a_t + k_t v_t,
p_{t|t} = k_t σ_ε²,
a_{t+1} = a_{t|t},
p_{t+1} = p_{t|t} + σ_η²,

for t = 1, ..., n with initialisation a_1 = 0 and p_1 = σ_ε² × 10⁷.

The equations are recursions: we update when a new y_t is observed.
A small implementation sketch follows below.

37 / 59
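A compact implementation sketch of these recursions (my own code, not the DK book code; the commented Nile call uses the variance values quoted in Assignment 1 and assumes a `nile` array has been loaded):

```python
# Kalman filter for the local level model, following the recursions above.
import numpy as np

def kalman_filter_llm(y, sig2_eps, sig2_eta, a1=0.0, p1=1e7):
    """Return predictions (a, p), filtered moments (att, ptt),
    prediction errors v and their variances f."""
    n = len(y)
    a = np.empty(n + 1); p = np.empty(n + 1)
    att = np.empty(n); ptt = np.empty(n)
    v = np.empty(n); f = np.empty(n)
    a[0], p[0] = a1, p1
    for t in range(n):
        v[t] = y[t] - a[t]              # prediction error v_t
        f[t] = p[t] + sig2_eps          # Var(v_t)
        k = p[t] / f[t]                 # Kalman gain k_t
        att[t] = a[t] + k * v[t]        # filtered mean a_{t|t}
        ptt[t] = k * sig2_eps           # filtered variance p_{t|t}
        a[t + 1] = att[t]               # one-step prediction a_{t+1}
        p[t + 1] = ptt[t] + sig2_eta    # prediction variance p_{t+1}
    return a[1:], p[1:], att, ptt, v, f

# Example with the Nile variance values from Assignment 1 (hypothetical
# variable `nile` holding the yearly volumes):
# a, p, att, ptt, v, f = kalman_filter_llm(nile, sig2_eps=15099.0, sig2_eta=1469.1)
```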
Signal Extraction for Nile Data: filtered estimate of level
[Figure: Nile data with the filtered estimate of the level superimposed.]

38 / 59
Observation weights
We show next that a_{t|t} is a weighted sum of past observations :

a_{t|t} = a_t + k_t v_t = a_t + k_t (y_t - a_t)
        = k_t y_t + (1 - k_t) a_t    [with a_t = a_{t-1} + k_{t-1} v_{t-1}]
        = k_t y_t + (1 - k_t) a_{t-1} + (1 - k_t) k_{t-1} (y_{t-1} - a_{t-1})
        = k_t y_t + k_{t-1} (1 - k_t) y_{t-1} + (1 - k_t)(1 - k_{t-1}) a_{t-1}
        ...
        = k_t y_t + Σ_{j=1}^{t-1} w_{t,j} y_{t-j},    w_{t,j} = k_{t-j} Π_{m=0}^{j-1} (1 - k_{t-m}).

Since 0 < k_t < 1, the weights decay in j.

A larger j implies that y_{t-j} is more remote from today's y_t.
39 / 59
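The weights can be computed directly from the Kalman gains, since the variance recursion does not depend on the data. A sketch (again my own illustration; the q = 0.1 setting follows Assignment 1):

```python
# Observation weights w_{t,j} of the filtered estimate a_{t|t}, built from
# the Kalman gains k_1, ..., k_t via the recursion above.
import numpy as np

def filter_weights(t, sig2_eps, sig2_eta, p1=1e7):
    """Weights on y_t, y_{t-1}, ..., y_1 in a_{t|t} (data-independent)."""
    p = p1
    k = []
    for _ in range(t):                    # run the variance recursion only
        gain = p / (p + sig2_eps)
        k.append(gain)
        p = gain * sig2_eps + sig2_eta
    w = np.empty(t)
    w[0] = k[-1]                          # weight on y_t is k_t
    for j in range(1, t):                 # w_{t,j} = k_{t-j} * prod (1 - k_{t-m})
        w[j] = k[t - 1 - j] * np.prod([1.0 - k[t - 1 - m] for m in range(j)])
    return w

w = filter_weights(t=21, sig2_eps=1.0, sig2_eta=0.1)  # q = 0.1, as in Assignment 1
print(w.round(4), w.sum())  # weights decay in j and sum to approximately 1
```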
Signal Extraction for Nile Data: observation weights
[Figure: observation weights for the current and 20 preceding observations. Top panel: Local Level, filtered weights. Bottom panel: Global Level weights.]

40 / 59
Signal plus noise model
The local level model belongs to the class of unobserved components
time series models.
The local level model can also be interpreted as a signal plus noise
model, that is

y_t = θ_t + ε_t,

where θ_t is the unobserved signal which, in the case of the local level
model, is modelled as a random walk process : θ_t = μ_t and
μ_{t+1} = μ_t + η_t.
The signal can also be modelled as another dynamic stochastic
process:
non-stationary trend processes
stationary autoregressive processes
sum of components
any linear dynamic process
41 / 59
Local Linear Trend Model
The LLT model extends the LL model with a slope:

y_t = μ_t + ε_t,    ε_t ∼ NID(0, σ_ε²),
μ_{t+1} = μ_t + β_t + η_t,    η_t ∼ NID(0, σ_η²),
β_{t+1} = β_t + ζ_t,    ζ_t ∼ NID(0, σ_ζ²).

All disturbances are independent at all lags and leads;
Initial distributions for μ_1, β_1 need to be specified;
If σ_ζ² = 0, the trend is a random walk with constant drift β_1;
(for β_1 = 0 the model reduces to a LL model.)
If additionally σ_η² = 0, the trend is a straight line with slope β_1
and intercept μ_1;
If σ_ζ² > 0 but σ_η² = 0, the trend is a smooth curve, or an
Integrated Random Walk;
42 / 59
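The next two slides show simulated trends and slopes. A simulation sketch for the LLT recursions above (my illustration with arbitrary variance values; setting sig2_eta = 0 gives the Integrated Random Walk):

```python
# Minimal simulation of the local linear trend (LLT) model.
import numpy as np

def simulate_llt(n, sig2_eps, sig2_eta, sig2_zeta, mu1=0.0, beta1=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.empty(n); beta = np.empty(n)
    mu[0], beta[0] = mu1, beta1
    for t in range(n - 1):
        mu[t + 1] = mu[t] + beta[t] + rng.normal(0.0, np.sqrt(sig2_eta))
        beta[t + 1] = beta[t] + rng.normal(0.0, np.sqrt(sig2_zeta))
    y = mu + rng.normal(0.0, np.sqrt(sig2_eps), n)
    return y, mu, beta

# Smooth trend / integrated random walk: sig2_eta = 0
y, mu, beta = simulate_llt(100, sig2_eps=1.0, sig2_eta=0.0, sig2_zeta=0.01)
```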
Trend and Slope in LLT Model


[Figure: simulated trend μ_t (top) and slope β_t (bottom) from the Local Linear Trend model, n = 100.]
43 / 59
Trend and Slope in Integrated Random Walk Model

[Figure: simulated trend (top) and slope (bottom) from the Integrated Random Walk model, n = 100.]

44 / 59
Local Linear Trend Model

The LLT model can be represented as an ARIMA(0,2,2)
model, please verify this;
The estimation methodology is the same as for the LL model;
It requires the general state space methods;
LLT provides a model for Holt-Winters forecasting;
The smooth trend model is the one with σ_η² = 0;
Smoother trend models can be obtained by higher-order
Random Walk processes:

Δ^d μ_t = ζ_t,

with y_t = μ_t + ε_t.
45 / 59
Seasonal Effects

We have seen specifications for μ_t in the basic model

y_t = μ_t + γ_t + ε_t.

Now we will consider the seasonal term γ_t. Let s denote the
number of seasons in the data:
s = 12 for monthly data,
s = 4 for quarterly data,
s = 7 for daily data when modelling a weekly pattern.

46 / 59
Dummy Seasonal

The simplest way to model seasonal effects is by using dummy
variables. The effect summed over the seasons should equal zero:

γ_{t+1} = - Σ_{j=1}^{s-1} γ_{t+1-j}.

To allow the pattern to change over time, we introduce a new
disturbance term:

γ_{t+1} = - Σ_{j=1}^{s-1} γ_{t+1-j} + ω_t,    ω_t ∼ NID(0, σ_ω²).

The expectation of the sum of the seasonal effects is zero.

47 / 59
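A brief simulation sketch of this stochastic dummy seasonal for quarterly data, s = 4 (my illustration; the starting pattern and variance are arbitrary choices):

```python
# Minimal simulation of the stochastic dummy seasonal, here for s = 4.
import numpy as np

def simulate_dummy_seasonal(n, sig2_omega=0.01, seed=0):
    s = 4                                 # quarterly data (assumed here)
    rng = np.random.default_rng(seed)
    gamma = np.zeros(n)
    gamma[:s - 1] = [1.0, -0.5, -0.8]     # arbitrary starting pattern
    for t in range(s - 2, n - 1):
        # gamma_{t+1} = -(gamma_t + ... + gamma_{t-s+2}) + omega_t
        gamma[t + 1] = (-gamma[t - s + 2:t + 1].sum()
                        + rng.normal(0.0, np.sqrt(sig2_omega)))
    return gamma

gamma = simulate_dummy_seasonal(40)
print(gamma[:8].round(2))  # roughly repeating pattern that slowly evolves
```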
Trigonometric Seasonal
Defining γ_jt as the effect of season j at time t, an alternative
specification for the seasonal pattern is

γ_t = Σ_{j=1}^{[s/2]} γ_jt,

γ_{j,t+1} = γ_jt cos λ_j + γ*_jt sin λ_j + ω_jt,
γ*_{j,t+1} = -γ_jt sin λ_j + γ*_jt cos λ_j + ω*_jt,
ω_jt, ω*_jt ∼ NID(0, σ_ω²),    λ_j = 2πj / s.

Without the disturbances, the trigonometric specification is
identical to the deterministic dummy specification.
The autocorrelation in the trigonometric specification lasts
through more lags: changes occur in a smoother way;
48 / 59
Seatbelt Law
[Figure: Seatbelt Law data: log numbers of drivers killed or seriously injured in the UK, monthly, 1969-1984.]

49 / 59
Seatbelt Law: decomposition

[Figure: decomposition of the drivers series: level plus regression effects (top), seasonal (middle) and irregular (bottom), 1969-1984.]
50 / 59
Seatbelt Law: forecasting
[Figure: Seatbelt Law data with forecasts.]

51 / 59
Textbooks

A.C. Harvey (1989). Forecasting, Structural Time Series
Models and the Kalman Filter. Cambridge University Press.
G. Kitagawa & W. Gersch (1996). Smoothness Priors Analysis
of Time Series. Springer-Verlag.
J. Harrison & M. West (1997). Bayesian Forecasting and
Dynamic Models. Springer-Verlag.
J. Durbin & S.J. Koopman (2012). Time Series Analysis by
State Space Methods, Second Edition. Oxford University
Press.
J.J.F. Commandeur & S.J. Koopman (2007). An Introduction
to State Space Time Series Analysis. Oxford University Press.

52 / 59
Exercises

1. Consider the Local Level model (see slides, see DK Chapter 2).

(a) The reduced form is an ARIMA(0,1,1) process. Derive the
relationship between the signal-to-noise ratio q of the LL model and
the coefficient θ of the ARIMA model;
(b) Derive the reduced form in the case η_t = q ε_t and notice the
difference with the general case;
(c) Give the elements of the mean vector and variance matrix of
y = (y_1, ..., y_n)′ when y_t is generated by a LL model for
t = 1, ..., n.

53 / 59
Exercises

2. (a) Provide a detailed derivation (proof) of the Kalman filter for
the Local Level model.
(b) Discuss the initialisation of the Kalman filter for the Local
Level model.
(c) What are the consequences for the Kalman filter when we let
p_1 → ∞ ?

54 / 59
Exercises
3. Consider the stationary time series model

y_t = μ_t + ε_t,    ε_t ∼ N(0, σ_ε²),
μ_{t+1} = φ μ_t + η_t,    η_t ∼ N(0, σ_η²),

with autoregressive coefficient |φ| < 1 and variances σ_ε² > 0
and σ_η² > 0. The disturbances ε_t and η_s are independent of
each other for all t, s = 1, ..., n.

(a) Explore the dynamic properties of y_t (mean, variance,
autocovariances, autocorrelations).
(b) Assume that the parameters φ, σ_ε² and σ_η² are given. Develop the
Kalman filter recursions for this model.
(c) Propose initial values for the mean and variance of the
autoregressive component μ_t, that is, μ_1 ∼ N(a_1, p_1), and
propose values for a_1 and p_1.
55 / 59
Assignment 1

1. Consider the Local Level model (see slides, see DK Chapter 2).
(a) Implement the Kalman filter for the Local Level model in a
computer program, with initial conditions a_1 = 0 and p_1 = 10⁷,
and for variances σ_ε² = 1 and σ_η² = q with q = 0.1.
(b) Replicate the Figure on page 40 of the slides using the initial and
variance values as provided above.
(c) Present 4 different weight functions (top Figure on page 40)
for q = 10, q = 1, q = 0.1 and q = 0.001.
(d) Apply the Kalman filter to the Nile data with σ_ε² = 15099 and
σ_η² = 1469.1
(the Nile data is part of the DK book data, see page 10 of these slides).
(e) Replicate the Figure on page 38 of the slides.
(f) Repeat this assignment by initialising the Kalman filter at time
t = 2 with a_2 = y_1 and p_2 = σ_ε² + σ_η². Do the numerical
results change very much ? Explain.

56 / 59
Appendix Taylor series
The Taylor expansion of a function f(x) around some value x* is

f(x) = f(x = x*) + f′(x = x*) [x - x*] + (1/2) f″(x = x*) [x - x*]² + ...,

where

f′(x) = ∂f(x) / ∂x,    f″(x) = ∂²f(x) / ∂x²,

and g(x = x*) means that we evaluate the function g(x) at x = x*.
Example: consider f(x) = log(1 + x) with f′(x) = (1 + x)⁻¹ and
f″(x) = -(1 + x)⁻²; the expansion of f(x) around x* = 0 is

log(1 + x) = 0 + 1 · (x - 0) + (1/2) (-1) (x - 0)² + ... = x - (1/2) x² + ...

Notice that f(x = 0) = 0, f′(x = 0) = 1 and f″(x = 0) = -1. For
small enough x (when x is close to x* = 0), we have

log(1 + x) ≈ x.

Check: log(1.01) = 0.00995 ≈ 0.01 and log(1.1) = 0.0953 ≈ 0.1.
57 / 59
Appendix Percentage growth
The observation at time t is y_t and the observation at time t-1 is y_{t-1}.
We define the rate r_t as the proportional change of y_t with respect to y_{t-1}, that is

r_t = (y_t - y_{t-1}) / y_{t-1}   ⟺   y_t - y_{t-1} = y_{t-1} r_t   ⟺   y_t = y_{t-1} (1 + r_t).

We notice that r_t can be positive and negative !
When we take logs of y_t = y_{t-1} (1 + r_t), we obtain

log y_t = log y_{t-1} + log(1 + r_t)   ⟺   log y_t - log y_{t-1} = log(1 + r_t)
   ⟺   Δ log y_t = log(1 + r_t).

Since log(1 + r_t) ≈ r_t when r_t is small (see the previous slide), we have

r_t ≈ Δ log y_t.

The percentage growth is defined as 100 r_t ≈ 100 Δ log y_t.
58 / 59
Appendix Lag operators and polynomials

Lag operator: L y_t = y_{t-1}, L^τ y_t = y_{t-τ}, for τ = 1, 2, 3, ...
Difference operator: Δy_t = (1 - L) y_t = y_t - y_{t-1}
Autoregressive polynomial: φ(L) y_t = (1 - φL) y_t = y_t - φ y_{t-1}
Other polynomial: θ(L) ε_t = (1 + θL) ε_t = ε_t + θ ε_{t-1}
Second difference:
Δ² y_t = Δ(Δy_t) = Δ(y_t - y_{t-1}) = y_t - 2 y_{t-1} + y_{t-2}
Seasonal difference: Δ_s y_t = y_t - y_{t-s} for typical
s = 2, 4, 7, 12, 52
Seasonal sum operator:
S(L) y_t = (1 + L + L² + ... + L^{s-1}) y_t = y_t + y_{t-1} + ... + y_{t-s+1}
Show that S(L) Δ = Δ_s.

59 / 59
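A numerical illustration (not a proof) of the last identity, here for s = 4:

```python
# Numerical illustration that S(L) * Delta equals Delta_s (here s = 4).
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=50)
s = 4

dy = y[1:] - y[:-1]                               # Delta y_t
S_dy = np.convolve(dy, np.ones(s), mode="valid")  # S(L) applied to Delta y_t
ds_y = y[s:] - y[:-s]                             # Delta_s y_t
print(np.allclose(S_dy, ds_y))                    # True
```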
