
Bocconi University, Academic Year 2014-2015

20236 TIME SERIES ANALYSIS OF ECONOMIC-FINANCIAL DATA


Prof. Sonia Petrone
Tutors: Marta Crispino & Lorenzo Cappello

Course contents
May 2015
General notions.
Aims of time series analysis.
What is a time series? Kinds of data (quantitative and qualitative time series; regular and
irregular frequency; high frequency; univariate and multivariate).
(Chatfield, Ch 1)
Part I. Introduction to time series analysis: descriptive techniques.
Classical time series decomposition. (Chatfield, Ch 2, Sections 2.1-2.6. Subsection 2.5.2:
only basic moving average. Section 2.7 will be useful later.)
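Illustration (R): a minimal sketch of classical decomposition using base R's decompose(); the built-in AirPassengers series is an illustrative choice, not a course dataset.

    ## Multiplicative decomposition into trend (moving average), seasonal
    ## component, and remainder.
    data(AirPassengers)
    dec <- decompose(AirPassengers, type = "multiplicative")
    plot(dec)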
Forecasting. Simple exponential smoothing. Holt and Holt-Winters forecasting procedures (exponential smoothing with trend and seasonality: only the general idea).
Some references: Chatfield (2004), Ch. 5, Section 5.2.2; Petris, Petrone, Campagnoli
(2009), Chapter 3, Section 3.3.1.
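Illustration (R): a minimal sketch of simple exponential smoothing and Holt-Winters with base R's HoltWinters(); the Nile and AirPassengers series are illustrative choices.

    ## Simple exponential smoothing: suppress trend (beta) and seasonality (gamma).
    data(Nile)
    ses <- HoltWinters(Nile, beta = FALSE, gamma = FALSE)
    ses$alpha                  # smoothing parameter chosen by least squares

    ## Full Holt-Winters (trend + seasonality) on a seasonal series.
    data(AirPassengers)
    hw <- HoltWinters(AirPassengers, seasonal = "multiplicative")
    predict(hw, n.ahead = 12)  # 12-steps-ahead forecasts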
Part II. Probabilistic models for time series analysis.
Time series as a discrete time stochastic process.
Mean and variance function. Autocovariance and autocorrelation functions.
Strong and weak stationarity.
Estimation of the autocorrelation and partial autocorrelation functions. The correlogram.
Examples: White noise. Gaussian processes.
Random walk.
(reference: Chatfield, Ch. 3, Sections 3.1-3.4)
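Illustration (R): correlograms for two benchmark processes, white noise (stationary) and a random walk (non-stationary); the simulated data are purely illustrative.

    set.seed(1)
    wn <- rnorm(200)           # Gaussian white noise
    rw <- cumsum(rnorm(200))   # random walk: cumulative sum of white noise

    par(mfrow = c(2, 2))
    acf(wn);  pacf(wn)         # sample ACF dies out immediately for white noise
    acf(rw);  pacf(rw)         # sample ACF decays very slowly for a random walk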
Markov processes. Inference for Markov chains and examples.
(reference: lecture notes).
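Illustration (R): a minimal sketch of inference for a finite-state Markov chain; the MLE of the transition matrix is the row-normalized table of observed transitions. The two-state chain below is simulated and purely illustrative.

    set.seed(2)
    P <- rbind(c(0.9, 0.1),
               c(0.3, 0.7))                   # true transition matrix
    x <- numeric(500); x[1] <- 1
    for (t in 2:500) x[t] <- sample(1:2, 1, prob = P[x[t - 1], ])

    trans <- table(head(x, -1), tail(x, -1))  # transition counts n_ij
    P.hat <- prop.table(trans, margin = 1)    # MLE: n_ij / n_i.
    P.hat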
Stationary time series: ARMA models.
Autoregressive processes.
Moving average processes.
ARMA models.
Basic properties of ARMA processes.
Inference for ARMA models (basic notions for AR processes; how inference can be treated by
exploiting the connection with DLMs).
Box-Jenkins strategy to fit ARMA models.
Model checking (residual analysis; forecasting performance, etc. ).
A basic reference is Chatfield (2004), Ch. 3 (pp. 39-49), Ch. 4, and Ch. 5, Section 5.2.4.
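Illustration (R): a minimal Box-Jenkins sketch on simulated data, identifying the order via the correlograms, fitting with arima(), and checking residuals; all numbers are illustrative.

    set.seed(3)
    y <- arima.sim(model = list(ar = c(0.6, 0.2)), n = 300)  # an AR(2) process

    acf(y); pacf(y)                      # identification: PACF cuts off at lag 2
    fit <- arima(y, order = c(2, 0, 0))  # estimation

    ## Model checking: residuals should look like white noise.
    tsdiag(fit)                          # residual plots and Ljung-Box p-values
    Box.test(residuals(fit), lag = 10, type = "Ljung-Box")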

Part III. Inference for non-stationary time series: state-space models
Preliminary notions: elements of Bayesian inference.
Bayesian inference for the mean of a Gaussian population with known variance. Predictive
distribution. (Petris, Petrone, Campagnoli (2009), Chapter 1: the Gaussian example, with
known variance, is on page 7).
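Illustration (R): a minimal sketch of the conjugate update for a Gaussian mean with known variance; with prior mu ~ N(m0, C0) and data y_i ~ N(mu, sigma2), the posterior is Gaussian with the precision-weighted quantities below. All numbers are illustrative.

    m0 <- 0; C0 <- 4        # prior mean and variance
    sigma2 <- 1             # known observation variance
    y <- c(1.2, 0.8, 1.5)   # observed sample
    n <- length(y)

    C.post <- 1 / (1 / C0 + n / sigma2)                  # posterior variance
    m.post <- C.post * (m0 / C0 + n * mean(y) / sigma2)  # posterior mean
    c(mean = m.post, var = C.post)
    ## Predictive distribution of a new observation: N(m.post, C.post + sigma2).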
State-space models for time series analysis.
(reference: Petris, Petrone, Campagnoli, Ch 2)
Main assumptions.
Examples:
Random walk plus noise.
Hidden Markov models.
Dynamic linear models (DLMs).
Examples of non-linear, non-Gaussian models.
Basic problems: filtering, smoothing, forecasting.
Filtering for DLMs: Kalman filter.
Examples: Random walk plus noise. Local trend model.
Forecasting.
One-step-ahead and k-steps-ahead forecasting.
The innovation process. Model checking.
Smoothing: the Kalman smoother.
(Petris, Petrone, Campagnoli (2009), Chapter 3.)
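Illustration (R): a minimal sketch of filtering, smoothing, and forecasting for a random walk plus noise, using the dlm package of Petris, Petrone, Campagnoli; the variances are fixed near the MLEs reported for the Nile series rather than estimated here.

    library(dlm)
    data(Nile)

    mod <- dlmModPoly(order = 1, dV = 15100, dW = 1468)  # random walk plus noise
    filt <- dlmFilter(Nile, mod)      # Kalman filter: filtering means in filt$m
    smth <- dlmSmooth(filt)           # Kalman smoother: smoothing means in smth$s
    fore <- dlmForecast(filt, nAhead = 5)  # k-steps-ahead forecasts

    plot(Nile, type = "o")
    lines(dropFirst(filt$m), col = "blue")  # filtered level
    lines(dropFirst(smth$s), col = "red")   # smoothed level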
Maximum likelihood estimation of unknown parameters. (Petris, Petrone, Campagnoli (2009), Chapter 4, section 4.1).
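Illustration (R): a minimal sketch of MLE for the two variances of the random walk plus noise via dlmMLE(); the parameters are log-variances, an illustrative parametrization that keeps them positive.

    library(dlm)
    buildRW <- function(par)
      dlmModPoly(order = 1, dV = exp(par[1]), dW = exp(par[2]))
    mle <- dlmMLE(Nile, parm = c(0, 0), build = buildRW)
    exp(mle$par)     # estimated (dV, dW)
    mle$convergence  # 0 indicates successful convergence of optim()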
Model specification: DLMs for time series analysis.
Combining dynamic linear models.
Structural time series DLMs:
Trend models.
Random walk plus noise. Derivation of its forecast function and main properties.
Local trend (or linear growth) and main properties.
DLMs for seasonality (Petris, Petrone, Campagnoli, Chapter 3; only local seasonal factors DLMs; no Fourier-based DLMs).
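Illustration (R): combining structural DLMs; dlm model objects add component-wise, so a local linear trend plus local seasonal factors is just a sum. The variances below are illustrative placeholders.

    library(dlm)
    mod <- dlmModPoly(order = 2, dV = 1, dW = c(0.1, 0.01)) +  # local linear trend
           dlmModSeas(frequency = 12, dV = 0,
                      dW = c(0.05, rep(0, 10)))                # monthly seasonal factors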
DLMs for dynamic regression.
Example: dynamic CAPM models.
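Illustration (R): a minimal sketch of a dynamic regression DLM in the spirit of dynamic CAPM, where intercept (alpha) and slope (beta) evolve over time; x and y below are simulated placeholders for market and asset excess returns.

    library(dlm)
    set.seed(4)
    x <- rnorm(100)                             # hypothetical market excess return
    beta <- 1 + cumsum(rnorm(100, sd = 0.05))   # slowly varying "beta"
    y <- 0.1 + beta * x + rnorm(100, sd = 0.3)  # hypothetical asset excess return

    mod <- dlmModReg(x, addInt = TRUE, dV = 0.1, dW = c(0.001, 0.0025))
    filt <- dlmFilter(y, mod)
    ## The columns of filt$m track the time-varying alpha and beta.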
DLM representations of ARMA models (Chapter 3. Details only for the AR(2) model).
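Illustration (R): the DLM representation of an AR(2) process via dlmModARMA(); setting dV = 0 puts all the dynamics in the state equation. Coefficients are illustrative.

    library(dlm)
    modAR2 <- dlmModARMA(ar = c(0.6, 0.2), ma = NULL, sigma2 = 1, dV = 0)
    modAR2$GG  # state transition matrix: the companion form of the AR(2)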
Notion of conditionally Gaussian state space models.
Example: estimating the output gap.
DLMs for multivariate time series.
Vector autoregressive models as DLMs (only the general idea).

Seemingly unrelated regressions (SUR).
Seemingly unrelated time series equations (SUTSE).
Factor models (only the general idea).
Bayesian inference for unknown parameters in a DLM.
Basic notions of Bayesian inference vs MLE for unknown parameters in a DLM.
(part of Petris, Petrone, Campagnoli, Chapter 4).
Bayesian inference by stochastic simulation: only the general idea of Markov chain Monte
Carlo (MCMC).
(Further reading, not mandatory: Petris, Petrone, Campagnoli, Chapter 1 and Chapter 4.)
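Illustration (R): a minimal MCMC sketch using dlmGibbsDIG() from the dlm package (Gibbs sampling with d-inverse-gamma priors on the unknown variances); the prior and sample-size settings below are illustrative, and the argument names should be checked against the package documentation.

    library(dlm)
    out <- dlmGibbsDIG(Nile, mod = dlmModPoly(order = 1),
                       a.y = 1, b.y = 1000,          # prior on the observation precision
                       a.theta = 1, b.theta = 1000,  # prior on the evolution precision
                       n.sample = 1000, thin = 1, save.states = FALSE)
    ## Posterior draws of the variances are in out$dV and out$dW.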

TEXTBOOKS:
PART I Classical time series analysis.
Chatfield, C. (2004). The Analysis of Time Series (6th edition). Chapman and Hall/CRC.
A classical, popular textbook, simple but quite clear and generally well written. We
mainly study Chapters 1-5.
PART II: State-space models
Lecture notes: Introduction to Markov chains, posted on the learning space.
Petris, G, Petrone, S. and Campagnoli, P. (2009). Dynamic linear models with R. Springer,
N.Y.
These are the course lecture notes (and more): Chapters 1-3 and part of Chapter 4.
Further references (NOT exhaustive, and NOT textbooks for this course):
Makridakis, Wheelwright and Hyndman (1998). Forecasting: methods and applications. 3rd
Edition, Wiley.
This is a simple book, but Chapter 3 is a reasonable reference on classical time series decomposition.
Tsay, R.S. (2010). Analysis of Financial Time Series, Third Edition. Wiley-Interscience.
Hamilton (1994). Time Series Analysis. Princeton University Press.
Durbin and Koopman (2001). Time Series Analysis by State Space Methods. Oxford University Press.
Further reading on Kalman filtering and smoothing.
Petris, G. and Petrone, S. (2011). State Space Models in R. Journal of Statistical Software,
41, http://www.jstatsoft.org/v41/i04
This special issue of the Journal of Statistical Software, http://www.jstatsoft.org/v41 ,
presents implementations of state-space models in different software frameworks.

ASSIGNMENTS AND EXAM.


Assignments: The assignments during the course are not compulsory. They can be done
individually or in groups. The aim of the assignments is to encourage you to practice, try
your own data analyses, follow the lectures, and ask questions if anything is unclear.
These assignments are not formally evaluated for the final exam, but students who hand in the
periodic assignments can skip the questions on DLMs with R in the written exam.
The final assignment (or final project) consists of a guided analysis of a problem with
real data. It is mandatory and counts for 20% of the final grade (see below).

Exam.
Final project: it can be done individually or in groups. The final project is due by the
day of the written exam (if the members of a group take the written exam on different dates,
it has to be delivered by the first date). The final project counts for 20% of the final grade.
Written exam: The written exam includes general questions on the course contents and
questions on the implementation of time series analysis with R; the latter are not compulsory
for students who handed in the periodic assignments.
The written exam counts for 80% of the final grade. Because the final project is not necessarily
done individually, a poorly done written exam may count for 100% of the grade.
If a student fails the written exam, the final project grade is kept.
The 80% and 20% weights may change slightly according to the teacher's overall evaluation,
which takes into account the work done during the course, the assignments, etc.
Some non-exhaustive examples of possible questions for the written exam (more on the
learning space):
A time series is a (discrete time) stochastic process. Explain.
Describe simple exponential smoothing techniques for prediction.
When is a time series (Yt) weakly (strongly) stationary? When and how can you estimate the
autocorrelation function?
When (Yt ) is a Markov chain and questions on properties.
Define an AR(p) model.
Define an AR(1) process and discuss its main properties.
Define a general state space model and provide some examples...
Explain, giving at least an idea of the proofs, the Kalman filter for DLMs.
Compute the forecast function for the random walk plus noise model and give the expression of
credibility intervals for the h-steps-ahead prediction. Comment briefly.
Explain SUTSE models.
Compare MLE and Bayesian estimation of unknown parameters of a DLM.
