
STOCHASTIC PROCESSES AND APPLICATIONS

G.A. Pavliotis
Department of Mathematics
Imperial College London
London SW7 2AZ, UK
February 23, 2011
Contents
Preface vii
1 Introduction 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Historical Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 The One-Dimensional Random Walk . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Stochastic Modeling of Deterministic Chaos . . . . . . . . . . . . . . . . . . . . . 6
1.5 Why Randomness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Elements of Probability Theory 9
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Basic Definitions from Probability Theory . . . . . . . . . . . . . . . . . . . . 9
2.2.1 Conditional Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.1 Expectation of Random Variables . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Conditional Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5 The Characteristic Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6 Gaussian Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.7 Types of Convergence and Limit Theorems . . . . . . . . . . . . . . . . . . . . . 23
2.8 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3 Basics of the Theory of Stochastic Processes 29
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Definition of a Stochastic Process . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 Stationary Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.1 Strictly Stationary Processes . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.2 Second Order Stationary Processes . . . . . . . . . . . . . . . . . . . . . 32
3.3.3 Ergodic Properties of Second-Order Stationary Processes . . . . . . . . . . 37
3.4 Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.5 Other Examples of Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . 44
3.5.1 Brownian Bridge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.5.2 Fractional Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.5.3 The Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.6 The Karhunen-Loève Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.7 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4 Markov Processes 57
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3 Definition of a Markov Process . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.4 The Chapman-Kolmogorov Equation . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.5 The Generator of a Markov Process . . . . . . . . . . . . . . . . . . . . . . . . 67
4.5.1 The Adjoint Semigroup . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.6 Ergodic Markov processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.6.1 Stationary Markov Processes . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.7 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5 Diffusion Processes 77
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Definition of a Diffusion Process . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.3 The Backward and Forward Kolmogorov Equations . . . . . . . . . . . . . . . . . 79
5.3.1 The Backward Kolmogorov Equation . . . . . . . . . . . . . . . . . . . . 79
5.3.2 The Forward Kolmogorov Equation . . . . . . . . . . . . . . . . . . . . . 81
5.4 Multidimensional Diffusion Processes . . . . . . . . . . . . . . . . . . . . . . . . 84
5.5 Connection with Stochastic Differential Equations . . . . . . . . . . . . . . . . . . 84
5.6 Examples of Diffusion Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.7 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6 The Fokker-Planck Equation 87
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.2 Basic Properties of the FP Equation . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.2.1 Existence and Uniqueness of Solutions . . . . . . . . . . . . . . . . . . . 88
6.2.2 The FP equation as a conservation law . . . . . . . . . . . . . . . . . . . . 89
6.2.3 Boundary conditions for the Fokker-Planck equation . . . . . . . . . . . 90
6.3 Examples of Diffusion Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.3.1 Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.3.2 The Ornstein-Uhlenbeck Process . . . . . . . . . . . . . . . . . . . . . . . 95
6.3.3 The Geometric Brownian Motion . . . . . . . . . . . . . . . . . . . . . . 99
6.4 The Ornstein-Uhlenbeck Process and Hermite Polynomials . . . . . . . . . . . . . 100
6.5 Reversible Diffusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.5.1 Markov Chain Monte Carlo (MCMC) . . . . . . . . . . . . . . . . . . . . 111
6.6 Perturbations of non-Reversible Diffusions . . . . . . . . . . . . . . . . . . . . . . 112
6.7 Eigenfunction Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.7.1 Reduction to a Schrödinger Equation . . . . . . . . . . . . . . . . . . . 114
6.8 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7 Stochastic Differential Equations 119
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.2 The Itô and Stratonovich Stochastic Integral . . . . . . . . . . . . . . . . . . . 119
7.2.1 The Stratonovich Stochastic Integral . . . . . . . . . . . . . . . . . . . . . 121
7.3 Stochastic Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.3.1 Examples of SDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7.4 The Generator, Itô's formula and the Fokker-Planck Equation . . . . . . . . . . 125
7.4.1 The Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.4.2 Itô's Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.5 Linear SDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.6 Derivation of the Stratonovich SDE . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.6.1 Itô versus Stratonovich . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.7 Numerical Solution of SDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.8 Parameter Estimation for SDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.9 Noise Induced Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.10 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8 The Langevin Equation 137
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.2 The Fokker-Planck Equation in Phase Space (Klein-Kramers Equation) . . . . . . 137
8.3 The Langevin Equation in a Harmonic Potential . . . . . . . . . . . . . . . . . . . 142
8.4 Asymptotic Limits for the Langevin Equation . . . . . . . . . . . . . . . . . . . . 151
8.4.1 The Overdamped Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
8.4.2 The Underdamped Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.5 Brownian Motion in Periodic Potentials . . . . . . . . . . . . . . . . . . . . . . . 164
8.5.1 The Langevin equation in a periodic potential . . . . . . . . . . . . . . . . 164
8.5.2 Equivalence With the Green-Kubo Formula . . . . . . . . . . . . . . . . . 170
8.6 The Underdamped and Overdamped Limits of the Diffusion Coefficient . . . . . 171
8.6.1 Brownian Motion in a Tilted Periodic Potential . . . . . . . . . . . . . . . 180
8.7 Numerical Solution of the Klein-Kramers Equation . . . . . . . . . . . . . . . . . 183
8.8 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
8.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9 Exit Time Problems 185
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9.2 Brownian Motion in a Bistable Potential . . . . . . . . . . . . . . . . . . . . . . . 185
9.3 The Mean First Passage Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.3.1 The Boundary Value Problem for the MFPT . . . . . . . . . . . . . . . . . 188
9.3.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
9.4 Escape from a Potential Barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.4.1 Calculation of the Reaction Rate in the Overdamped Regime . . . . . . . . 193
9.4.2 The Intermediate Regime: γ = O(1) . . . . . . . . . . . . . . . . . . . . 194
9.4.3 Calculation of the Reaction Rate in the energy-diffusion-limited regime . . 195
9.5 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
10 Stochastic Resonance and Brownian Motors 199
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.2 Stochastic Resonance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.3 Brownian Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.4 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.5 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
10.6 Multiscale Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.6.1 Calculation of the Effective Drift . . . . . . . . . . . . . . . . . . . . . . . 203
10.6.2 Calculation of the Effective Diffusion Coefficient . . . . . . . . . . . . 205
10.7 Effective Diffusion Coefficient for Correlation Ratchets . . . . . . . . . . . . . 207
10.8 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
10.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
11 Stochastic Processes and Statistical Mechanics 213
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
11.2 The Kac-Zwanzig Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
11.3 Quasi-Markovian Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . 218
11.3.1 Open Classical Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
11.4 The Mori-Zwanzig Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
11.5 Derivation of the Fokker-Planck and Langevin Equations . . . . . . . . . . . . . . 224
11.6 Linear Response Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
11.7 Discussion and Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
11.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Preface
The purpose of these notes is to present various results and techniques from the theory of stochastic
processes that are useful in the study of stochastic problems in physics, chemistry and other areas.
These notes have been used for several years for a course on applied stochastic processes offered to
fourth year and to MSc students in applied mathematics at the Department of Mathematics, Imperial
College London.
G.A. Pavliotis
London, December 2010
Chapter 1
Introduction
1.1 Introduction
In this chapter we introduce some of the concepts and techniques that we will study in this book.
In Section 1.2 we present a brief historical overview on the development of the theory of stochastic
processes in the twentieth century. In Section 1.3 we introduce the one-dimensional random walk,
and we use this example in order to introduce several concepts such as Brownian motion and the Markov
property. In Section 1.4 we discuss the stochastic modeling of deterministic chaos. Some
comments on the role of probabilistic modeling in the physical sciences are offered in Section 1.5.
Discussion and bibliographical comments are presented in Section 1.6. Exercises are included in
Section 1.7.
1.2 Historical Overview
The theory of stochastic processes, at least in terms of its application to physics, started with
Einstein's work on the theory of Brownian motion: "Concerning the motion, as required by the
molecular-kinetic theory of heat, of particles suspended in liquids at rest" (1905) and in a series
of additional papers that were published in the period 1905-1906. In these fundamental works,
Einstein presented an explanation of Brown's observation (1827) that when suspended in water,
small pollen grains are found to be in a very animated and irregular state of motion. In develop-
ing his theory Einstein introduced several concepts that still play a fundamental role in the study
of stochastic processes and that we will study in this book. Using modern terminology, Einstein
introduced a Markov chain model for the motion of the particle (molecule, pollen grain...). Furthermore, he introduced the idea that it makes more sense to talk about the probability of finding the particle at position x at time t, rather than about individual trajectories.
In his work many of the main aspects of the modern theory of stochastic processes can be found:
• The assumption of Markovianity (no memory) expressed through the Chapman-Kolmogorov equation.
• The Fokker-Planck equation (in this case, the diffusion equation).
• The derivation of the Fokker-Planck equation from the master (Chapman-Kolmogorov) equation through a Kramers-Moyal expansion.
• The calculation of a transport coefficient (the diffusion coefficient) using macroscopic (kinetic theory-based) considerations:
\[ D = \frac{k_B T}{6 \pi \eta a}. \]
k_B is Boltzmann's constant, T is the temperature, η is the viscosity of the fluid and a is the diameter of the particle.
Einstein's theory is based on the Fokker-Planck equation. Langevin (1908) developed a theory
based on a stochastic differential equation. The equation of motion for a Brownian particle is
\[ m \frac{d^2 x}{dt^2} = -6 \pi \eta a \frac{dx}{dt} + \xi, \]
where ξ is a random force. It can be shown that there is complete agreement between Einstein's
theory and Langevin's theory. The theory of Brownian motion was developed independently by
Smoluchowski, who also performed several experiments.
The approaches of Langevin and Einstein represent the two main approaches in the theory of stochastic processes:
• Study individual trajectories of Brownian particles. Their evolution is governed by a stochastic differential equation:
\[ \frac{dX}{dt} = F(X) + \sigma(X)\,\xi(t), \]
where ξ(t) is a random force.
• Study the probability ρ(x, t) of finding a particle at position x at time t. This probability distribution satisfies the Fokker-Planck equation:
\[ \frac{\partial \rho}{\partial t} = -\nabla \cdot \big(F(x)\,\rho\big) + \frac{1}{2}\nabla\nabla : \big(A(x)\,\rho\big), \]
where A(x) = σ(x)σ(x)^T.
The theory of stochastic processes was developed during the 20th century by several mathematicians and physicists including Smoluchowski, Planck, Kramers, Chandrasekhar, Wiener, Kolmogorov, Itô, Doob.
1.3 The One-Dimensional Random Walk
We let time be discrete, i.e. t = 0, 1, .... Consider the following stochastic process S_n: S_0 = 0; at each time step it moves to ±1 with equal probability 1/2.
In other words, at each time step we flip a fair coin. If the outcome is heads, we move one unit to the right. If the outcome is tails, we move one unit to the left.
Alternatively, we can think of the random walk as a sum of independent random variables:
\[ S_n = \sum_{j=1}^{n} X_j, \]
where X_j ∈ {−1, 1} with P(X_j = ±1) = 1/2.
We can simulate the random walk on a computer (a minimal sketch is given right after this list):
• We need a (pseudo)random number generator to generate n independent random variables which are uniformly distributed in the interval [0, 1].
• If the value of the random variable is less than 1/2 then the particle moves to the left, otherwise it moves to the right.
• We then take the sum of all these random moves.
• The sequence {S_n}_{n=1}^{N} indexed by the discrete time T = {1, 2, ..., N} is the path of the random walk. We use a linear interpolation (i.e. connect the points (n, S_n) by straight lines) to generate a continuous path.
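A minimal Python sketch of this procedure (illustrative code, not part of the original notes; the variable names and the chosen convention "below 1/2 means a step to the left" are ours):

```python
# Simulate a few paths of the one-dimensional random walk
# S_n = X_1 + ... + X_n with P(X_j = +1) = P(X_j = -1) = 1/2.
import numpy as np

rng = np.random.default_rng(0)
N, n_paths = 1000, 3

# Uniform [0,1] variables; a value below 1/2 is a step to the left, otherwise to the right.
U = rng.uniform(size=(n_paths, N))
X = np.where(U < 0.5, -1, 1)

# S_0 = 0, S_n = X_1 + ... + X_n.
S = np.hstack([np.zeros((n_paths, 1)), np.cumsum(X, axis=1)])

# Sanity check: over many paths, the sample mean of S_N is close to 0
# and the sample mean of S_N^2 is close to N.
print(S[:, -1])
```

Plotting each row of S against the step index (with linear interpolation) reproduces figures such as Figures 1.1 and 1.2 below.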
Figure 1.1: Three paths of the random walk of length N = 50.
Figure 1.2: Three paths of the random walk of length N = 1000.
Figure 1.3: Sample Brownian paths.
Every path of the random walk is different: it depends on the outcome of a sequence of indepen-
dent random experiments. We can compute statistics by generating a large number of paths and
computing averages. For example, E(S_n) = 0, E(S_n^2) = n. The paths of the random walk (without
the linear interpolation) are not continuous: the random walk has a jump of size 1 at each time step.
This is an example of a discrete time, discrete space stochastic process. The random walk is a
time-homogeneous Markov process. If we take a large number of steps, the random walk starts
looking like a continuous time process with continuous paths.
We can quantify this observation by introducing an appropriate rescaled process and by taking
an appropriate limit. Consider the sequence of continuous time stochastic processes
\[ Z_t^n := \frac{1}{\sqrt{n}} S_{[nt]}. \]
In the limit as n → ∞, the sequence {Z_t^n} converges (in some appropriate sense, that will be
made precise in later chapters) to a Brownian motion with diffusion coefficient D = \frac{(\Delta x)^2}{2\Delta t} = \frac{1}{2}.
Brownian motion W(t) is a continuous time stochastic process with continuous paths that starts
at 0 (W(0) = 0) and has independent, normally distributed (Gaussian) increments. We can simulate
Brownian motion on a computer using a random number generator that generates normally
distributed, independent random variables. We can write an equation for the evolution of the paths
of a Brownian motion X_t with diffusion coefficient D starting at x:
\[ dX_t = \sqrt{2D}\, dW_t, \qquad X_0 = x. \]
This is the simplest example of a stochastic differential equation. The probability of finding X_t at y at time t, given that it was at x at time t = 0, the transition probability density ρ(y, t), satisfies the PDE
\[ \frac{\partial \rho}{\partial t} = D \frac{\partial^2 \rho}{\partial y^2}, \qquad \rho(y, 0) = \delta(y - x).
\]
This is the simplest example of the Fokker-Planck equation. The connection between Brownian
motion and the diffusion equation was made by Einstein in 1905.
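A minimal sketch of simulating such a path (illustrative code, not from the notes): since the increments of X_t over a time step Δt are independent Gaussians with mean 0 and variance 2DΔt, a path can be built by summing such increments.

```python
# Simulate one path of Brownian motion X_t with diffusion coefficient D,
# using independent Gaussian increments X_{t+dt} - X_t ~ N(0, 2*D*dt).
import numpy as np

rng = np.random.default_rng(1)
D, x0 = 0.5, 0.0          # D = 1/2 corresponds to the rescaled random walk above
T, n_steps = 1.0, 1000
dt = T / n_steps

increments = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
X = x0 + np.concatenate(([0.0], np.cumsum(increments)))

# Averaged over many independent paths, Var(X_T - x0) is approximately 2*D*T.
print(X[-1])
```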
1.4 Stochastic Modeling of Deterministic Chaos
1.5 Why Randomness
Why introduce randomness in the description of physical systems?
• To describe outcomes of a repeated set of experiments. Think of tossing a coin repeatedly or of throwing a die.
• To describe a deterministic system for which we have incomplete information: we have imprecise knowledge of initial and boundary conditions or of model parameters. ODEs with random initial conditions are equivalent to stochastic processes that can be described using stochastic differential equations.
• To describe systems for which we are not confident about the validity of our mathematical model.
• To describe a dynamical system exhibiting very complicated behavior (chaotic dynamical systems). Determinism versus predictability.
• To describe a high dimensional deterministic system using a simpler, low dimensional stochastic system. Think of the physical model for Brownian motion (a heavy particle colliding with many small particles).
• To describe a system that is inherently random. Think of quantum mechanics.
Stochastic modeling is currently used in many different areas ranging from biology to climate
modeling to economics.
1.6 Discussion and Bibliography
The fundamental papers of Einstein on the theory of Brownian motion have been reprinted by
Dover [20]. The readers of this book are strongly encouraged to study these papers. Other fun-
damental papers from the early period of the development of the theory of stochastic processes
include the papers by Langevin, Ornstein and Uhlenbeck, Doob, Kramers and Chandrashekhars
famous review article [12]. Many of these early papers on the theory of stochastic processes have
been reprinted in [18]. Very useful historical comments can be founds in the books by Nelson [68]
and Mazo [66].
1.7 Exercises
1. Read the papers by Einstein, Ornstein-Uhlenbeck, Doob etc.
2. Write a computer program for generating the random walk in one and two dimensions. Study
numerically the Brownian limit and compute the statistics of the random walk.
Chapter 2
Elements of Probability Theory
2.1 Introduction
In this chapter we put together some basic definitions and results from probability theory that will
be used later on. In Section 2.2 we give some basic definitions from the theory of probability.
In Section 2.3 we present some properties of random variables. In Section 2.4 we introduce the
concept of conditional expectation and in Section 2.5 we define the characteristic function, one of
the most useful tools in the study of (sums of) random variables. Some explicit calculations for
the multivariate Gaussian distribution are presented in Section 2.6. Different types of convergence
and the basic limit theorems of the theory of probability are discussed in Section 2.7. Discussion
and bibliographical comments are presented in Section 2.8. Exercises are included in Section 2.9.
2.2 Basic Definitions from Probability Theory
In Chapter 1 we defined a stochastic process as a dynamical system whose law of evolution is
probabilistic. In order to study stochastic processes we need to be able to describe the outcome of
a random experiment and to calculate functions of this outcome. First we need to describe the set
of all possible experiments.
Definition 2.2.1. The set of all possible outcomes of an experiment is called the sample space and is denoted by Ω.
Example 2.2.2. The possible outcomes of the experiment of tossing a coin are H and T. The sample space is Ω = {H, T}.
The possible outcomes of the experiment of throwing a die are 1, 2, 3, 4, 5 and 6. The sample space is Ω = {1, 2, 3, 4, 5, 6}.
We define events to be subsets of the sample space. Of course, we would like the unions,
intersections and complements of events to also be events. When the sample space is uncountable,
then technical difficulties arise. In particular, not all subsets of the sample space need to be
events. A definition of the collection of subsets of events which is appropriate for finite additive
probability is the following.
Definition 2.2.3. A collection F of subsets of Ω is called a field on Ω if
i. ∅ ∈ F;
ii. if A ∈ F then A^c ∈ F;
iii. if A, B ∈ F then A ∪ B ∈ F.
From the definition of a field we immediately deduce that F is closed under finite unions and finite intersections:
\[ A_1, \dots, A_n \in \mathcal{F} \;\Longrightarrow\; \bigcup_{i=1}^{n} A_i \in \mathcal{F}, \quad \bigcap_{i=1}^{n} A_i \in \mathcal{F}. \]
When Ω is infinite dimensional then the above definition is not appropriate since we need to
consider countable unions of events.
Definition 2.2.4. A collection F of subsets of Ω is called a σ-field or σ-algebra on Ω if
i. ∅ ∈ F;
ii. if A ∈ F then A^c ∈ F;
iii. if A_1, A_2, ... ∈ F then ∪_{i=1}^{∞} A_i ∈ F.
A σ-algebra is closed under the operation of taking countable intersections.
Example 2.2.5. • F = {∅, Ω}.
• F = {∅, A, A^c, Ω}, where A is a subset of Ω.
• The power set of Ω, denoted by {0, 1}^Ω, which contains all subsets of Ω.
Let F be a collection of subsets of Ω. It can be extended to a σ-algebra (take for example the
power set of Ω). Consider all the σ-algebras that contain F and take their intersection, denoted
by σ(F), i.e. A ∈ σ(F) if and only if it is in every σ-algebra containing F. σ(F) is a σ-algebra
(see Exercise 1). It is the smallest σ-algebra containing F and it is called the σ-algebra generated by F.
Example 2.2.6. Let Ω = R^n. The σ-algebra generated by the open subsets of R^n (or, equivalently,
by the open balls of R^n) is called the Borel σ-algebra of R^n and is denoted by B(R^n).
Let X be a closed subset of R^n. Similarly, we can define the Borel σ-algebra of X, denoted by B(X).
A sub-σ-algebra is a collection of subsets of a σ-algebra which satisfies the axioms of a σ-algebra.
The σ-field F of a sample space Ω contains all possible outcomes of the experiment that we
want to study. Intuitively, the σ-field contains all the information about the random experiment
that is available to us.
Now we want to assign probabilities to the possible outcomes of an experiment.
Definition 2.2.7. A probability measure P on the measurable space (Ω, F) is a function P : F → [0, 1] satisfying
i. P(∅) = 0, P(Ω) = 1;
ii. for A_1, A_2, ... with A_i ∩ A_j = ∅, i ≠ j, we have
\[ P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i). \]
Definition 2.2.8. The triple (Ω, F, P), comprising a set Ω, a σ-algebra F of subsets of Ω and a
probability measure P on (Ω, F), is called a probability space.
Example 2.2.9. A biased coin is tossed once: Ω = {H, T}, F = {∅, {H}, {T}, Ω} = {0, 1}^Ω, P : F → [0, 1] such that P(∅) = 0, P(H) = p ∈ [0, 1], P(T) = 1 − p, P(Ω) = 1.
Example 2.2.10. Take Ω = [0, 1], F = B([0, 1]), P = Leb([0, 1]). Then (Ω, F, P) is a probability space.
2.2.1 Conditional Probability
One of the most important concepts in probability is that of the dependence between events.
Definition 2.2.11. A family {A_i : i ∈ I} of events is called independent if
\[ P\Big(\bigcap_{j \in J} A_j\Big) = \prod_{j \in J} P(A_j) \]
for all finite subsets J of I.
When two events A, B are dependent it is important to know the probability that the event
A will occur, given that B has already happened. We define this to be the conditional probability,
denoted by P(A|B). We know from elementary probability that
\[ P(A|B) = \frac{P(A \cap B)}{P(B)}. \]
A very useful result is the law of total probability.
Definition 2.2.12. A family of events {B_i : i ∈ I} is called a partition of Ω if
\[ B_i \cap B_j = \emptyset, \quad i \neq j, \quad \text{and} \quad \bigcup_{i \in I} B_i = \Omega. \]
Proposition 2.2.13. (Law of total probability). For any event A and any partition {B_i : i ∈ I} we have
\[ P(A) = \sum_{i \in I} P(A|B_i)\, P(B_i). \]
The proof of this result is left as an exercise. In many cases the calculation of the probability
of an event is simplified by choosing an appropriate partition of Ω and using the law of total
probability.
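As a toy illustration (a hypothetical example, not from the notes): one of two biased coins is chosen uniformly at random (this choice is the partition {B_1, B_2}) and then tossed once; the law of total probability gives P(H) = P(H|B_1)P(B_1) + P(H|B_2)P(B_2).

```python
# Law of total probability for a two-coin example, checked by Monte Carlo.
import numpy as np

rng = np.random.default_rng(2)
p_heads = {"coin1": 0.9, "coin2": 0.3}   # illustrative values of P(H | B_i)
p_coin = {"coin1": 0.5, "coin2": 0.5}    # P(B_i)

# Exact value from the law of total probability.
p_total = sum(p_heads[c] * p_coin[c] for c in p_coin)

# Monte Carlo estimate of the same probability.
n = 100_000
coins = rng.choice(list(p_coin), size=n, p=list(p_coin.values()))
heads = rng.uniform(size=n) < np.array([p_heads[c] for c in coins])
print(p_total, heads.mean())   # the two numbers should be close
```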
Let (Ω, F, P) be a probability space and fix B ∈ F. Then P(·|B) defines a probability measure
on F. Indeed, we have that
\[ P(\emptyset|B) = 0, \qquad P(\Omega|B) = 1 \]
and (since A_i ∩ A_j = ∅ implies that (A_i ∩ B) ∩ (A_j ∩ B) = ∅)
\[ P\Big(\bigcup_{j=1}^{\infty} A_j \,\Big|\, B\Big) = \sum_{j=1}^{\infty} P(A_j|B), \]
for a countable family of pairwise disjoint sets {A_j}_{j=1}^{+∞}. Consequently, (Ω, F, P(·|B)) is a probability space for every B ∈ F.
2.3 Random Variables
We are usually interested in the consequences of the outcome of an experiment, rather than the
experiment itself. The function of the outcome of an experiment is a random variable, that is, a
map from Ω to R.
Definition 2.3.1. A sample space Ω equipped with a σ-field of subsets F is called a measurable space.
Definition 2.3.2. Let (Ω, F) and (E, G) be two measurable spaces. A function X : Ω → E such that the event
\[ \{\omega \in \Omega : X(\omega) \in A\} =: \{X \in A\} \qquad (2.1) \]
belongs to F for arbitrary A ∈ G is called a measurable function or random variable.
When E is R equipped with its Borel σ-algebra, then (2.1) can be replaced with
\[ \{X \leq x\} \in \mathcal{F} \quad \forall x \in \mathbb{R}. \]
Let X be a random variable (measurable function) from (Ω, F, μ) to (E, G). If E is a metric space
then we may define expectation with respect to the measure μ by
\[ E[X] = \int_{\Omega} X(\omega)\, d\mu(\omega). \]
More generally, let f : E → R be G-measurable. Then,
\[ E[f(X)] = \int_{\Omega} f(X(\omega))\, d\mu(\omega). \]
Let U be a topological space. We will use the notation B(U) to denote the Borel σ-algebra of U:
the smallest σ-algebra containing all open sets of U. Every random variable from a probability
space (Ω, F, μ) to a measurable space (E, B(E)) induces a probability measure on E:
\[ \mu_X(B) = P X^{-1}(B) = \mu(\omega \in \Omega;\ X(\omega) \in B), \quad B \in B(E). \qquad (2.2) \]
The measure μ_X is called the distribution (or sometimes the law) of X.
Example 2.3.3. Let I denote a subset of the positive integers. A vector ρ_0 = {ρ_{0,i}, i ∈ I} is a
distribution on I if it has nonnegative entries and its total mass equals 1:
\[ \sum_{i \in I} \rho_{0,i} = 1. \]
Consider the case where E = R equipped with the Borel σ-algebra. In this case a random
variable is defined to be a function X : Ω → R such that
\[ \{\omega \in \Omega : X(\omega) \leq x\} \in \mathcal{F} \quad \forall x \in \mathbb{R}. \]
We can now define the probability distribution function of X, F_X : R → [0, 1], as
\[ F_X(x) = P\big(\{\omega \in \Omega \,|\, X(\omega) \leq x\}\big) =: P(X \leq x). \qquad (2.3) \]
In this case, (R, B(R), F_X) becomes a probability space.
The distribution function F_X(x) of a random variable has the properties that lim_{x→−∞} F_X(x) = 0, lim_{x→+∞} F_X(x) = 1, and F_X is right continuous.
Definition 2.3.4. A random variable X with values on R is called discrete if it takes values in
some countable subset {x_0, x_1, x_2, ...} of R, i.e. P(X = x) ≠ 0 only for x = x_0, x_1, ....
With a discrete random variable we can associate the probability mass function p_k = P(X = x_k).
We will consider nonnegative integer valued discrete random variables. In this case p_k = P(X = k), k = 0, 1, 2, ....
Example 2.3.5. The Poisson random variable is the nonnegative integer valued random variable with probability mass function
\[ p_k = P(X = k) = \frac{\lambda^k}{k!} e^{-\lambda}, \quad k = 0, 1, 2, \dots, \]
where λ > 0.
Example 2.3.6. The binomial random variable is the nonnegative integer valued random variable with probability mass function
\[ p_k = P(X = k) = \frac{N!}{k!\,(N-k)!}\, p^k q^{N-k}, \quad k = 0, 1, 2, \dots, N, \]
where p ∈ (0, 1), q = 1 − p.
Definition 2.3.7. A random variable X with values on R is called continuous if P(X = x) = 0 for all x ∈ R.
Let (Ω, F, P) be a probability space and let X : Ω → R be a random variable with distribution
F_X. This is a probability measure on B(R). We will assume that it is absolutely continuous with
respect to the Lebesgue measure with density ρ_X: F_X(dx) = ρ_X(x) dx. We will call the density
ρ_X(x) the probability density function (PDF) of the random variable X.
Example 2.3.8. i. The exponential random variable has PDF
\[ f(x) = \begin{cases} \lambda e^{-\lambda x}, & x > 0, \\ 0, & x < 0, \end{cases} \]
with λ > 0.
ii. The uniform random variable has PDF
\[ f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b, \\ 0, & x \notin (a, b), \end{cases} \]
with a < b.
Definition 2.3.9. Two random variables X and Y are independent if the events {ω ∈ Ω | X(ω) ≤ x} and {ω ∈ Ω | Y(ω) ≤ y} are independent for all x, y ∈ R.
Let X, Y be two continuous random variables. We can view them as a random vector, i.e. a
random variable from Ω to R². We can then define the joint distribution function
\[ F(x, y) = P(X \leq x, Y \leq y). \]
The mixed derivative of the distribution function, f_{X,Y}(x, y) := \frac{\partial^2 F}{\partial x \partial y}(x, y), if it exists, is called the joint PDF of the random vector (X, Y):
\[ F_{X,Y}(x, y) = \int_{-\infty}^{x}\int_{-\infty}^{y} f_{X,Y}(x, y)\, dx\, dy. \]
If the random variables X and Y are independent, then
\[ F_{X,Y}(x, y) = F_X(x) F_Y(y) \quad \text{and} \quad f_{X,Y}(x, y) = f_X(x) f_Y(y). \]
The joint distribution function has the properties
\[ F_{X,Y}(x, y) = F_{Y,X}(y, x), \qquad F_{X,Y}(+\infty, y) = F_Y(y), \qquad f_Y(y) = \int_{-\infty}^{+\infty} f_{X,Y}(x, y)\, dx. \]
We can extend the above definition to random vectors of arbitrary finite dimensions. Let X be
a random variable from (Ω, F, μ) to (R^d, B(R^d)). The (joint) distribution function F_X : R^d → [0, 1] is defined as
\[ F_X(x) = P(X \leq x). \]
Let X be a random variable in R^d with distribution function f(x_N) where x_N = (x_1, ..., x_N). We
define the marginal or reduced distribution function f_{N-1}(x_{N-1}) by
\[ f_{N-1}(x_{N-1}) = \int_{\mathbb{R}} f_N(x_N)\, dx_N. \]
We can define other reduced distribution functions:
\[ f_{N-2}(x_{N-2}) = \int_{\mathbb{R}} f_{N-1}(x_{N-1})\, dx_{N-1} = \int_{\mathbb{R}}\int_{\mathbb{R}} f(x_N)\, dx_{N-1}\, dx_N. \]
2.3.1 Expectation of Random Variables
We can use the distribution of a random variable to compute expectations and probabilities:
\[ E[f(X)] = \int_{\mathbb{R}} f(x)\, dF_X(x) \qquad (2.4) \]
and
\[ P[X \in G] = \int_{G} dF_X(x), \quad G \in B(E). \qquad (2.5) \]
The above formulas apply to both discrete and continuous random variables, provided that we
define the integrals in (2.4) and (2.5) appropriately.
When E = R^d and a PDF exists, dF_X(x) = f_X(x) dx, we have
\[ F_X(x) := P(X \leq x) = \int_{-\infty}^{x_1}\dots\int_{-\infty}^{x_d} f_X(x)\, dx. \]
When E = R^d then by L^p(Ω; R^d), or sometimes L^p(Ω; μ) or even simply L^p(μ), we mean the
Banach space of measurable functions on Ω with norm
\[ \|X\|_{L^p} = \big( E|X|^p \big)^{1/p}. \]
Let X be a nonnegative integer valued random variable with probability mass function p_k. We
can compute the expectation of an arbitrary function of X using the formula
\[ E(f(X)) = \sum_{k=0}^{\infty} f(k)\, p_k. \]
Let X, Y be random variables. We want to know whether they are correlated and, if they are, to
calculate how correlated they are. We define the covariance of the two random variables as
\[ \operatorname{cov}(X, Y) = E\big[(X - EX)(Y - EY)\big] = E(XY) - EX\, EY. \]
The correlation coefficient is
\[ \rho(X, Y) = \frac{\operatorname{cov}(X, Y)}{\sqrt{\operatorname{var}(X)}\,\sqrt{\operatorname{var}(Y)}}. \qquad (2.6) \]
The Cauchy-Schwarz inequality yields that ρ(X, Y) ∈ [−1, 1]. We will say that two random
variables X and Y are uncorrelated provided that ρ(X, Y) = 0. It is not true in general that
two uncorrelated random variables are independent. This is true, however, for Gaussian random
variables (see Exercise 5).
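A quick numerical illustration of "uncorrelated does not imply independent" (our own example, not from the notes): if X is standard Gaussian and Y = X², then cov(X, Y) = E(X³) = 0, yet Y is a deterministic function of X.

```python
# Uncorrelated but not independent: X ~ N(0,1), Y = X^2.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal(1_000_000)
Y = X**2

cov = np.mean((X - X.mean()) * (Y - Y.mean()))
rho = cov / (X.std() * Y.std())
print(rho)   # close to 0, even though Y is completely determined by X
```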
Example 2.3.10. Consider the random variable X : Ω → R with pdf
\[ \gamma_{\sigma,b}(x) := (2\pi\sigma)^{-\frac{1}{2}} \exp\left( -\frac{(x-b)^2}{2\sigma} \right). \]
Such an X is termed a Gaussian or normal random variable. The mean is
\[ EX = \int_{\mathbb{R}} x\, \gamma_{\sigma,b}(x)\, dx = b \]
and the variance is
\[ E(X - b)^2 = \int_{\mathbb{R}} (x - b)^2\, \gamma_{\sigma,b}(x)\, dx = \sigma. \]
Let b ∈ R^d and Σ ∈ R^{d×d} be symmetric and positive definite. The random variable X : Ω → R^d with pdf
\[ \gamma_{\Sigma,b}(x) := \big( (2\pi)^d \det\Sigma \big)^{-\frac{1}{2}} \exp\left( -\frac{1}{2}\langle \Sigma^{-1}(x - b), (x - b)\rangle \right) \]
is termed a multivariate Gaussian or normal random variable. The mean is
\[ E(X) = b \qquad (2.7) \]
and the covariance matrix is
\[ E\big( (X - b) \otimes (X - b) \big) = \Sigma. \qquad (2.8) \]
Since the mean and variance specify completely a Gaussian random variable on R, the Gaussian
is commonly denoted by N(m, σ). The standard normal random variable is N(0, 1). Similarly,
since the mean and covariance matrix completely specify a Gaussian random variable on R^d, the
Gaussian is commonly denoted by N(m, Σ).
Some analytical calculations for Gaussian random variables will be presented in Section 2.6.
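A minimal sketch of sampling from N(b, Σ) (illustrative code, not from the notes): one transforms standard Gaussians through X = CY + b with CC^T = Σ. Here we use the Cholesky factor as C; Section 2.6 instead works with the spectral decomposition of Σ, but any factor with CC^T = Σ gives the right distribution.

```python
# Sample from a multivariate Gaussian N(b, Sigma) via X = C Y + b, C C^T = Sigma.
import numpy as np

rng = np.random.default_rng(4)
b = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])       # symmetric, positive definite (illustrative values)

C = np.linalg.cholesky(Sigma)        # lower-triangular factor, C C^T = Sigma
Y = rng.standard_normal((2, 100_000))
X = C @ Y + b[:, None]

print(X.mean(axis=1))                # approximately b, cf. (2.7)
print(np.cov(X))                     # approximately Sigma, cf. (2.8)
```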
2.4 Conditional Expectation
Assume that X ∈ L¹(Ω, F, μ) and let G be a sub-σ-algebra of F. The conditional expectation
of X with respect to G is defined to be the function (random variable) E[X|G] : Ω → E which is
G-measurable and satisfies
\[ \int_{G} E[X|\mathcal{G}]\, d\mu = \int_{G} X\, d\mu \quad \forall\, G \in \mathcal{G}. \]
We can define E[f(X)|G] and the conditional probability P[X ∈ F|G] = E[I_F(X)|G], where I_F is
the indicator function of F, in a similar manner.
We list some of the most important properties of conditional expectation.
Theorem 2.4.1. [Properties of Conditional Expectation]. Let (Ω, F, μ) be a probability space and
let G be a sub-σ-algebra of F.
(a) If X is G-measurable and integrable then E(X|G) = X.
(b) (Linearity) If X_1, X_2 are integrable and c_1, c_2 constants, then
\[ E(c_1 X_1 + c_2 X_2 \,|\, \mathcal{G}) = c_1 E(X_1|\mathcal{G}) + c_2 E(X_2|\mathcal{G}). \]
(c) (Order) If X_1, X_2 are integrable and X_1 ≤ X_2 a.s., then E(X_1|G) ≤ E(X_2|G) a.s.
(d) If Y and XY are integrable, and X is G-measurable, then E(XY|G) = X E(Y|G).
(e) (Successive smoothing) If D is a sub-σ-algebra of F, D ⊂ G, and X is integrable, then
\[ E(X|\mathcal{D}) = E[E(X|\mathcal{G})|\mathcal{D}] = E[E(X|\mathcal{D})|\mathcal{G}]. \]
(f) (Convergence) Let {X_n}_{n=1}^{∞} be a sequence of random variables such that, for all n, |X_n| ≤ Z
where Z is integrable. If X_n → X a.s., then E(X_n|G) → E(X|G) a.s. and in L¹.
Proof. See Exercise 10.
2.5 The Characteristic Function
Many of the properties of (sums of) random variables can be studied using the Fourier transform
of the distribution function. Let F(λ) be the distribution function of a (discrete or continuous)
random variable X. The characteristic function of X is defined to be the Fourier transform of
the distribution function
\[ \phi(t) = \int_{\mathbb{R}} e^{it\lambda}\, dF(\lambda) = E(e^{itX}). \qquad (2.9) \]
For a continuous random variable for which the distribution function F has a density, dF(λ) = p(λ)dλ, (2.9) gives
\[ \phi(t) = \int_{\mathbb{R}} e^{it\lambda} p(\lambda)\, d\lambda. \]
For a discrete random variable for which P(X = λ_k) = a_k, (2.9) gives
\[ \phi(t) = \sum_{k=0}^{\infty} e^{it\lambda_k} a_k. \]
From the properties of the Fourier transform we conclude that the characteristic function determines
uniquely the distribution function of the random variable, in the sense that there is a one-to-one
correspondence between F(λ) and φ(t). Furthermore, in the exercises at the end of the chapter
the reader is asked to prove the following two results.
Lemma 2.5.1. Let X_1, X_2, ..., X_n be independent random variables with characteristic functions
φ_j(t), j = 1, ..., n, and let Y = Σ_{j=1}^{n} X_j with characteristic function φ_Y(t). Then
\[ \phi_Y(t) = \prod_{j=1}^{n} \phi_j(t). \]
Lemma 2.5.2. Let X be a random variable with characteristic function φ(t) and assume that it
has finite moments. Then
\[ E(X^k) = \frac{1}{i^k}\, \phi^{(k)}(0). \]
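A quick numerical illustration of Lemma 2.5.2 (our own sketch, not from the notes): for a standard Gaussian, Theorem 2.6.1 below gives φ(t) = e^{-t²/2}, so φ''(0) = -1 and hence E(X²) = φ''(0)/i² = 1.

```python
# Check E(X^k) = phi^{(k)}(0) / i^k for k = 2 and X ~ N(0, 1),
# whose characteristic function is phi(t) = exp(-t^2 / 2).
import numpy as np

def phi(t):
    return np.exp(-t**2 / 2.0)

h = 1e-3
# Second-order central difference approximation of phi''(0).
phi2 = (phi(h) - 2.0 * phi(0.0) + phi(-h)) / h**2

EX2 = phi2 / (1j**2)          # Lemma 2.5.2 with k = 2
print(EX2.real)               # approximately 1 = E(X^2) for N(0, 1)
```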
2.6 Gaussian Random Variables
In this section we present some useful calculations for Gaussian random variables. In particular,
we calculate the normalization constant, the mean and variance and the characteristic function of
multidimensional Gaussian random variables.
Theorem 2.6.1. Let b ∈ R^d and Σ ∈ R^{d×d} a symmetric and positive definite matrix. Let X be the
multivariate Gaussian random variable with probability density function
\[ \gamma_{\Sigma,b}(x) = \frac{1}{Z}\exp\left( -\frac{1}{2}\langle \Sigma^{-1}(x-b), x-b\rangle \right). \]
Then
i. The normalization constant is
\[ Z = (2\pi)^{d/2}\sqrt{\det(\Sigma)}. \]
ii. The mean vector and covariance matrix of X are given by
\[ EX = b \]
and
\[ E\big( (X - EX)\otimes(X - EX) \big) = \Sigma. \]
iii. The characteristic function of X is
\[ \phi(t) = e^{i\langle b, t\rangle - \frac{1}{2}\langle t, \Sigma t\rangle}. \]
Proof. i. From the spectral theorem for symmetric positive definite matrices we have that there
exists a diagonal matrix Λ with positive entries and an orthogonal matrix B such that
\[ \Sigma^{-1} = B^T \Lambda^{-1} B. \]
Let z = x − b and y = Bz. We have
\[ \langle \Sigma^{-1} z, z\rangle = \langle B^T \Lambda^{-1} B z, z\rangle = \langle \Lambda^{-1} Bz, Bz\rangle = \langle \Lambda^{-1} y, y\rangle = \sum_{i=1}^{d} \lambda_i^{-1} y_i^2. \]
Furthermore, we have that det(Σ^{-1}) = Π_{i=1}^{d} λ_i^{-1}, that det(Σ) = Π_{i=1}^{d} λ_i and that the Jacobian
of an orthogonal transformation is J = det(B) = 1. Hence,
\[ \int_{\mathbb{R}^d}\exp\left(-\frac{1}{2}\langle\Sigma^{-1}(x-b), x-b\rangle\right) dx
 = \int_{\mathbb{R}^d}\exp\left(-\frac{1}{2}\langle\Sigma^{-1}z, z\rangle\right) dz
 = \int_{\mathbb{R}^d}\exp\left(-\frac{1}{2}\sum_{i=1}^{d}\lambda_i^{-1}y_i^2\right)|J|\, dy \]
\[ = \prod_{i=1}^{d}\int_{\mathbb{R}}\exp\left(-\frac{1}{2}\lambda_i^{-1}y_i^2\right) dy_i
 = (2\pi)^{d/2}\prod_{i=1}^{d}\lambda_i^{1/2} = (2\pi)^{d/2}\sqrt{\det(\Sigma)}, \]
from which we get that
\[ Z = (2\pi)^{d/2}\sqrt{\det(\Sigma)}. \]
In the above calculation we have used the elementary calculus identity
\[ \int_{\mathbb{R}} e^{-\frac{\alpha x^2}{2}}\, dx = \sqrt{\frac{2\pi}{\alpha}}. \]
ii. From the above calculation we have that
\[ \gamma_{\Sigma,b}(x)\, dx = \gamma_{\Sigma,b}(B^T y + b)\, dy = \frac{1}{(2\pi)^{d/2}\sqrt{\det(\Sigma)}}\prod_{i=1}^{d}\exp\left(-\frac{1}{2\lambda_i}y_i^2\right) dy_i. \]
Consequently
\[ EX = \int_{\mathbb{R}^d} x\, \gamma_{\Sigma,b}(x)\, dx = \int_{\mathbb{R}^d}(B^T y + b)\,\gamma_{\Sigma,b}(B^T y + b)\, dy = b\int_{\mathbb{R}^d}\gamma_{\Sigma,b}(B^T y + b)\, dy = b. \]
We note that, since Σ^{-1} = B^T Λ^{-1} B, we have that Σ = B^T Λ B. Furthermore, z = B^T y. We calculate
\[ E\big((X_i - b_i)(X_j - b_j)\big) = \int_{\mathbb{R}^d} z_i z_j\, \gamma_{\Sigma,b}(z + b)\, dz
 = \frac{1}{(2\pi)^{d/2}\sqrt{\det(\Sigma)}}\int_{\mathbb{R}^d}\sum_{k} B_{ki} y_k \sum_{m} B_{mj} y_m \exp\left(-\frac{1}{2}\sum_{\ell}\lambda_\ell^{-1} y_\ell^2\right) dy \]
\[ = \frac{1}{(2\pi)^{d/2}\sqrt{\det(\Sigma)}}\sum_{k,m} B_{ki}B_{mj}\int_{\mathbb{R}^d} y_k y_m \exp\left(-\frac{1}{2}\sum_{\ell}\lambda_\ell^{-1} y_\ell^2\right) dy
 = \sum_{k,m} B_{ki}B_{mj}\,\lambda_k\,\delta_{km} = \Sigma_{ij}. \]
iii. Let Y be a multivariate Gaussian random variable with mean 0 and covariance I. Let also
C = B^T Λ^{1/2}, so that Σ = C C^T. We have that
\[ X = CY + b. \]
To see this, we first note that X is Gaussian since it is given through a linear transformation
of a Gaussian random variable. Furthermore,
\[ EX = b \quad \text{and} \quad E\big((X_i - b_i)(X_j - b_j)\big) = \Sigma_{ij}. \]
Now we have:
\[ \phi(t) = E e^{i\langle X, t\rangle} = e^{i\langle b, t\rangle}E e^{i\langle CY, t\rangle} = e^{i\langle b, t\rangle}E e^{i\langle Y, C^T t\rangle}
 = e^{i\langle b, t\rangle}E e^{i\sum_j\left(\sum_k C_{kj}t_k\right)y_j} \]
\[ = e^{i\langle b, t\rangle}e^{-\frac{1}{2}\sum_j\left|\sum_k C_{kj}t_k\right|^2}
 = e^{i\langle b, t\rangle}e^{-\frac{1}{2}\langle C^T t, C^T t\rangle}
 = e^{i\langle b, t\rangle}e^{-\frac{1}{2}\langle t, C C^T t\rangle}
 = e^{i\langle b, t\rangle}e^{-\frac{1}{2}\langle t, \Sigma t\rangle}. \]
Consequently,
\[ \phi(t) = e^{i\langle b, t\rangle - \frac{1}{2}\langle t, \Sigma t\rangle}. \]
2.7 Types of Convergence and Limit Theorems
One of the most important aspects of the theory of random variables is the study of limit theo-
rems for sums of random variables. The most well known limit theorems in probability theory
are the law of large numbers and the central limit theorem. There are various different types of
convergence for sequences of random variables. We list the most important types of convergence
below.
Definition 2.7.1. Let {Z_n}_{n=1}^{∞} be a sequence of random variables. We will say that
(a) Z_n converges to Z with probability one if
\[ P\Big( \lim_{n\to+\infty} Z_n = Z \Big) = 1. \]
(b) Z_n converges to Z in probability if for every ε > 0
\[ \lim_{n\to+\infty} P\big( |Z_n - Z| > \varepsilon \big) = 0. \]
(c) Z_n converges to Z in L^p if
\[ \lim_{n\to+\infty} E\big[ |Z_n - Z|^p \big] = 0. \]
(d) Let F_n(λ), n = 1, ..., +∞, and F(λ) be the distribution functions of Z_n, n = 1, ..., +∞, and Z,
respectively. Then Z_n converges to Z in distribution if
\[ \lim_{n\to+\infty} F_n(\lambda) = F(\lambda) \]
for all λ ∈ R at which F is continuous.
Recall that the distribution function F_X of a random variable from a probability space (Ω, F, P)
to R induces a probability measure on R and that (R, B(R), F_X) is a probability space. We can
show that convergence in distribution is equivalent to the weak convergence of the probability
measures induced by the distribution functions.
Definition 2.7.2. Let (E, d) be a metric space, B(E) the σ-algebra of its Borel sets, P_n a sequence
of probability measures on (E, B(E)) and let C_b(E) denote the space of bounded continuous
functions on E. We will say that the sequence P_n converges weakly to the probability measure
P if, for each f ∈ C_b(E),
\[ \lim_{n\to+\infty}\int_{E} f(x)\, dP_n(x) = \int_{E} f(x)\, dP(x). \]
Theorem 2.7.3. Let F_n(λ), n = 1, ..., +∞, and F(λ) be the distribution functions of Z_n, n = 1, ..., +∞,
and Z, respectively. Then Z_n converges to Z in distribution if and only if, for all g ∈ C_b(R),
\[ \lim_{n\to+\infty}\int_{\mathbb{R}} g(x)\, dF_n(x) = \int_{\mathbb{R}} g(x)\, dF(x). \qquad (2.10) \]
Notice that (2.10) is equivalent to
\[ \lim_{n\to+\infty} E_n\, g(X_n) = E\, g(X), \]
where E_n and E denote the expectations with respect to F_n and F, respectively.
When the sequence of random variables whose convergence we are interested in takes values
in R^d or, more generally, in a metric space (E, d), then we can use weak convergence of the sequence
of probability measures induced by the sequence of random variables to define convergence
in distribution.
Definition 2.7.4. A sequence of real valued random variables X_n defined on probability spaces
(Ω_n, F_n, P_n) and taking values on a metric space (E, d) is said to converge in distribution if the
induced measures F_n(B) = P_n(X_n ∈ B) for B ∈ B(E) converge weakly to a probability measure P.
Let {X_n}_{n=1}^{∞} be iid random variables with EX_n = V. Then, the strong law of large numbers
states that the average of the sum of the iid random variables converges to V with probability one:
\[ P\Big( \lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N} X_n = V \Big) = 1. \]
The strong law of large numbers provides us with information about the behavior of a sum of
random variables (or, a large number of repetitions of the same experiment) on average. We can
also study fluctuations around the average behavior. Indeed, let E(X_n − V)² = σ². Define the
centered iid random variables Y_n = X_n − V. Then, the sequence of random variables \frac{1}{\sigma\sqrt{N}}\sum_{n=1}^{N} Y_n
converges in distribution to a N(0, 1) random variable:
\[ \lim_{N\to+\infty} P\Big( \frac{1}{\sigma\sqrt{N}}\sum_{n=1}^{N} Y_n \leq a \Big) = \int_{-\infty}^{a}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\, dx. \]
This is the central limit theorem.
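A minimal numerical illustration of both theorems (our own sketch, not from the notes; cf. Exercise 8): we use iid uniform random variables on [0, 1], for which V = 1/2 and σ² = 1/12.

```python
# Law of large numbers and central limit theorem for iid Uniform[0,1] variables.
import numpy as np

rng = np.random.default_rng(5)
N, n_experiments = 10_000, 5_000
X = rng.uniform(size=(n_experiments, N))

# LLN: each sample average is close to V = 1/2.
print(X.mean(axis=1)[:5])

# CLT: the centered, rescaled sums are approximately N(0, 1).
V, sigma = 0.5, np.sqrt(1.0 / 12.0)
Z = (X - V).sum(axis=1) / (sigma * np.sqrt(N))
print(Z.mean(), Z.std())     # approximately 0 and 1
```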
2.8 Discussion and Bibliography
The material of this chapter is very standard and can be found in many books on probability theory.
Well known textbooks on probability theory are [8, 23, 24, 56, 57, 48, 90].
The connection between conditional expectation and orthogonal projections is discussed in [13].
The reduced distribution functions defined in Section 2.3 are used extensively in statistical
mechanics. A different normalization is usually used in physics textbooks. See for instance [2,
Sec. 4.2].
The calculations presented in Section 2.6 are essentially an exercise in linear algebra. See [53,
Sec. 10.2].
Random variables and probability measures can also be defined in infinite dimensions. More
information can be found in [75, Ch. 2].
The study of limit theorems is one of the cornerstones of probability theory and of the theory
of stochastic processes. A comprehensive study of limit theorems can be found in [43].
2.9 Exercises
1. Show that the intersection of a family of σ-algebras is a σ-algebra.
2. Prove the law of total probability, Proposition 2.2.13.
3. Calculate the mean, variance and characteristic function of the following probability density
functions.
(a) The exponential distribution with density
\[ f(x) = \begin{cases} \lambda e^{-\lambda x}, & x > 0, \\ 0, & x < 0, \end{cases} \]
with λ > 0.
(b) The uniform distribution with density
\[ f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b, \\ 0, & x \notin (a, b), \end{cases} \]
with a < b.
(c) The Gamma distribution with density
\[ f(x) = \begin{cases} \frac{\lambda}{\Gamma(\alpha)} (\lambda x)^{\alpha-1} e^{-\lambda x}, & x > 0, \\ 0, & x < 0, \end{cases} \]
with λ > 0, α > 0, where Γ(α) is the Gamma function
\[ \Gamma(\alpha) = \int_0^{\infty} \xi^{\alpha-1} e^{-\xi}\, d\xi, \quad \alpha > 0. \]
4. Let X and Y be independent random variables with distribution functions F_X and F_Y. Show
that the distribution function of the sum Z = X + Y is the convolution of F_X and F_Y:
\[ F_Z(x) = \int F_X(x - y)\, dF_Y(y). \]
5. Let X and Y be Gaussian random variables. Show that they are uncorrelated if and only if they
are independent.
6. (a) Let X be a continuous random variable with characteristic function φ(t). Show that
\[ EX^k = \frac{1}{i^k}\,\phi^{(k)}(0), \]
where φ^{(k)}(t) denotes the k-th derivative of φ evaluated at t.
(b) Let X be a nonnegative random variable with distribution function F(x). Show that
\[ E(X) = \int_0^{+\infty}\big(1 - F(x)\big)\, dx. \]
(c) Let X be a continuous random variable with probability density function f(x) and characteristic
function φ(t). Find the probability density and characteristic function of the
random variable Y = aX + b with a, b ∈ R.
(d) Let X be a random variable with uniform distribution on [0, 2π]. Find the probability
density of the random variable Y = sin(X).
7. Let X be a discrete random variable taking values on the set of nonnegative integers with probability
mass function p_k = P(X = k) with p_k ≥ 0, Σ_{k=0}^{+∞} p_k = 1. The generating function is
defined as
\[ g(s) = E(s^X) = \sum_{k=0}^{+\infty} p_k s^k. \]
(a) Show that
\[ EX = g'(1) \quad \text{and} \quad EX^2 = g''(1) + g'(1), \]
where the prime denotes differentiation.
(b) Calculate the generating function of the Poisson random variable with
\[ p_k = P(X = k) = \frac{e^{-\lambda}\lambda^k}{k!}, \quad k = 0, 1, 2, \dots \ \text{and} \ \lambda > 0. \]
(c) Prove that the generating function of a sum of independent nonnegative integer valued
random variables is the product of their generating functions.
8. Write a computer program for studying the law of large numbers and the central limit theorem.
Investigate numerically the rate of convergence of these two theorems.
9. Study the properties of Gaussian measures on separable Hilbert spaces from [75, Ch. 2].
10. Prove Theorem 2.4.1.
Chapter 3
Basics of the Theory of Stochastic Processes
3.1 Introduction
In this chapter we present some basic results from the theory of stochastic processes and we investigate
the properties of some of the standard stochastic processes in continuous time. In Section 3.2
we give the definition of a stochastic process. In Section 3.3 we present some properties of stationary
stochastic processes. In Section 3.4 we introduce Brownian motion and study some of
its properties. Various examples of stochastic processes in continuous time are presented in Section
3.5. The Karhunen-Loève expansion, one of the most useful tools for representing stochastic
processes and random fields, is presented in Section 3.6. Further discussion and bibliographical
comments are presented in Section 3.7. Section 3.8 contains exercises.
3.2 Definition of a Stochastic Process
Stochastic processes describe dynamical systems whose evolution law is of probabilistic nature.
The precise definition is given below.
Definition 3.2.1. Let T be an ordered set, (Ω, F, P) a probability space and (E, G) a measurable
space. A stochastic process is a collection of random variables X = {X_t; t ∈ T} where, for each
fixed t ∈ T, X_t is a random variable from (Ω, F, P) to (E, G). Ω is called the sample space and
E is the state space of the stochastic process X_t.
The set T can be either discrete, for example the set of positive integers Z_+, or continuous,
T = [0, +∞). The state space E will usually be R^d equipped with the σ-algebra of Borel sets.
A stochastic process X may be viewed as a function of both t ∈ T and ω ∈ Ω. We will
sometimes write X(t), X(t, ω) or X_t(ω) instead of X_t. For a fixed sample point ω ∈ Ω, the
function X_t(ω) : T → E is called a sample path (realization, trajectory) of the process X.
Definition 3.2.2. The finite dimensional distributions (fdd) of a stochastic process are the distributions
of the E^k-valued random variables (X(t_1), X(t_2), ..., X(t_k)) for arbitrary positive
integer k and arbitrary times t_i ∈ T, i ∈ {1, ..., k}:
\[ F(x) = P(X(t_i) \leq x_i,\ i = 1, \dots, k) \]
with x = (x_1, ..., x_k).
From experiments or numerical simulations we can only obtain information about the finite
dimensional distributions of a process. A natural question arises: are the finite dimensional distributions
of a stochastic process sufficient to determine a stochastic process uniquely? This is true
for processes with continuous paths¹. This is the class of stochastic processes that we will study
in these notes.
Definition 3.2.3. We will say that two processes X_t and Y_t are equivalent if they have the same finite
dimensional distributions.
Definition 3.2.4. A one dimensional Gaussian process is a continuous time stochastic process for
which E = R and all the finite dimensional distributions are Gaussian, i.e. every finite dimensional
vector (X_{t_1}, X_{t_2}, ..., X_{t_k}) is a N(μ_k, K_k) random variable for some vector μ_k and a symmetric
nonnegative definite matrix K_k, for all k = 1, 2, ... and for all t_1, t_2, ..., t_k.
From the above definition we conclude that the finite dimensional distributions of a Gaussian
continuous time stochastic process are Gaussian with PDF
\[ \gamma_{\mu_k, K_k}(x) = (2\pi)^{-n/2}(\det K_k)^{-1/2}\exp\left( -\frac{1}{2}\langle K_k^{-1}(x - \mu_k), x - \mu_k\rangle \right), \]
where x = (x_1, x_2, ..., x_k).
It is straightforward to extend the above definition to arbitrary dimensions. A Gaussian process
x(t) is characterized by its mean
\[ m(t) := E\, x(t) \]
and the covariance (or autocorrelation) matrix
\[ C(t, s) = E\Big( \big(x(t) - m(t)\big) \otimes \big(x(s) - m(s)\big) \Big). \]
Thus, the first two moments of a Gaussian process are sufficient for a complete characterization of
the process.
¹In fact, what we need is the stochastic process to be separable. See the discussion in Section 3.7.
3.3 Stationary Processes
3.3.1 Strictly Stationary Processes
In many stochastic processes that appear in applications their statistics remain invariant under time
translations. Such stochastic processes are called stationary. It is possible to develop a quite
general theory for stochastic processes that enjoy this symmetry property.
Definition 3.3.1. A stochastic process is called (strictly) stationary if all finite dimensional distributions
are invariant under time translation: for any integer k and times t_i ∈ T, the distribution
of (X(t_1), X(t_2), ..., X(t_k)) is equal to that of (X(s + t_1), X(s + t_2), ..., X(s + t_k)) for any s
such that s + t_i ∈ T for all i ∈ {1, ..., k}. In other words,
\[ P(X_{t_1+t} \in A_1, X_{t_2+t} \in A_2, \dots, X_{t_k+t} \in A_k) = P(X_{t_1} \in A_1, X_{t_2} \in A_2, \dots, X_{t_k} \in A_k), \quad \forall t \in T. \]
Example 3.3.2. Let Y_0, Y_1, ... be a sequence of independent, identically distributed random variables
and consider the stochastic process X_n = Y_n. Then X_n is a strictly stationary process (see
Exercise 1). Assume furthermore that EY_0 = μ < +∞. Then, by the strong law of large numbers,
we have that
\[ \frac{1}{N}\sum_{j=0}^{N-1} X_j = \frac{1}{N}\sum_{j=0}^{N-1} Y_j \to E Y_0 = \mu, \]
almost surely. In fact, Birkhoff's ergodic theorem states that, for any function f such that
Ef(Y_0) < +∞, we have that
\[ \lim_{N\to+\infty} \frac{1}{N}\sum_{j=0}^{N-1} f(X_j) = E f(Y_0), \qquad (3.1) \]
almost surely. The sequence of iid random variables is an example of an ergodic strictly stationary
process.
Ergodic strictly stationary processes satisfy (3.1). Hence, we can calculate the statistics of the
stochastic process X_n using a single sample path, provided that it is long enough (N ≫ 1).
Example 3.3.3. Let Z be a random variable and define the stochastic process X_n = Z, n = 0, 1, 2, ....
Then X_n is a strictly stationary process (see Exercise 2). We can calculate the long
time average of this stochastic process:
\[ \frac{1}{N}\sum_{j=0}^{N-1} X_j = \frac{1}{N}\sum_{j=0}^{N-1} Z = Z, \]
which is independent of N and does not converge to the mean of the stochastic process EX_n = EZ
(assuming that it is finite), or to any other deterministic number. This is an example of a non-ergodic
process.
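A minimal numerical sketch contrasting the two examples (our own code, not from the notes): for X_n = Y_n the time average converges to E(Y_0), while for X_n = Z it stays stuck at the single random value Z.

```python
# Time averages of the ergodic example (iid sequence) and the non-ergodic example (constant sequence).
import numpy as np

rng = np.random.default_rng(6)
N = 100_000

Y = rng.standard_normal(N)        # Example 3.3.2 with E(Y_0) = 0
print(Y.mean())                   # close to 0: ergodic

Z = rng.standard_normal()         # Example 3.3.3: one draw reused at every time
X = np.full(N, Z)
print(X.mean(), Z)                # equal to Z, not to E(Z) = 0: non-ergodic
```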
3.3.2 Second Order Stationary Processes
Let (Ω, F, P) be a probability space. Let X_t, t ∈ T (with T = R or Z) be a real-valued random
process on this probability space with finite second moment, E|X_t|² < +∞ (i.e. X_t ∈ L²(Ω, P)
for all t ∈ T). Assume that it is strictly stationary. Then,
\[ E(X_{t+s}) = E X_t, \quad s \in T, \qquad (3.2) \]
from which we conclude that EX_t is constant, and
\[ E\big((X_{t_1+s} - \mu)(X_{t_2+s} - \mu)\big) = E\big((X_{t_1} - \mu)(X_{t_2} - \mu)\big), \quad s \in T, \qquad (3.3) \]
from which we conclude that the covariance or autocorrelation or correlation function C(t, s) =
E((X_t − μ)(X_s − μ)) depends on the difference between the two times, t and s, i.e. C(t, s) = C(t − s).
This motivates the following definition.
Definition 3.3.4. A stochastic process X_t ∈ L² is called second-order stationary or wide-sense
stationary or weakly stationary if the first moment EX_t is a constant and the covariance function
E((X_t − μ)(X_s − μ)) depends only on the difference t − s:
\[ E X_t = \mu, \qquad E\big((X_t - \mu)(X_s - \mu)\big) = C(t - s). \]
The constant μ is the expectation of the process X_t. Without loss of generality, we can set
μ = 0, since if EX_t = μ then the process Y_t = X_t − μ is mean zero. A mean zero process
will be called a centered process. The function C(t) is the covariance (sometimes also called
autocovariance) or the autocorrelation function of X_t. Notice that C(t) = E(X_t X_0), whereas
C(0) = E(X_t²), which is finite, by assumption. Since we have assumed that X_t is a real valued
process, we have that C(t) = C(−t), t ∈ R.
Remark 3.3.5. Let X_t be a strictly stationary stochastic process with finite second moment (i.e.
X_t ∈ L²). The definition of strict stationarity implies that EX_t = μ, a constant, and E((X_t −
μ)(X_s − μ)) = C(t − s). Hence, a strictly stationary process with finite second moment is also
stationary in the wide sense. The converse is not true.
Example 3.3.6.
Let Y_0, Y_1, ... be a sequence of independent, identically distributed random variables and consider
the stochastic process X_n = Y_n. From Example 3.3.2 we know that this is a strictly stationary
process, irrespective of whether Y_0 is such that EY_0² < +∞. Assume now that EY_0 = 0 and
EY_0² = σ² < +∞. Then X_n is a second order stationary process with mean zero and correlation
function R(k) = σ²δ_{k0}. Notice that in this case we have no correlation between the values of the
stochastic process at different times n and k.
Example 3.3.7. Let Z be a single random variable and consider the stochastic process X_n =
Z, n = 0, 1, 2, .... From Example 3.3.3 we know that this is a strictly stationary process irrespective
of whether E|Z|² < +∞ or not. Assume now that EZ = 0, EZ² = σ². Then X_n becomes
a second order stationary process with R(k) = σ². Notice that in this case the values of our
stochastic process at different times are strongly correlated.
We will see in Section 3.3.3 that for second order stationary processes, ergodicity is related to
fast decay of correlations. In the first of the examples above, there was no correlation between our
stochastic processes at different times and the stochastic process is ergodic. On the contrary, in our
second example there is very strong correlation between the stochastic process at different times
and this process is not ergodic.
Remark 3.3.8. The first two moments of a Gaussian process are sufficient for a complete characterization
of the process. Consequently, a Gaussian stochastic process is strictly stationary if and
only if it is weakly stationary.
Continuity properties of the covariance function are equivalent to continuity properties of the
paths of X_t in the L² sense, i.e.
\[ \lim_{h\to 0} E|X_{t+h} - X_t|^2 = 0. \]
Lemma 3.3.9. Assume that the covariance function C(t) of a second order stationary process is
continuous at t = 0. Then it is continuous for all t ∈ R. Furthermore, the continuity of C(t) is
equivalent to the continuity of the process X_t in the L²-sense.
Proof. Fix t ∈ R and (without loss of generality) set EX_t = 0. We calculate:
\[ |C(t+h) - C(t)|^2 = |E(X_{t+h}X_0) - E(X_t X_0)|^2 = |E\big((X_{t+h} - X_t)X_0\big)|^2
 \leq E(X_0)^2\, E(X_{t+h} - X_t)^2 \]
\[ = C(0)\big(E X_{t+h}^2 + E X_t^2 - 2 E X_t X_{t+h}\big) = 2C(0)\big(C(0) - C(h)\big) \to 0, \]
as h → 0. Thus, continuity of C(·) at 0 implies continuity for all t.
Assume now that C(t) is continuous. From the above calculation we have
\[ E|X_{t+h} - X_t|^2 = 2\big(C(0) - C(h)\big), \qquad (3.4) \]
which converges to 0 as h → 0. Conversely, assume that X_t is L²-continuous. Then, from the
above equation we get lim_{h→0} C(h) = C(0).
Notice that from (3.4) we immediately conclude that C(0) ≥ C(h), h ∈ R.
The Fourier transform of the covariance function of a second order stationary process always
exists. This enables us to study second order stationary processes using tools from Fourier analysis.
To make the link between second order stationary processes and Fourier analysis we will use
Bochner's theorem, which applies to all nonnegative definite functions.
Definition 3.3.10. A function f(x) : R → R is called nonnegative definite if
\[ \sum_{i,j=1}^{n} f(t_i - t_j)\, c_i \bar{c}_j \geq 0 \qquad (3.5) \]
for all n ∈ N, t_1, ..., t_n ∈ R, c_1, ..., c_n ∈ C.
Lemma 3.3.11. The covariance function of a second order stationary process is a nonnegative definite function.

Proof. We will use the notation $X_t^c := \sum_{i=1}^{n} X_{t_i} c_i$. We have
\[ \sum_{i,j=1}^{n} C(t_i - t_j) c_i \bar{c}_j = \sum_{i,j=1}^{n} E\big(X_{t_i} X_{t_j}\big) c_i \bar{c}_j = E\Big( \sum_{i=1}^{n} X_{t_i} c_i \sum_{j=1}^{n} X_{t_j} \bar{c}_j \Big) = E\big( X_t^c \, \overline{X_t^c} \big) = E|X_t^c|^2 \geq 0. \]
Theorem 3.3.12 (Bochner). Let $C(t)$ be a continuous positive definite function. Then there exists a unique nonnegative measure $\rho$ on $\mathbb{R}$ such that $\rho(\mathbb{R}) = C(0)$ and
\[ C(t) = \int_{\mathbb{R}} e^{ixt} \rho(dx) \qquad \forall t \in \mathbb{R}. \qquad (3.6) \]

Definition 3.3.13. Let $X_t$ be a second order stationary process with autocorrelation function $C(t)$ whose Fourier transform is the measure $\rho(dx)$. The measure $\rho(dx)$ is called the spectral measure of the process $X_t$.

In the following we will assume that the spectral measure is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$ with density $f(x)$, i.e. $\rho(dx) = f(x)\,dx$. The Fourier transform $f(x)$ of the covariance function is called the spectral density of the process:
\[ f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} C(t)\,dt. \]
From (3.6) it follows that the autocorrelation function of a mean zero, second order stationary process is given by the inverse Fourier transform of the spectral density:
\[ C(t) = \int_{-\infty}^{\infty} e^{itx} f(x)\,dx. \qquad (3.7) \]
There are various cases where the experimentally measured quantity is the spectral density (or power spectrum) of a stationary stochastic process. Conversely, from a time series of observations of a stationary process we can calculate the autocorrelation function and, using (3.7), the spectral density.
The autocorrelation function of a second order stationary process enables us to associate a time scale to $X_t$, the correlation time $\tau_{cor}$:
\[ \tau_{cor} = \frac{1}{C(0)} \int_0^{\infty} C(\tau)\,d\tau = \int_0^{\infty} E(X_\tau X_0)/E(X_0^2)\,d\tau. \]
The slower the decay of the correlation function, the larger the correlation time. Notice that when the correlations do not decay fast enough for $C(t)$ to be integrable, the correlation time is infinite.
Example 3.3.14. Consider a mean zero, second order stationary process with correlation function
\[ R(t) = R(0) e^{-\alpha |t|}, \qquad (3.8) \]
where $\alpha > 0$. We will write $R(0) = \frac{D}{\alpha}$, where $D > 0$. The spectral density of this process is
\begin{align*}
f(x) &= \frac{1}{2\pi} \frac{D}{\alpha} \int_{-\infty}^{+\infty} e^{-ixt} e^{-\alpha|t|}\,dt \\
     &= \frac{1}{2\pi} \frac{D}{\alpha} \left( \int_{-\infty}^{0} e^{-ixt} e^{\alpha t}\,dt + \int_{0}^{+\infty} e^{-ixt} e^{-\alpha t}\,dt \right) \\
     &= \frac{1}{2\pi} \frac{D}{\alpha} \left( \frac{1}{-ix + \alpha} + \frac{1}{ix + \alpha} \right) \\
     &= \frac{D}{\pi} \frac{1}{x^2 + \alpha^2}.
\end{align*}
This function is called the Cauchy or the Lorentz distribution. The correlation time is (using $R(0) = D/\alpha$)
\[ \tau_{cor} = \int_0^{\infty} e^{-\alpha t}\,dt = \frac{1}{\alpha}. \]
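The Fourier-transform pair (3.7)-(3.8) is easy to check numerically. The following minimal sketch (not part of the original notes; the values $D = 1$, $\alpha = 2$ are arbitrary) evaluates the defining integral for the spectral density by simple quadrature and compares it with the Lorentzian computed above.

```python
import numpy as np

D, alpha = 1.0, 2.0
t = np.linspace(-40.0, 40.0, 400001)            # truncated time axis; R(t) decays fast
dt = t[1] - t[0]
R = (D / alpha) * np.exp(-alpha * np.abs(t))    # the exponential covariance (3.8)

def spectral_density(x):
    # Riemann-sum approximation of f(x) = (1/2pi) * int e^{-ixt} R(t) dt
    return (np.exp(-1j * x * t) * R).sum().real * dt / (2.0 * np.pi)

for x in [0.0, 1.0, 5.0]:
    exact = D / (np.pi * (x**2 + alpha**2))     # the Lorentzian of Example 3.3.14
    print(x, spectral_density(x), exact)
```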
A Gaussian process with an exponential correlation function is of particular importance in the
theory and applications of stochastic processes.
Definition 3.3.15. A real-valued Gaussian stationary process defined on $\mathbb{R}$ with correlation function given by (3.8) is called the (stationary) Ornstein-Uhlenbeck process.

The Ornstein-Uhlenbeck process is used as a model for the velocity of a Brownian particle. It is of interest to calculate the statistics of the position of the Brownian particle, i.e. of the integral
\[ X(t) = \int_0^t Y(s)\,ds, \qquad (3.9) \]
where $Y(t)$ denotes the stationary OU process.
Lemma 3.3.16. Let $Y(t)$ denote the stationary OU process with covariance function (3.8) and set $\alpha = D = 1$. Then the position process (3.9) is a mean zero Gaussian process with covariance function
\[ E(X(t)X(s)) = 2\min(t,s) + e^{-\min(t,s)} + e^{-\max(t,s)} - e^{-|t-s|} - 1. \qquad (3.10) \]

Proof. See Exercise 8.
3.3.3 Ergodic Properties of Second-Order Stationary Processes

Second order stationary processes have nice ergodic properties, provided that the correlation between values of the process at different times decays sufficiently fast. In this case, it is possible to show that we can calculate expectations by calculating time averages. An example of such a result is the following.

Theorem 3.3.17. Let $\{X_t\}_{t \geq 0}$ be a second order stationary process on a probability space $(\Omega, \mathcal{F}, P)$ with mean $\mu$ and covariance $R(t)$, and assume that $R(t) \in L^1(0, +\infty)$. Then
\[ \lim_{T \to +\infty} E\left| \frac{1}{T} \int_0^T X(s)\,ds - \mu \right|^2 = 0. \qquad (3.11) \]

For the proof of this result we will first need an elementary lemma.
Lemma 3.3.18. Let $R(t)$ be an integrable symmetric function. Then
\[ \int_0^T \int_0^T R(t-s)\,dt\,ds = 2 \int_0^T (T-s) R(s)\,ds. \qquad (3.12) \]

Proof. We make the change of variables $u = t - s$, $v = t + s$. The domain of integration in the $t, s$ variables is $[0,T] \times [0,T]$. In the $u, v$ variables it becomes $\{(u,v) : |u| \leq T, \; |u| \leq v \leq 2T - |u|\}$; for each fixed $u$, the variable $v$ ranges over an interval of length $2(T - |u|)$. The Jacobian of the transformation is
\[ J = \frac{\partial(t,s)}{\partial(u,v)} = \frac{1}{2}. \]
The integral becomes
\[ \int_0^T \int_0^T R(t-s)\,dt\,ds = \int_{-T}^{T} \int_{|u|}^{2T-|u|} R(u)\, J \,dv\,du = \int_{-T}^{T} (T - |u|) R(u)\,du = 2 \int_0^T (T-u) R(u)\,du, \]
where the symmetry of the function $R(u)$ was used in the last step.
Proof of Theorem 3.3.17. We use Lemma 3.3.18 to calculate:
\begin{align*}
E\left| \frac{1}{T}\int_0^T X_s\,ds - \mu \right|^2 &= \frac{1}{T^2} E\left| \int_0^T (X_s - \mu)\,ds \right|^2 \\
&= \frac{1}{T^2} E \int_0^T \int_0^T (X(t) - \mu)(X(s) - \mu)\,dt\,ds \\
&= \frac{1}{T^2} \int_0^T \int_0^T R(t-s)\,dt\,ds = \frac{2}{T^2} \int_0^T (T-u) R(u)\,du \\
&\leq \frac{2}{T} \int_0^{+\infty} \left| \Big(1 - \frac{u}{T}\Big) R(u) \right| du \leq \frac{2}{T} \int_0^{+\infty} |R(u)|\,du \to 0,
\end{align*}
using the dominated convergence theorem and the assumption $R(\cdot) \in L^1$.
Assume that $\mu = 0$ and define
\[ D = \int_0^{+\infty} R(t)\,dt, \qquad (3.13) \]
which, from our assumption on $R(t)$, is a finite quantity.² The above calculation suggests that, for $T \gg 1$, we have
\[ E\left( \int_0^T X(t)\,dt \right)^2 \approx 2 D T. \]
This implies that, at sufficiently long times, the mean square displacement of the integral of the ergodic second order stationary process $X_t$ scales linearly in time, with proportionality coefficient $2D$.

Assume that $X_t$ is the velocity of a (Brownian) particle. In this case, the integral of $X_t$,
\[ Z_t = \int_0^t X_s\,ds, \]
represents the particle position. From our calculation above we conclude that, at long times,
\[ E Z_t^2 \approx 2 D t, \]
where
\[ D = \int_0^{\infty} R(t)\,dt = \int_0^{\infty} E(X_t X_0)\,dt \qquad (3.14) \]
is the diffusion coefficient. Thus, one expects that at sufficiently long times, and under appropriate assumptions on the correlation function, the time integral of a stationary process will approximate a Brownian motion with diffusion coefficient $D$. The diffusion coefficient is an example of a transport coefficient and (3.14) is an example of the Green-Kubo formula: a transport coefficient can be calculated in terms of the time integral of an appropriate autocorrelation function. In the case of the diffusion coefficient we need to calculate the integral of the velocity autocorrelation function.

² Notice, however, that we do not know whether it is nonzero. This requires a separate argument.
Example 3.3.19. Consider the stochastic process with an exponential correlation function from Example 3.3.14, and assume that this stochastic process describes the velocity of a Brownian particle. Since $R(t) \in L^1(0,+\infty)$, Theorem 3.3.17 applies. Furthermore, the diffusion coefficient of the Brownian particle is given by
\[ \int_0^{+\infty} R(t)\,dt = R(0)\, \tau_{cor} = \frac{D}{\alpha^2}. \]
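A short numerical sketch of this Green-Kubo picture (not from the text; the choice $\alpha = D = 1$ is illustrative): we generate the OU velocity on a grid using its exact Gaussian transition, integrate it to obtain the position $Z_t$, and check that $E Z_t^2 / (2t)$ approaches $D/\alpha^2 = 1$ at long times.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, D, dt, T, paths = 1.0, 1.0, 0.01, 50.0, 2000
n = int(T / dt)
a = np.exp(-alpha * dt)
s = np.sqrt((D / alpha) * (1.0 - a**2))             # exact one-step noise amplitude
Y = rng.normal(0.0, np.sqrt(D / alpha), size=paths)  # start in the stationary state
Z = np.zeros(paths)                                  # particle positions Z_t
msd = np.empty(n)
for k in range(n):
    Z += Y * dt                                      # Z_t = int_0^t Y_s ds (Riemann sum)
    Y = a * Y + s * rng.normal(size=paths)           # exact OU transition over dt
    msd[k] = np.mean(Z**2)

t = dt * np.arange(1, n + 1)
print(msd[-1] / (2 * t[-1]))                         # should be close to D / alpha**2 = 1
```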
3.4 Brownian Motion

The most important continuous time stochastic process is Brownian motion. Brownian motion is a mean zero, continuous (i.e. it has continuous sample paths: for a.e. $\omega \in \Omega$ the function $X_t$ is a continuous function of time) process with independent Gaussian increments. A process $X_t$ has independent increments if for every sequence $t_0 < t_1 < \dots < t_n$ the random variables
\[ X_{t_1} - X_{t_0}, \; X_{t_2} - X_{t_1}, \dots, X_{t_n} - X_{t_{n-1}} \]
are independent. If, furthermore, for any $t_1, t_2, s \in T$ and Borel set $B \subset \mathbb{R}$,
\[ P(X_{t_2+s} - X_{t_1+s} \in B) = P(X_{t_2} - X_{t_1} \in B), \]
then the process $X_t$ has stationary independent increments.

Definition 3.4.1. A one dimensional standard Brownian motion $W(t) : \mathbb{R}^+ \to \mathbb{R}$ is a real valued stochastic process such that

i. $W(0) = 0$.

ii. $W(t)$ has independent increments.

iii. For every $t > s \geq 0$, $W(t) - W(s)$ has a Gaussian distribution with mean $0$ and variance $t - s$. That is, the density of the random variable $W(t) - W(s)$ is
\[ g(x; t, s) = \big( 2\pi (t-s) \big)^{-\frac{1}{2}} \exp\left( -\frac{x^2}{2(t-s)} \right). \qquad (3.15) \]

A $d$-dimensional standard Brownian motion $W(t) : \mathbb{R}^+ \to \mathbb{R}^d$ is a collection of $d$ independent one dimensional Brownian motions:
\[ W(t) = (W_1(t), \dots, W_d(t)), \]
where $W_i(t), \; i = 1, \dots, d$, are independent one dimensional Brownian motions. The density of the Gaussian random vector $W(t) - W(s)$ is thus
\[ g(x; t, s) = \big( 2\pi (t-s) \big)^{-d/2} \exp\left( -\frac{\|x\|^2}{2(t-s)} \right). \]
Brownian motion is sometimes referred to as the Wiener process.

Brownian motion has continuous paths. More precisely, it has a continuous modification.

Definition 3.4.2. Let $X_t$ and $Y_t$, $t \in T$, be two stochastic processes defined on the same probability space $(\Omega, \mathcal{F}, P)$. The process $Y_t$ is said to be a modification of $X_t$ if $P(X_t = Y_t) = 1$ for all $t \in T$.

Lemma 3.4.3. There is a continuous modification of Brownian motion.

This follows from a theorem due to Kolmogorov.

Theorem 3.4.4 (Kolmogorov). Let $X_t$, $t \in [0,\infty)$, be a stochastic process on a probability space $(\Omega, \mathcal{F}, P)$. Suppose that there are positive constants $\alpha$ and $\beta$, and for each $T \geq 0$ there is a constant $C(T)$ such that
\[ E|X_t - X_s|^{\alpha} \leq C(T) |t-s|^{1+\beta}, \qquad 0 \leq s, t \leq T. \qquad (3.16) \]
Then there exists a continuous modification $Y_t$ of the process $X_t$.

The proof of Lemma 3.4.3 is left as an exercise.

Remark 3.4.5. Equivalently, we could have defined the one dimensional standard Brownian motion as a stochastic process on a probability space $(\Omega, \mathcal{F}, P)$ with continuous paths for almost all $\omega \in \Omega$, and Gaussian finite dimensional distributions with zero mean and covariance $E(W_{t_i} W_{t_j}) = \min(t_i, t_j)$. One can then show that Definition 3.4.1 follows from the above definition.
It is possible to prove rigorously the existence of the Wiener process (Brownian motion):

[Figure 3.1: Brownian sample paths]

Theorem 3.4.6 (Wiener). There exists an almost-surely continuous process $W_t$ with independent increments such that $W_0 = 0$ and such that for each $t \geq 0$ the random variable $W_t$ is $\mathcal{N}(0, t)$. Furthermore, $W_t$ is almost surely locally Hölder continuous with exponent $\alpha$ for any $\alpha \in (0, \frac{1}{2})$.

Notice that Brownian paths are not differentiable.
We can also construct Brownian motion as the limit of an appropriately rescaled random walk: let $X_1, X_2, \dots$ be iid random variables on a probability space $(\Omega, \mathcal{F}, P)$ with mean $0$ and variance $1$. Define the discrete time stochastic process $S_n$ with $S_0 = 0$ and $S_n = \sum_{j=1}^{n} X_j$, $n \geq 1$. Define now a continuous time stochastic process with continuous paths as the linearly interpolated, appropriately rescaled random walk
\[ W_t^n = \frac{1}{\sqrt{n}} S_{[nt]} + (nt - [nt]) \frac{1}{\sqrt{n}} X_{[nt]+1}, \]
where $[\cdot]$ denotes the integer part of a number. Then $W_t^n$ converges weakly, as $n \to +\infty$, to a one dimensional standard Brownian motion.
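A minimal sketch of this construction (not part of the text; the $\pm 1$ coin flips and grid sizes are an arbitrary choice of iid variables with mean $0$ and variance $1$):

```python
import numpy as np

rng = np.random.default_rng(1)

def rescaled_walk(n, tgrid):
    # X[0], ..., X[n] play the role of X_1, ..., X_{n+1}: iid, mean 0, variance 1
    X = rng.choice([-1.0, 1.0], size=n + 1)
    S = np.concatenate(([0.0], np.cumsum(X)))   # S_0, S_1, ..., S_{n+1}
    k = np.floor(n * tgrid).astype(int)          # [nt]
    frac = n * tgrid - k                          # nt - [nt]
    return (S[k] + frac * X[k]) / np.sqrt(n)      # linearly interpolated W^n_t

tgrid = np.linspace(0.0, 1.0, 501)
for n in [10, 100, 10000]:
    print(n, rescaled_walk(n, tgrid)[-1])         # W^n_1 is approximately N(0, 1) for large n
```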
Brownian motion is a Gaussian process. For the $d$-dimensional Brownian motion, and for $I$ the $d \times d$ identity matrix, we have (see (2.7) and (2.8))
\[ E W(t) = 0 \qquad \forall t \geq 0 \]
and
\[ E\big( (W(t) - W(s)) \otimes (W(t) - W(s)) \big) = (t-s) I. \qquad (3.17) \]
Moreover,
\[ E\big( W(t) \otimes W(s) \big) = \min(t, s)\, I. \qquad (3.18) \]
From the formula for the Gaussian density $g(x, t-s)$, eqn. (3.15), we immediately conclude that $W(t) - W(s)$ and $W(t+u) - W(s+u)$ have the same pdf. Consequently, Brownian motion has stationary increments. Notice, however, that Brownian motion itself is not a stationary process. Since $W(t) = W(t) - W(0)$, the pdf of $W(t)$ is
\[ g(x, t) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/2t}. \]
We can easily calculate all moments of the Brownian motion:
\[ E\big(W^n(t)\big) = \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^{+\infty} x^n e^{-x^2/2t}\,dx = \begin{cases} 1 \cdot 3 \cdots (n-1)\, t^{n/2}, & n \text{ even}, \\ 0, & n \text{ odd}. \end{cases} \]
Brownian motion is invariant under various transformations in time.

Theorem 3.4.7. Let $W_t$ denote a standard Brownian motion in $\mathbb{R}$. Then $W_t$ has the following properties:

i. (Rescaling). For each $c > 0$ define $X_t = \frac{1}{\sqrt{c}} W(ct)$. Then $(X_t, \; t \geq 0) = (W_t, \; t \geq 0)$ in law.

ii. (Shifting). For each $c > 0$, $W_{c+t} - W_c$, $t \geq 0$, is a Brownian motion which is independent of $W_u$, $u \in [0, c]$.

iii. (Time reversal). Define $X_t = W_{1-t} - W_1$, $t \in [0, 1]$. Then $(X_t, \; t \in [0,1]) = (W_t, \; t \in [0,1])$ in law.

iv. (Inversion). Let $X_t$, $t \geq 0$, be defined by $X_0 = 0$, $X_t = t W(1/t)$. Then $(X_t, \; t \geq 0) = (W_t, \; t \geq 0)$ in law.

We emphasize that the equivalence in the above theorem holds in law and not in a pathwise sense.

Proof. See Exercise 13.
We can also add a drift and change the diffusion coefficient of the Brownian motion: we will define a Brownian motion with drift $\mu$ and variance $\sigma^2$ as the process
\[ X_t = \mu t + \sigma W_t. \]
The mean and variance of $X_t$ are
\[ E X_t = \mu t, \qquad E(X_t - E X_t)^2 = \sigma^2 t. \]
Notice that $X_t$ satisfies the equation
\[ dX_t = \mu \, dt + \sigma \, dW_t. \]
This is the simplest example of a stochastic differential equation.
We can define the OU process through the Brownian motion via a time change.

Lemma 3.4.8. Let $W(t)$ be a standard Brownian motion and consider the process
\[ V(t) = e^{-t} W(e^{2t}). \]
Then $V(t)$ is a Gaussian stationary process with mean $0$ and correlation function
\[ R(t) = e^{-|t|}. \qquad (3.19) \]

For the proof of this result we first need to show that time changed Gaussian processes are also Gaussian.

Lemma 3.4.9. Let $X(t)$ be a Gaussian stochastic process and let $Y(t) = X(f(t))$, where $f(t)$ is a strictly increasing function. Then $Y(t)$ is also a Gaussian process.

Proof. We need to show that, for all positive integers $N$ and all sequences of times $\{t_1, t_2, \dots, t_N\}$, the random vector
\[ \{ Y(t_1), Y(t_2), \dots, Y(t_N) \} \qquad (3.20) \]
is a multivariate Gaussian random variable. Since $f(t)$ is strictly increasing, it is invertible and hence there exist $s_i$, $i = 1, \dots, N$, such that $s_i = f^{-1}(t_i)$. Thus, the random vector (3.20) can be rewritten as
\[ \{ X(s_1), X(s_2), \dots, X(s_N) \}, \]
which is Gaussian for all $N$ and all choices of times $s_1, s_2, \dots, s_N$. Hence $Y(t)$ is also Gaussian.
Proof of Lemma 3.4.8. The fact that $V(t)$ is mean zero follows immediately from the fact that $W(t)$ is mean zero. To show that the correlation function of $V(t)$ is given by (3.19), we calculate
\[ E(V(t)V(s)) = e^{-t-s} E\big(W(e^{2t}) W(e^{2s})\big) = e^{-t-s} \min(e^{2t}, e^{2s}) = e^{-|t-s|}. \]
The Gaussianity of the process $V(t)$ follows from Lemma 3.4.9 (notice that the transformation that gives $V(t)$ in terms of $W(t)$ is invertible and we can write $W(s) = s^{1/2} V\big(\tfrac{1}{2} \ln s\big)$).
3.5 Other Examples of Stochastic Processes

3.5.1 Brownian Bridge

Let $W(t)$ be a standard one dimensional Brownian motion. We define the Brownian bridge (from $0$ to $0$) to be the process
\[ B_t = W_t - t W_1, \qquad t \in [0, 1]. \qquad (3.21) \]
Notice that $B_0 = B_1 = 0$. Equivalently, we can define the Brownian bridge to be the continuous Gaussian process $\{B_t : 0 \leq t \leq 1\}$ such that
\[ E B_t = 0, \qquad E(B_t B_s) = \min(s, t) - st, \qquad s, t \in [0, 1]. \qquad (3.22) \]
Another, equivalent definition of the Brownian bridge is through an appropriate time change of the Brownian motion:
\[ B_t = (1-t) \, W\Big( \frac{t}{1-t} \Big), \qquad t \in [0, 1). \qquad (3.23) \]
Conversely, we can write the Brownian motion as a time change of the Brownian bridge:
\[ W_t = (t+1) \, B\Big( \frac{t}{1+t} \Big), \qquad t \geq 0. \]
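A short sketch of definition (3.21) (not part of the text; grid sizes and the checked pair $(s,t) = (0.25, 0.6)$ are arbitrary): generate Brownian paths on a uniform grid of $[0,1]$, form $B_t = W_t - t W_1$, and compare the sample covariance with $\min(s,t) - st$.

```python
import numpy as np

rng = np.random.default_rng(3)
paths, N = 20000, 100
t = np.linspace(0.0, 1.0, N + 1)
dW = rng.normal(0.0, np.sqrt(1.0 / N), size=(paths, N))          # Brownian increments
W = np.concatenate((np.zeros((paths, 1)), np.cumsum(dW, axis=1)), axis=1)
B = W - t * W[:, -1:]                                             # B_t = W_t - t W_1
s_idx, t_idx = 25, 60                                             # grid points s = 0.25, t = 0.60
print(np.mean(B[:, s_idx] * B[:, t_idx]), min(0.25, 0.60) - 0.25 * 0.60)
```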
3.5.2 Fractional Brownian Motion

Definition 3.5.1. A (normalized) fractional Brownian motion $W_t^H$, $t \geq 0$, with Hurst parameter $H \in (0,1)$ is a centered Gaussian process with continuous sample paths whose covariance is given by
\[ E\big( W_t^H W_s^H \big) = \frac{1}{2} \big( s^{2H} + t^{2H} - |t-s|^{2H} \big). \qquad (3.24) \]

Proposition 3.5.2. Fractional Brownian motion has the following properties.

i. When $H = \frac{1}{2}$, $W_t^{1/2}$ becomes the standard Brownian motion.

ii. $W_0^H = 0$, $E W_t^H = 0$, $E (W_t^H)^2 = |t|^{2H}$, $t \geq 0$.

iii. It has stationary increments, $E(W_t^H - W_s^H)^2 = |t-s|^{2H}$.

iv. It has the following self similarity property
\[ (W_{\alpha t}^H, \; t \geq 0) = (\alpha^H W_t^H, \; t \geq 0), \qquad \alpha > 0, \qquad (3.25) \]
where the equivalence is in law.

Proof. See Exercise 19.
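Since fractional Brownian motion is specified entirely by the Gaussian covariance (3.24), its values on a finite grid can be sampled by factorizing the covariance matrix. A sketch (not from the text; $H = 0.75$ and the grid/sample sizes are illustrative, and the Cholesky approach is $O(N^3)$, so it is only suitable for small grids):

```python
import numpy as np

def fbm_paths(H, N=200, paths=2000, seed=4):
    t = np.linspace(1.0 / N, 1.0, N)                   # exclude t = 0, where W^H_0 = 0
    s, tt = np.meshgrid(t, t)
    C = 0.5 * (s**(2 * H) + tt**(2 * H) - np.abs(tt - s)**(2 * H))   # covariance (3.24)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(N))       # small jitter for numerical stability
    Z = np.random.default_rng(seed).normal(size=(N, paths))
    return t, L @ Z                                     # each column is one sample path

t, X = fbm_paths(H=0.75)
print(np.var(X[-1]))                                    # should be about t^{2H} = 1 at t = 1
```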
3.5.3 The Poisson Process

Another fundamental continuous time process is the Poisson process:

Definition 3.5.3. The Poisson process with intensity $\lambda$, denoted by $N(t)$, is an integer-valued, continuous time, stochastic process with independent increments satisfying
\[ P\big[ (N(t) - N(s)) = k \big] = \frac{e^{-\lambda(t-s)} \big(\lambda(t-s)\big)^k}{k!}, \qquad t > s \geq 0, \; k \in \mathbb{N}. \]

The Poisson process does not have a continuous modification. See Exercise 20.
3.6 The Karhunen-Loève Expansion

Let $f \in L^2(\Omega)$ where $\Omega$ is a subset of $\mathbb{R}^d$ and let $\{e_n\}_{n=1}^{\infty}$ be an orthonormal basis in $L^2(\Omega)$. Then, it is well known that $f$ can be written as a series expansion
\[ f = \sum_{n=1}^{\infty} f_n e_n, \]
where
\[ f_n = \int_{\Omega} f(x) e_n(x)\,dx. \]
The convergence is in $L^2(\Omega)$:
\[ \lim_{N \to \infty} \Big\| f(x) - \sum_{n=1}^{N} f_n e_n(x) \Big\|_{L^2(\Omega)} = 0. \]
It turns out that we can obtain a similar expansion for an $L^2$ mean zero process which is continuous in the $L^2$ sense:
\[ E X_t^2 < +\infty, \qquad E X_t = 0, \qquad \lim_{h \to 0} E|X_{t+h} - X_t|^2 = 0. \qquad (3.26) \]
For simplicity we will take $T = [0, 1]$. Let $R(t, s) = E(X_t X_s)$ be the autocorrelation function. Notice that from (3.26) it follows that $R(t, s)$ is continuous in both $t$ and $s$ (Exercise 21).
Let us assume an expansion of the form
\[ X_t(\omega) = \sum_{n=1}^{\infty} \xi_n(\omega) e_n(t), \qquad t \in [0, 1], \qquad (3.27) \]
where $\{e_n\}_{n=1}^{\infty}$ is an orthonormal basis in $L^2(0,1)$. The random variables $\xi_n$ are calculated as
\[ \int_0^1 X_t e_k(t)\,dt = \int_0^1 \sum_{n=1}^{\infty} \xi_n e_n(t) e_k(t)\,dt = \sum_{n=1}^{\infty} \xi_n \delta_{nk} = \xi_k, \]
where we assumed that we can interchange the summation and integration. We will assume that these random variables are orthogonal:
\[ E(\xi_n \xi_m) = \lambda_n \delta_{nm}, \]
where $\{\lambda_n\}_{n=1}^{\infty}$ are positive numbers that will be determined later.

Assuming that an expansion of the form (3.27) exists, we can calculate
\[ R(t,s) = E(X_t X_s) = E\Big( \sum_{k=1}^{\infty} \sum_{\ell=1}^{\infty} \xi_k e_k(t) \xi_\ell e_\ell(s) \Big) = \sum_{k=1}^{\infty} \sum_{\ell=1}^{\infty} E(\xi_k \xi_\ell) \, e_k(t) e_\ell(s) = \sum_{k=1}^{\infty} \lambda_k e_k(t) e_k(s). \]
Consequently, in order for the expansion (3.27) to be valid we need
\[ R(t,s) = \sum_{k=1}^{\infty} \lambda_k e_k(t) e_k(s). \qquad (3.28) \]
From equation (3.28) it follows that
\[ \int_0^1 R(t,s) e_n(s)\,ds = \int_0^1 \sum_{k=1}^{\infty} \lambda_k e_k(t) e_k(s) e_n(s)\,ds = \sum_{k=1}^{\infty} \lambda_k e_k(t) \int_0^1 e_k(s) e_n(s)\,ds = \sum_{k=1}^{\infty} \lambda_k e_k(t) \delta_{kn} = \lambda_n e_n(t). \]
Hence, in order for the expansion (3.27) to be valid, $\{\lambda_n, e_n(t)\}_{n=1}^{\infty}$ have to be the eigenvalues and eigenfunctions of the integral operator whose kernel is the correlation function of $X_t$:
\[ \int_0^1 R(t,s) e_n(s)\,ds = \lambda_n e_n(t). \qquad (3.29) \]
Hence, in order to prove the expansion (3.27) we need to study the eigenvalue problem for the integral operator $\mathcal{R} : L^2[0,1] \to L^2[0,1]$. It is easy to check that this operator is self-adjoint ($(\mathcal{R}f, h) = (f, \mathcal{R}h)$ for all $f, h \in L^2(0,1)$) and nonnegative ($(\mathcal{R}f, f) \geq 0$ for all $f \in L^2(0,1)$). Hence, all its eigenvalues are real and nonnegative. Furthermore, it is a compact operator (if $\{\phi_n\}_{n=1}^{\infty}$ is a bounded sequence in $L^2(0,1)$, then $\{\mathcal{R}\phi_n\}_{n=1}^{\infty}$ has a convergent subsequence). The spectral theorem for compact, self-adjoint operators implies that $\mathcal{R}$ has a countable sequence of eigenvalues tending to $0$. Furthermore, for every $f \in L^2(0,1)$ we can write
\[ f = f_0 + \sum_{n=1}^{\infty} f_n e_n(t), \]
where $\mathcal{R}f_0 = 0$, the $e_n(t)$ are the eigenfunctions of $\mathcal{R}$ corresponding to nonzero eigenvalues, and the convergence is in $L^2$. Finally, Mercer's theorem states that for $R(t,s)$ continuous on $[0,1] \times [0,1]$, the expansion (3.28) is valid, where the series converges absolutely and uniformly.

Now we are ready to prove (3.27).
Theorem 3.6.1 (Karhunen-Loève). Let $X_t$, $t \in [0,1]$, be an $L^2$ process with zero mean and continuous correlation function $R(t,s)$. Let $\{\lambda_n, e_n(t)\}_{n=1}^{\infty}$ be the eigenvalues and eigenfunctions of the operator $\mathcal{R}$ defined in (3.35). Then
\[ X_t = \sum_{n=1}^{\infty} \xi_n e_n(t), \qquad t \in [0,1], \qquad (3.30) \]
where
\[ \xi_n = \int_0^1 X_t e_n(t)\,dt, \qquad E\xi_n = 0, \qquad E(\xi_n \xi_m) = \lambda_n \delta_{nm}. \qquad (3.31) \]
The series converges in $L^2$ to $X(t)$, uniformly in $t$.
Proof. The fact that $E\xi_n = 0$ follows from the fact that $X_t$ is mean zero. The orthogonality of the random variables $\{\xi_n\}_{n=1}^{\infty}$ follows from the orthogonality of the eigenfunctions of $\mathcal{R}$:
\begin{align*}
E(\xi_n \xi_m) &= E \int_0^1 \int_0^1 X_t X_s e_n(t) e_m(s)\,dt\,ds = \int_0^1 \int_0^1 R(t,s) e_n(t) e_m(s)\,ds\,dt \\
&= \lambda_n \int_0^1 e_n(s) e_m(s)\,ds = \lambda_n \delta_{nm}.
\end{align*}
Consider now the partial sum $S_N = \sum_{n=1}^{N} \xi_n e_n(t)$. We have
\begin{align*}
E|X_t - S_N|^2 &= E X_t^2 + E S_N^2 - 2 E(X_t S_N) \\
&= R(t,t) + E \sum_{k,\ell=1}^{N} \xi_k \xi_\ell e_k(t) e_\ell(t) - 2 E\Big( X_t \sum_{n=1}^{N} \xi_n e_n(t) \Big) \\
&= R(t,t) + \sum_{k=1}^{N} \lambda_k |e_k(t)|^2 - 2 E \sum_{k=1}^{N} \int_0^1 X_t X_s e_k(s) e_k(t)\,ds \\
&= R(t,t) - \sum_{k=1}^{N} \lambda_k |e_k(t)|^2 \to 0,
\end{align*}
by Mercer's theorem.
Remark 3.6.2. Let $X_t$ be a Gaussian second order process with continuous covariance $R(t,s)$. Then the random variables $\{\xi_k\}_{k=1}^{\infty}$ are Gaussian, since they are defined through the time integral of a Gaussian process. Furthermore, since they are Gaussian and orthogonal, they are also independent. Hence, for Gaussian processes the Karhunen-Loève expansion becomes
\[ X_t = \sum_{k=1}^{+\infty} \sqrt{\lambda_k}\, \zeta_k e_k(t), \qquad (3.32) \]
where $\{\zeta_k\}_{k=1}^{\infty}$ are independent $\mathcal{N}(0,1)$ random variables.
Example 3.6.3 (The Karhunen-Loève Expansion for Brownian Motion). The correlation function of Brownian motion is $R(t,s) = \min(t,s)$. The eigenvalue problem $\mathcal{R}\psi_n = \lambda_n \psi_n$ becomes
\[ \int_0^1 \min(t,s)\, \psi_n(s)\,ds = \lambda_n \psi_n(t). \]
Let us assume that $\lambda_n > 0$ (it is easy to check that $0$ is not an eigenvalue). Upon setting $t = 0$ we obtain $\psi_n(0) = 0$. The eigenvalue problem can be rewritten in the form
\[ \int_0^t s\, \psi_n(s)\,ds + t \int_t^1 \psi_n(s)\,ds = \lambda_n \psi_n(t). \]
We differentiate this equation once:
\[ \int_t^1 \psi_n(s)\,ds = \lambda_n \psi_n'(t). \]
We set $t = 1$ in this equation to obtain the second boundary condition $\psi_n'(1) = 0$. A second differentiation yields
\[ -\psi_n(t) = \lambda_n \psi_n''(t), \]
where primes denote differentiation with respect to $t$. Thus, in order to calculate the eigenvalues and eigenfunctions of the integral operator whose kernel is the covariance function of Brownian motion, we need to solve the Sturm-Liouville problem
\[ -\psi_n(t) = \lambda_n \psi_n''(t), \qquad \psi(0) = \psi'(1) = 0. \]
It is easy to check that the eigenvalues and (normalized) eigenfunctions are
\[ \psi_n(t) = \sqrt{2}\, \sin\Big( \frac{1}{2}(2n-1)\pi t \Big), \qquad \lambda_n = \Big( \frac{2}{(2n-1)\pi} \Big)^2. \]
Thus, the Karhunen-Loève expansion of Brownian motion on $[0,1]$ is
\[ W_t = \sqrt{2} \sum_{n=1}^{\infty} \zeta_n \frac{2}{(2n-1)\pi} \sin\Big( \frac{1}{2}(2n-1)\pi t \Big). \qquad (3.33) \]
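Formula (3.33) gives a direct way of generating approximate Brownian paths (compare Exercise 29). A minimal sketch (not from the text; the truncation level $M = 200$ and the grid are arbitrary): truncate the series after $M$ terms with independent $\mathcal{N}(0,1)$ coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)
M, t = 200, np.linspace(0.0, 1.0, 501)
n = np.arange(1, M + 1)
zeta = rng.normal(size=M)                               # independent N(0, 1) coefficients
# W_t ~ sqrt(2) * sum_n zeta_n * (2 / ((2n-1) pi)) * sin((2n-1) pi t / 2)
basis = np.sqrt(2.0) * np.sin(0.5 * (2 * n[:, None] - 1) * np.pi * t[None, :])
coeff = 2.0 / ((2 * n - 1) * np.pi)
W = (zeta * coeff) @ basis                              # one truncated KL sample path
print(W[-1])                                            # a sample of W_1, approximately N(0, 1)
```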
We can use the KL expansion in order to study the $L^2$-regularity of stochastic processes. First, let $\mathcal{R}$ be a compact, symmetric positive definite operator on $L^2(0,1)$ with eigenvalues and normalized eigenfunctions $\{\lambda_k, e_k(x)\}_{k=1}^{+\infty}$ and consider a function $f \in L^2(0,1)$ with $\int_0^1 f(s)\,ds = 0$. We can define the one parameter family of Hilbert spaces $H^{\alpha}$ through the norm
\[ \|f\|_{\alpha}^2 = \|\mathcal{R}^{-\alpha/2} f\|_{L^2}^2 = \sum_k \lambda_k^{-\alpha} |f_k|^2, \qquad f_k = (f, e_k)_{L^2}. \]
The inner product can be obtained through polarization. This norm enables us to measure the regularity of the function $f(t)$.³ Let $X_t$ be a mean zero second order (i.e. with finite second moment) process with continuous autocorrelation function. Define the space $\mathcal{H}^{\alpha} := L^2\big((\Omega, P), H^{\alpha}(0,1)\big)$ with (semi)norm
\[ \|X_t\|_{\alpha}^2 = E \|X_t\|_{H^{\alpha}}^2 = \sum_k \lambda_k^{1-\alpha}. \qquad (3.34) \]
Notice that the regularity of the stochastic process $X_t$ depends on the decay of the eigenvalues of the integral operator $\mathcal{R}$ with kernel $R(t,s)$.

As an example, consider the $L^2$-regularity of Brownian motion. From Example 3.6.3 we know that $\lambda_k \sim k^{-2}$. Consequently, from (3.34) we get that, in order for $W_t$ to be an element of the space $\mathcal{H}^{\alpha}$, we need that
\[ \sum_k k^{-2(1-\alpha)} < +\infty, \]
from which we obtain that $\alpha < 1/2$. This is consistent with the Hölder continuity of Brownian motion from Theorem 3.4.6.⁴

³ Think of $\mathcal{R}$ as being the inverse of the Laplacian with periodic boundary conditions. In this case $H^{\alpha}$ coincides with the standard fractional Sobolev space.

⁴ Notice, however, that Wiener's theorem refers to a.s. Hölder continuity, whereas the calculation presented in this section is about $L^2$-continuity.

3.7 Discussion and Bibliography

The Ornstein-Uhlenbeck process was introduced by Ornstein and Uhlenbeck in 1930 as a model for the velocity of a Brownian particle [93].

The kind of analysis presented in Section 3.3.3 was initiated by G.I. Taylor in [91]. The proof of Bochner's theorem 3.3.12 can be found in [50], where additional material on stationary processes can be found. See also [46].

The spectral theorem for compact, self-adjoint operators which was needed in the proof of the Karhunen-Loève theorem can be found in [81]. The Karhunen-Loève expansion is also valid for random fields. See [88] and the references therein.
3.8 Exercises

1. Let $Y_0, Y_1, \dots$ be a sequence of independent, identically distributed random variables and consider the stochastic process $X_n = Y_n$.

(a) Show that $X_n$ is a strictly stationary process.

(b) Assume that $E Y_0 = \mu < +\infty$ and $E Y_0^2 = \sigma^2 < +\infty$. Show that
\[ \lim_{N \to +\infty} E\Big| \frac{1}{N} \sum_{j=0}^{N-1} X_j - \mu \Big|^2 = 0. \]

(c) Let $f$ be such that $E f^2(Y_0) < +\infty$. Show that
\[ \lim_{N \to +\infty} E\Big| \frac{1}{N} \sum_{j=0}^{N-1} f(X_j) - E f(Y_0) \Big|^2 = 0. \]

2. Let $Z$ be a random variable and define the stochastic process $X_n = Z$, $n = 0, 1, 2, \dots$. Show that $X_n$ is a strictly stationary process.

3. Let $A_0, A_1, \dots, A_m$ and $B_0, B_1, \dots, B_m$ be uncorrelated random variables with mean zero and variances $E A_i^2 = \sigma_i^2$, $E B_i^2 = \sigma_i^2$, $i = 1, \dots, m$. Let $\omega_0, \omega_1, \dots, \omega_m \in [0, \pi]$ be distinct frequencies and define, for $n = 0, 1, 2, \dots$, the stochastic process
\[ X_n = \sum_{k=0}^{m} \big( A_k \cos(n \omega_k) + B_k \sin(n \omega_k) \big). \]
Calculate the mean and the covariance of $X_n$. Show that it is a weakly stationary process.

4. Let $\{\xi_n : n = 0, 1, 2, \dots\}$ be uncorrelated random variables with $E \xi_n = \mu$, $E(\xi_n - \mu)^2 = \sigma^2$, $n = 0, 1, 2, \dots$. Let $a_1, a_2, \dots$ be arbitrary real numbers and consider the stochastic process
\[ X_n = a_1 \xi_n + a_2 \xi_{n-1} + \dots + a_m \xi_{n-m+1}. \]

(a) Calculate the mean, variance and the covariance function of $X_n$. Show that it is a weakly stationary process.

(b) Set $a_k = 1/\sqrt{m}$ for $k = 1, \dots, m$. Calculate the covariance function and study the cases $m = 1$ and $m \to +\infty$.

5. Let $W(t)$ be a standard one dimensional Brownian motion. Calculate the following expectations.

(a) $E e^{iW(t)}$.

(b) $E e^{i(W(t) + W(s))}$, $t, s \in (0, +\infty)$.

(c) $E\big( \sum_{i=1}^{n} c_i W(t_i) \big)^2$, where $c_i \in \mathbb{R}$, $i = 1, \dots, n$ and $t_i \in (0, +\infty)$, $i = 1, \dots, n$.

(d) $E e^{i \sum_{i=1}^{n} c_i W(t_i)}$, where $c_i \in \mathbb{R}$, $i = 1, \dots, n$ and $t_i \in (0, +\infty)$, $i = 1, \dots, n$.

6. Let $W_t$ be a standard one dimensional Brownian motion and define
\[ B_t = W_t - t W_1, \qquad t \in [0, 1]. \]

(a) Show that $B_t$ is a Gaussian process with
\[ E B_t = 0, \qquad E(B_t B_s) = \min(t, s) - ts. \]

(b) Show that, for $t \in [0, 1)$, an equivalent definition of $B_t$ is through the formula
\[ B_t = (1-t)\, W\Big( \frac{t}{1-t} \Big). \]

(c) Calculate the distribution function of $B_t$.

7. Let $X_t$ be a mean-zero second order stationary process with autocorrelation function
\[ R(t) = \sum_{j=1}^{N} \frac{\lambda_j^2}{\alpha_j} e^{-\alpha_j |t|}, \]
where $\{\alpha_j, \lambda_j\}_{j=1}^{N}$ are positive real numbers.

(a) Calculate the spectral density and the correlation time of this process.

(b) Show that the assumptions of Theorem 3.3.17 are satisfied and use the argument presented in Section 3.3.3 (i.e. the Green-Kubo formula) to calculate the diffusion coefficient of the process $Z_t = \int_0^t X_s\,ds$.

(c) Under what assumptions on the coefficients $\{\alpha_j, \lambda_j\}_{j=1}^{N}$ can you study the above questions in the limit $N \to +\infty$?

8. Prove Lemma 3.3.16, i.e. formula (3.10).

9. Let $a_1, \dots, a_n$ and $s_1, \dots, s_n$ be positive real numbers. Calculate the mean and variance of the random variable
\[ X = \sum_{i=1}^{n} a_i W(s_i). \]

10. Let $W(t)$ be the standard one-dimensional Brownian motion and let $\sigma, s_1, s_2 > 0$. Calculate

(a) $E e^{\sigma W(t)}$.

(b) $E\big( \sin(\sigma W(s_1)) \sin(\sigma W(s_2)) \big)$.

11. Let $W_t$ be a one dimensional Brownian motion, let $\mu, \sigma > 0$ and define
\[ S_t = e^{\mu t + \sigma W_t}. \]

(a) Calculate the mean and the variance of $S_t$.

(b) Calculate the probability density function of $S_t$.

12. Use Theorem 3.4.4 to prove Lemma 3.4.3.

13. Prove Theorem 3.4.7.

14. Use Lemma 3.4.8 to calculate the distribution function of the stationary Ornstein-Uhlenbeck process.

15. Calculate the mean and the correlation function of the integral of a standard Brownian motion
\[ Y_t = \int_0^t W_s\,ds. \]

16. Show that the process
\[ Y_t = \int_t^{t+1} (W_s - W_t)\,ds, \qquad t \in \mathbb{R}, \]
is second order stationary.

17. Let $V_t = e^{-t} W(e^{2t})$ be the stationary Ornstein-Uhlenbeck process. Give the definition and study the main properties of the Ornstein-Uhlenbeck bridge.

18. The autocorrelation function of the velocity $Y(t)$ of a Brownian particle moving in a harmonic potential $V(x) = \frac{1}{2} \omega_0^2 x^2$ is
\[ R(t) = e^{-\gamma |t|} \Big( \cos(\delta |t|) - \frac{\gamma}{\delta} \sin(\delta |t|) \Big), \]
where $\gamma$ is the friction coefficient and $\delta = \sqrt{\omega_0^2 - \gamma^2}$.

(a) Calculate the spectral density of $Y(t)$.

(b) Calculate the mean square displacement $E(X(t))^2$ of the position of the Brownian particle $X(t) = \int_0^t Y(s)\,ds$. Study the limit $t \to +\infty$.

19. Show the scaling property (3.25) of the fractional Brownian motion.

20. Use Theorem 3.4.4 to show that there does not exist a continuous modification of the Poisson process.

21. Show that the correlation function of a process $X_t$ satisfying (3.26) is continuous in both $t$ and $s$.

22. Let $X_t$ be a stochastic process satisfying (3.26) and $R(t,s)$ its correlation function. Show that the integral operator $\mathcal{R} : L^2[0,1] \to L^2[0,1]$,
\[ \mathcal{R}f := \int_0^1 R(t,s) f(s)\,ds, \qquad (3.35) \]
is self-adjoint and nonnegative. Show that all of its eigenvalues are real and nonnegative. Show that eigenfunctions corresponding to different eigenvalues are orthogonal.

23. Let $H$ be a Hilbert space. An operator $\mathcal{R} : H \to H$ is said to be Hilbert-Schmidt if there exists a complete orthonormal sequence $\{e_n\}_{n=1}^{\infty}$ in $H$ such that
\[ \sum_{n=1}^{\infty} \| \mathcal{R} e_n \|^2 < \infty. \]
Let $\mathcal{R} : L^2[0,1] \to L^2[0,1]$ be the operator defined in (3.35) with $R(t,s)$ being continuous both in $t$ and $s$. Show that it is a Hilbert-Schmidt operator.

24. Let $X_t$ be a mean zero second order stationary process defined in the interval $[0,T]$ with continuous covariance $R(t)$ and let $\{\lambda_n\}_{n=1}^{+\infty}$ be the eigenvalues of the covariance operator. Show that
\[ \sum_{n=1}^{\infty} \lambda_n = T\, R(0). \]

25. Calculate the Karhunen-Loève expansion for a second order stochastic process with correlation function $R(t,s) = ts$.

26. Calculate the Karhunen-Loève expansion of the Brownian bridge on $[0,1]$.

27. Let $X_t$, $t \in [0,T]$, be a second order process with continuous covariance and Karhunen-Loève expansion
\[ X_t = \sum_{k=1}^{\infty} \xi_k e_k(t). \]
Define the process
\[ Y(t) = f(t) X_{\tau(t)}, \qquad t \in [0, S], \]
where $f(t)$ is a continuous function and $\tau(t)$ a continuous, nondecreasing function with $\tau(0) = 0$, $\tau(S) = T$. Find the Karhunen-Loève expansion of $Y(t)$, in an appropriate weighted $L^2$ space, in terms of the KL expansion of $X_t$. Use this in order to calculate the KL expansion of the Ornstein-Uhlenbeck process.

28. Calculate the Karhunen-Loève expansion of a centered Gaussian stochastic process with covariance function $R(s,t) = \cos(2\pi(t-s))$.

29. Use the Karhunen-Loève expansion to generate paths of

(a) the Brownian motion on $[0,1]$;

(b) the Brownian bridge on $[0,1]$;

(c) the Ornstein-Uhlenbeck process on $[0,1]$.

Study computationally the convergence of the KL expansion for these processes. How many terms do you need to keep in the KL expansion in order to calculate accurate statistics of these processes?
Chapter 4
Markov Processes
4.1 Introduction
In this chapter we will study some of the basic properties of Markov stochastic processes. In Section 4.2 we present various examples of Markov processes, in discrete and continuous time. In Section 4.3 we give the precise definition of a Markov process. In Section 4.4 we derive the Chapman-Kolmogorov equation, the fundamental equation in the theory of Markov processes. In Section 4.5 we introduce the concept of the generator of a Markov process. In Section 4.6 we study ergodic Markov processes. Discussion and bibliographical remarks are presented in Section 4.7 and exercises can be found in Section 4.8.

4.2 Examples

Roughly speaking, a Markov process is a stochastic process that retains no memory of where it has been in the past: only the current state of a Markov process can influence where it will go next. A bit more precisely: a Markov process is a stochastic process for which, given the present, the past and the future are statistically independent.
Perhaps the simplest example of a Markov process is that of a random walk in one dimension. We defined the one dimensional random walk as the sum of independent, mean zero and variance $1$ random variables $\xi_i$, $i = 1, \dots$:
\[ X_N = \sum_{n=1}^{N} \xi_n, \qquad X_0 = 0. \]
Let $i_1, i_2, \dots$ be a sequence of integers. Then, for all integers $n$ and $m$ we have that
\[ P(X_{n+m} = i_{n+m} \,|\, X_1 = i_1, \dots, X_n = i_n) = P(X_{n+m} = i_{n+m} \,|\, X_n = i_n).^1 \qquad (4.1) \]
In words, the probability that the random walk will be at $i_{n+m}$ at time $n+m$ depends only on its current value (at time $n$) and not on how it got there.
The random walk is an example of a discrete time Markov chain:

Definition 4.2.1. A stochastic process $\{S_n;\ n \in \mathbb{N}\}$ with state space $S = \mathbb{Z}$ is called a discrete time Markov chain provided that the Markov property (4.1) is satisfied.

Consider now a continuous-time stochastic process $X_t$ with state space $S = \mathbb{Z}$ and denote by $\{X_s,\ s \leq t\}$ the collection of values of the stochastic process up to time $t$. We will say that $X_t$ is a Markov process provided that
\[ P(X_{t+h} = i_{t+h} \,|\, X_s,\ s \leq t) = P(X_{t+h} = i_{t+h} \,|\, X_t = i_t) \qquad (4.2) \]
for all $h \geq 0$. A continuous-time, discrete state space Markov process is called a continuous-time Markov chain.
Example 4.2.2. The Poisson process is a continuous-time Markov chain with
\[ P(N_{t+h} = j \,|\, N_t = i) = \begin{cases} 0, & j < i, \\ \dfrac{e^{-\lambda h} (\lambda h)^{j-i}}{(j-i)!}, & j \geq i. \end{cases} \]
Similarly, we can define a continuous-time Markov process whose state space is $\mathbb{R}$. In this case, the above definitions become
\[ P(X_{t+h} \in \Gamma \,|\, X_s,\ s \leq t) = P(X_{t+h} \in \Gamma \,|\, X_t = x) \qquad (4.3) \]
for all Borel sets $\Gamma$.
Example 4.2.3. The Brownian motion is a Markov process with conditional probability density
\[ p(y, t \,|\, x, s) := p(W_t = y \,|\, W_s = x) = \frac{1}{\sqrt{2\pi (t-s)}} \exp\left( -\frac{|x-y|^2}{2(t-s)} \right). \qquad (4.4) \]

¹ In fact, it is sufficient to take $m = 1$ in (4.1). See Exercise 1.
Example 4.2.4. The Ornstein-Uhlenbeck process $V_t = e^{-t} W(e^{2t})$ is a Markov process with conditional probability density
\[ p(y, t \,|\, x, s) := p(V_t = y \,|\, V_s = x) = \frac{1}{\sqrt{2\pi (1 - e^{-2(t-s)})}} \exp\left( -\frac{|y - x e^{-(t-s)}|^2}{2(1 - e^{-2(t-s)})} \right). \qquad (4.5) \]
To prove (4.5) we use the formula for the distribution function of the Brownian motion to calculate, for $t > s$,
\begin{align*}
P(V_t \leq y \,|\, V_s = x) &= P(e^{-t} W(e^{2t}) \leq y \,|\, e^{-s} W(e^{2s}) = x) \\
&= P(W(e^{2t}) \leq e^{t} y \,|\, W(e^{2s}) = e^{s} x) \\
&= \int_{-\infty}^{e^t y} \frac{1}{\sqrt{2\pi (e^{2t} - e^{2s})}} \, e^{-\frac{|z - x e^{s}|^2}{2(e^{2t} - e^{2s})}}\, dz \\
&= \int_{-\infty}^{y} \frac{e^{t}}{\sqrt{2\pi e^{2t}(1 - e^{-2(t-s)})}} \, e^{-\frac{|e^{t}\rho - x e^{s}|^2}{2 e^{2t}(1 - e^{-2(t-s)})}}\, d\rho \\
&= \int_{-\infty}^{y} \frac{1}{\sqrt{2\pi (1 - e^{-2(t-s)})}} \, e^{-\frac{|\rho - x e^{-(t-s)}|^2}{2(1 - e^{-2(t-s)})}}\, d\rho,
\end{align*}
where we used the change of variables $z = e^{t}\rho$ and the identity $e^{2t} - e^{2s} = e^{2t}(1 - e^{-2(t-s)})$. Consequently, the transition probability density for the OU process is given by the formula
\[ p(y, t \,|\, x, s) = \frac{\partial}{\partial y} P(V_t \leq y \,|\, V_s = x) = \frac{1}{\sqrt{2\pi (1 - e^{-2(t-s)})}} \exp\left( -\frac{|y - x e^{-(t-s)}|^2}{2(1 - e^{-2(t-s)})} \right). \]
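The transition density (4.5) can be used to sample the OU process exactly on a time grid, without any discretization error. A minimal sketch (not from the text; step size, number of steps and sample size are arbitrary): given $V_s = x$, the value $V_t$ is Gaussian with mean $x e^{-(t-s)}$ and variance $1 - e^{-2(t-s)}$.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n, paths = 0.1, 200, 10000
V = rng.normal(size=paths)                       # start from the stationary N(0, 1)
a, b = np.exp(-dt), np.sqrt(1.0 - np.exp(-2 * dt))
for _ in range(n):
    V = a * V + b * rng.normal(size=paths)       # exact one-step transition from (4.5)
print(V.mean(), V.var())                         # approximately 0 and 1
```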
Markov stochastic processes appear in a variety of applications in physics, chemistry, biology and finance. In this and the next chapter we will develop various analytical tools for studying them. In particular, we will see that we can obtain an equation for the transition probability
\[ P(X_{n+1} = i_{n+1} \,|\, X_n = i_n), \qquad P(X_{t+h} = i_{t+h} \,|\, X_t = i_t), \qquad p(X_{t+h} = y \,|\, X_t = x), \qquad (4.6) \]
which will enable us to study the evolution of a Markov process. This equation will be called the Chapman-Kolmogorov equation.

We will be mostly concerned with time-homogeneous Markov processes, i.e. processes for which the conditional probabilities are invariant under time shifts. For time-homogeneous discrete-time Markov chains we have
\[ P(X_{n+1} = j \,|\, X_n = i) = P(X_1 = j \,|\, X_0 = i) =: p_{ij}. \]
We will refer to the matrix $P = \{p_{ij}\}$ as the transition matrix. It is easy to check that the transition matrix is a stochastic matrix, i.e. it has nonnegative entries and $\sum_j p_{ij} = 1$. Similarly, we can define the $n$-step transition matrix $P_n = \{p_{ij}(n)\}$ as
\[ p_{ij}(n) = P(X_{m+n} = j \,|\, X_m = i). \]
We can study the evolution of a Markov chain through the Chapman-Kolmogorov equation:
\[ p_{ij}(m+n) = \sum_k p_{ik}(m)\, p_{kj}(n). \qquad (4.7) \]
Indeed, let $\mu_i^{(n)} := P(X_n = i)$. The (possibly infinite dimensional) vector $\mu^{(n)}$ determines the state of the Markov chain at time $n$. A simple consequence of the Chapman-Kolmogorov equation is that we can write an evolution equation for the vector $\mu^{(n)}$:
\[ \mu^{(n)} = \mu^{(0)} P^n, \qquad (4.8) \]
where $P^n$ denotes the $n$-th power of the matrix $P$. Hence in order to calculate the state of the Markov chain at time $n$ all we need is the initial distribution $\mu^{(0)}$ and the transition matrix $P$. Componentwise, the above equation can be written as
\[ \mu_j^{(n)} = \sum_i \mu_i^{(0)}\, p_{ij}(n). \]
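A small illustration of (4.8) (not from the text; the 3-state transition matrix below is a hypothetical example): the distribution at time $n$ is the initial row vector multiplied by the $n$-th power of $P$.

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])                   # a stochastic matrix: rows sum to 1
mu0 = np.array([1.0, 0.0, 0.0])                   # start in state 1 with probability 1
mu = mu0 @ np.linalg.matrix_power(P, 50)          # mu^{(50)} = mu^{(0)} P^{50}
print(mu, mu.sum())                               # a probability vector, close to the invariant one
```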
Consider now a continuous time Markov chain with transition probability
\[ p_{ij}(s, t) = P(X_t = j \,|\, X_s = i), \qquad s \leq t. \]
If the chain is homogeneous, then
\[ p_{ij}(s, t) = p_{ij}(0, t-s) \quad \text{for all } i, j, s, t. \]
In particular,
\[ p_{ij}(t) = P(X_t = j \,|\, X_0 = i). \]
The Chapman-Kolmogorov equation for a continuous time Markov chain is
\[ \frac{dp_{ij}}{dt} = \sum_k p_{ik}(t)\, g_{kj}, \qquad (4.9) \]
where the matrix $G$ is called the generator of the Markov chain. Equation (4.9) can also be written in matrix notation:
\[ \frac{dP}{dt} = P_t\, G. \]
The generator of the Markov chain is defined as
\[ G = \lim_{h \to 0} \frac{1}{h} (P_h - I). \]
Let now $\mu_t^i = P(X_t = i)$. The vector $\mu_t$ is the distribution of the Markov chain at time $t$. We can study its evolution using the equation
\[ \mu_t = \mu_0 P_t. \]
Thus, as in the case of discrete time Markov chains, the evolution of a continuous time Markov chain is completely determined by the initial distribution and the transition matrix.
Consider now the case of a continuous time Markov process with continuous state space and with continuous paths. As we have seen in Example 4.2.3, the Brownian motion is an example of such a process. It is a standard result in the theory of partial differential equations that the conditional probability density of the Brownian motion (4.4) is the fundamental solution of the diffusion equation:
\[ \frac{\partial p}{\partial t} = \frac{1}{2} \frac{\partial^2 p}{\partial y^2}, \qquad \lim_{t \to s} p(y, t \,|\, x, s) = \delta(y - x). \qquad (4.10) \]
Similarly, the conditional distribution of the OU process satisfies the initial value problem
\[ \frac{\partial p}{\partial t} = \frac{\partial (y p)}{\partial y} + \frac{1}{2} \frac{\partial^2 p}{\partial y^2}, \qquad \lim_{t \to s} p(y, t \,|\, x, s) = \delta(y - x). \qquad (4.11) \]
The Brownian motion and the OU process are examples of a diffusion process. A diffusion process is a continuous time Markov process with continuous paths. We will see in Chapter 5 that the conditional probability density $p(y, t \,|\, x, s)$ of a diffusion process satisfies the forward Kolmogorov or Fokker-Planck equation
\[ \frac{\partial p}{\partial t} = -\frac{\partial}{\partial y}\big( a(y,t)\, p \big) + \frac{1}{2} \frac{\partial^2}{\partial y^2}\big( b(y,t)\, p \big), \qquad \lim_{t \to s} p(y, t \,|\, x, s) = \delta(y - x), \qquad (4.12) \]
as well as the backward Kolmogorov equation
\[ -\frac{\partial p}{\partial s} = a(x, s) \frac{\partial p}{\partial x} + \frac{1}{2} b(x, s) \frac{\partial^2 p}{\partial x^2}, \qquad \lim_{t \to s} p(y, t \,|\, x, s) = \delta(y - x), \qquad (4.13) \]
for appropriate functions $a(y,t)$, $b(y,t)$. Hence, a diffusion process is determined uniquely from these two functions.
4.3 Definition of a Markov Process

In Section 4.1 we gave the definition of a Markov process whose time is either discrete or continuous, and whose state space is the set of integers. We also gave several examples of Markov chains as well as of processes whose state space is the real line. In this section we give the precise definition of a Markov process with $t \in T$, a general index set, and $S = E$, an arbitrary metric space. We will use this formulation in the next section to derive the Chapman-Kolmogorov equation.

In order to state the definition of a continuous-time Markov process that takes values in a metric space we need to introduce various new concepts. For the definition of a Markov process we need to use the conditional expectation of the stochastic process conditioned on all past values. We can encode all past information about a stochastic process into an appropriate collection of $\sigma$-algebras. Our setting will be that we have a probability space $(\Omega, \mathcal{F}, P)$ and an ordered set $T$. Let $X = X_t(\omega)$ be a stochastic process from the sample space $(\Omega, \mathcal{F})$ to the state space $(E, \mathcal{G})$, where $E$ is a metric space (we will usually take $E$ to be either $\mathbb{R}$ or $\mathbb{R}^d$). Remember that the stochastic process is a function of two variables, $t \in T$ and $\omega \in \Omega$.

We start with the definition of a $\sigma$-algebra generated by a collection of sets.

Definition 4.3.1. Let $\mathcal{A}$ be a collection of subsets of $\Omega$. The smallest $\sigma$-algebra on $\Omega$ which contains $\mathcal{A}$ is denoted by $\sigma(\mathcal{A})$ and is called the $\sigma$-algebra generated by $\mathcal{A}$.

Definition 4.3.2. Let $X_t : \Omega \to E$, $t \in T$. The smallest $\sigma$-algebra $\sigma(X_t, t \in T)$, such that the family of mappings $\{X_t, t \in T\}$ is a stochastic process with sample space $(\Omega, \sigma(X_t, t \in T))$ and state space $(E, \mathcal{G})$, is called the $\sigma$-algebra generated by $\{X_t, t \in T\}$.

In other words, the $\sigma$-algebra generated by $X_t$ is the smallest $\sigma$-algebra such that $X_t$ is a measurable function (random variable) with respect to it: the set
\[ \{ \omega \in \Omega : X_t(\omega) \leq x \} \in \sigma(X_t, t \in T) \]
for all $x \in \mathbb{R}$ (we have assumed that $E = \mathbb{R}$).

Definition 4.3.3. A filtration on $(\Omega, \mathcal{F})$ is a nondecreasing family $\{\mathcal{F}_t,\ t \in T\}$ of sub-$\sigma$-algebras of $\mathcal{F}$: $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for $s \leq t$.

We set $\mathcal{F}_\infty = \sigma(\cup_{t \in T} \mathcal{F}_t)$. The filtration generated by $X_t$, where $X_t$ is a stochastic process, is
\[ \mathcal{F}_t^X := \sigma(X_s;\ s \leq t). \]
Definition 4.3.4. A stochastic process $\{X_t;\ t \in T\}$ is adapted to the filtration $\{\mathcal{F}_t\} := \{\mathcal{F}_t,\ t \in T\}$ if for all $t \in T$, $X_t$ is an $\mathcal{F}_t$-measurable random variable.

Definition 4.3.5. Let $X_t$ be a stochastic process defined on a probability space $(\Omega, \mathcal{F}, \mu)$ with values in $E$ and let $\mathcal{F}_t^X$ be the filtration generated by $\{X_t;\ t \in T\}$. Then $\{X_t;\ t \in T\}$ is a Markov process if
\[ P(X_t \in \Gamma \,|\, \mathcal{F}_s^X) = P(X_t \in \Gamma \,|\, X_s) \qquad (4.14) \]
for all $t, s \in T$ with $t \geq s$, and $\Gamma \in \mathcal{B}(E)$.

Remark 4.3.6. The filtration $\mathcal{F}_t^X$ is generated by events of the form $\{\omega \,|\, X_{s_1} \in B_1, X_{s_2} \in B_2, \dots, X_{s_n} \in B_n\}$, with $0 \leq s_1 < s_2 < \dots < s_n \leq s$ and $B_i \in \mathcal{B}(E)$. The definition of a Markov process is thus equivalent to the hierarchy of equations
\[ P(X_t \in \Gamma \,|\, X_{t_1}, X_{t_2}, \dots, X_{t_n}) = P(X_t \in \Gamma \,|\, X_{t_n}) \quad \text{a.s.} \]
for $n \geq 1$, $0 \leq t_1 < t_2 < \dots < t_n \leq t$ and $\Gamma \in \mathcal{B}(E)$.

Roughly speaking, the statistics of $X_t$ for $t \geq s$ are completely determined once $X_s$ is known; information about $X_t$ for $t < s$ is superfluous. In other words: a Markov process has no memory. More precisely: when a Markov process is conditioned on the present state, then there is no memory of the past. The past and future of a Markov process are statistically independent when the present is known.

Remark 4.3.7. A non-Markovian process $X_t$ can be described through a Markovian one $Y_t$ by enlarging the state space: the additional variables that we introduce account for the memory in $X_t$. This "Markovianization" trick is very useful since there exist many analytical tools for analyzing Markovian processes.

Example 4.3.8. The velocity of a Brownian particle is modeled by the stationary Ornstein-Uhlenbeck process $Y_t = e^{-t} W(e^{2t})$. The particle position is given by the integral of the OU process (we take $X_0 = 0$)
\[ X_t = \int_0^t Y_s\,ds. \]
The particle position depends on the past of the OU process and, consequently, is not a Markov process. However, the joint position-velocity process $\{X_t, Y_t\}$ is. Its transition probability density $p(x, y, t \,|\, x_0, y_0)$ satisfies the forward Kolmogorov equation
\[ \frac{\partial p}{\partial t} = -y \frac{\partial p}{\partial x} + \frac{\partial}{\partial y}(y p) + \frac{1}{2} \frac{\partial^2 p}{\partial y^2}. \]
4.4 The Chapman-Kolmogorov Equation

With a Markov process $X_t$ we can associate a function $P : T \times T \times E \times \mathcal{B}(E) \to \mathbb{R}^+$ defined through the relation
\[ P\big( X_t \in \Gamma \,|\, \mathcal{F}_s^X \big) = P(s, t, X_s, \Gamma), \]
for all $t, s \in T$ with $t \geq s$ and all $\Gamma \in \mathcal{B}(E)$. Assume that $X_s = x$. Since $P\big( X_t \in \Gamma \,|\, \mathcal{F}_s^X \big) = P\big[ X_t \in \Gamma \,|\, X_s \big]$ we can write
\[ P(\Gamma, t \,|\, x, s) = P\big[ X_t \in \Gamma \,|\, X_s = x \big]. \]
The transition function $P(t, \Gamma \,|\, x, s)$ is (for fixed $t$, $x$, $s$) a probability measure on $E$ with $P(t, E \,|\, x, s) = 1$; it is $\mathcal{B}(E)$-measurable in $x$ (for fixed $t$, $s$, $\Gamma$) and satisfies the Chapman-Kolmogorov equation
\[ P(\Gamma, t \,|\, x, s) = \int_E P(\Gamma, t \,|\, y, u) P(dy, u \,|\, x, s) \qquad (4.15) \]
for all $x \in E$, $\Gamma \in \mathcal{B}(E)$ and $s, u, t \in T$ with $s \leq u \leq t$. The derivation of the Chapman-Kolmogorov equation is based on the assumption of Markovianity and on properties of the conditional probability. Let $(\Omega, \mathcal{F}, \mu)$ be a probability space, $X$ a random variable from $(\Omega, \mathcal{F}, \mu)$ to $(E, \mathcal{G})$ and let $\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \mathcal{F}$. Then (see Theorem 2.4.1)
\[ E(E(X \,|\, \mathcal{F}_2) \,|\, \mathcal{F}_1) = E(E(X \,|\, \mathcal{F}_1) \,|\, \mathcal{F}_2) = E(X \,|\, \mathcal{F}_1). \qquad (4.16) \]
Given $\mathcal{G} \subseteq \mathcal{F}$ we define the function $P_X(B \,|\, \mathcal{G}) = P(X \in B \,|\, \mathcal{G})$ for $B \in \mathcal{F}$. Assume that $f$ is such that $E(f(X)) < \infty$. Then
\[ E(f(X) \,|\, \mathcal{G}) = \int_{\mathbb{R}} f(x) P_X(dx \,|\, \mathcal{G}). \qquad (4.17) \]
Now we use the Markov property, together with equations (4.16) and (4.17) and the fact that $s < u$ implies $\mathcal{F}_s^X \subseteq \mathcal{F}_u^X$, to calculate:
\begin{align*}
P(\Gamma, t \,|\, x, s) &:= P(X_t \in \Gamma \,|\, X_s = x) = P(X_t \in \Gamma \,|\, \mathcal{F}_s^X) \\
&= E(I_\Gamma(X_t) \,|\, \mathcal{F}_s^X) = E(E(I_\Gamma(X_t) \,|\, \mathcal{F}_s^X) \,|\, \mathcal{F}_u^X) \\
&= E(E(I_\Gamma(X_t) \,|\, \mathcal{F}_u^X) \,|\, \mathcal{F}_s^X) = E(P(X_t \in \Gamma \,|\, X_u) \,|\, \mathcal{F}_s^X) \\
&= E(P(X_t \in \Gamma \,|\, X_u = y) \,|\, X_s = x) \\
&= \int_{\mathbb{R}} P(\Gamma, t \,|\, X_u = y) P(dy, u \,|\, X_s = x) \\
&=: \int_{\mathbb{R}} P(\Gamma, t \,|\, y, u) P(dy, u \,|\, x, s).
\end{align*}
$I_\Gamma(\cdot)$ denotes the indicator function of the set $\Gamma$. We have also set $E = \mathbb{R}$. The CK equation is an integral equation and is the fundamental equation in the theory of Markov processes. Under additional assumptions we will derive from it the Fokker-Planck PDE, which is the fundamental equation in the theory of diffusion processes, and will be the main object of study in this course.

Definition 4.4.1. A Markov process is homogeneous if
\[ P(t, \Gamma \,|\, X_s = x) := P(s, t, x, \Gamma) = P(0, t-s, x, \Gamma). \]

We set $P(0, t, \cdot, \cdot) = P(t, \cdot, \cdot)$. The Chapman-Kolmogorov (CK) equation becomes
\[ P(t+s, x, \Gamma) = \int_E P(s, x, dz) P(t, z, \Gamma). \qquad (4.18) \]
Let $X_t$ be a homogeneous Markov process and assume that the initial distribution of $X_t$ is given by the probability measure $\nu(\Gamma) = P(X_0 \in \Gamma)$ (for deterministic initial conditions $X_0 = x$ we have that $\nu(\Gamma) = I_\Gamma(x)$). The transition function $P(x, t, \Gamma)$ and the initial distribution $\nu$ determine the finite dimensional distributions of $X$ by
\[ P(X_0 \in \Gamma_0, X(t_1) \in \Gamma_1, \dots, X_{t_n} \in \Gamma_n) = \int_{\Gamma_0} \int_{\Gamma_1} \dots \int_{\Gamma_{n-1}} P(t_n - t_{n-1}, y_{n-1}, \Gamma_n) P(t_{n-1} - t_{n-2}, y_{n-2}, dy_{n-1}) \cdots P(t_1, y_0, dy_1)\, \nu(dy_0). \qquad (4.19) \]

Theorem 4.4.2. ([21, Sec. 4.1]) Let $P(t, x, \Gamma)$ satisfy (4.18) and assume that $(E, \rho)$ is a complete separable metric space. Then there exists a Markov process $X$ in $E$ whose finite-dimensional distributions are uniquely determined by (4.19).
Let $X_t$ be a homogeneous Markov process with initial distribution $\nu(\Gamma) = P(X_0 \in \Gamma)$ and transition function $P(x, t, \Gamma)$. We can calculate the probability of finding $X_t$ in a set $\Gamma$ at time $t$:
\[ P(X_t \in \Gamma) = \int_E P(x, t, \Gamma)\, \nu(dx). \]
Thus, the initial distribution and the transition function are sufficient to characterize a homogeneous Markov process. Notice that they do not provide us with any information about the actual paths of the Markov process. The transition probability $P(\Gamma, t \,|\, x, s)$ is a probability measure. Assume that it has a density for all $t > s$:
\[ P(\Gamma, t \,|\, x, s) = \int_\Gamma p(y, t \,|\, x, s)\,dy. \]
Clearly, for $t = s$ we have $P(\Gamma, s \,|\, x, s) = I_\Gamma(x)$. The Chapman-Kolmogorov equation becomes
\[ \int_\Gamma p(y, t \,|\, x, s)\,dy = \int_{\mathbb{R}} \int_\Gamma p(y, t \,|\, z, u)\, p(z, u \,|\, x, s)\,dz\,dy, \]
and, since $\Gamma \in \mathcal{B}(\mathbb{R})$ is arbitrary, we obtain the equation
\[ p(y, t \,|\, x, s) = \int_{\mathbb{R}} p(y, t \,|\, z, u)\, p(z, u \,|\, x, s)\,dz. \qquad (4.20) \]
The transition probability density is a function of 4 arguments: the initial position and time $x, s$ and the final position and time $y, t$.

In words, the CK equation tells us that, for a Markov process, the transition from $x, s$ to $y, t$ can be done in two steps: first the system moves from $x$ to $z$ at some intermediate time $u$. Then it moves from $z$ to $y$ at time $t$. In order to calculate the probability for the transition from $(x, s)$ to $(y, t)$ we need to sum (integrate) the transitions from all possible intermediate states $z$. The above description suggests that a Markov process can be described through a semigroup of operators, i.e. a one-parameter family of linear operators with the properties
\[ P_0 = I, \qquad P_{t+s} = P_t \circ P_s \quad \forall\, t, s \geq 0. \]
Indeed, let $P(t, x, dy)$ be the transition function of a homogeneous Markov process. It satisfies the CK equation (4.18):
\[ P(t+s, x, \Gamma) = \int_E P(s, x, dz) P(t, z, \Gamma). \]
Let $X := C_b(E)$ and define the operator
\[ (P_t f)(x) := E(f(X_t) \,|\, X_0 = x) = \int_E f(y) P(t, x, dy). \]
This is a linear operator with
\[ (P_0 f)(x) = E(f(X_0) \,|\, X_0 = x) = f(x), \]
so that $P_0 = I$. Furthermore,
\begin{align*}
(P_{t+s} f)(x) &= \int f(y) P(t+s, x, dy) = \int \int f(y) P(s, z, dy) P(t, x, dz) \\
&= \int \Big( \int f(y) P(s, z, dy) \Big) P(t, x, dz) = \int (P_s f)(z) P(t, x, dz) = (P_t \circ P_s f)(x).
\end{align*}
Consequently,
\[ P_{t+s} = P_t \circ P_s. \]
4.5 The Generator of a Markov Process

Let $(E, \rho)$ be a metric space and let $X_t$ be an $E$-valued homogeneous Markov process. Define the one parameter family of operators $P_t$ through
\[ P_t f(x) = \int f(y) P(t, x, dy) = E[f(X_t) \,|\, X_0 = x] \]
for all $f(x) \in C_b(E)$ (continuous bounded functions on $E$). Assume for simplicity that $P_t : C_b(E) \to C_b(E)$. Then the one-parameter family of operators $P_t$ forms a semigroup of operators on $C_b(E)$. We define by $\mathcal{D}(\mathcal{L})$ the set of all $f \in C_b(E)$ such that the strong limit
\[ \mathcal{L}f = \lim_{t \to 0} \frac{P_t f - f}{t} \]
exists.

Definition 4.5.1. The operator $\mathcal{L} : \mathcal{D}(\mathcal{L}) \to C_b(E)$ is called the infinitesimal generator of the operator semigroup $P_t$.
Definition 4.5.2. The operator $\mathcal{L} : C_b(E) \to C_b(E)$ defined above is called the generator of the Markov process $\{X_t;\ t \geq 0\}$.

The semigroup property and the definition of the generator of a semigroup imply that, formally at least, we can write
\[ P_t = \exp(\mathcal{L}t). \]
Consider the function $u(x, t) := (P_t f)(x)$. We calculate its time derivative:
\[ \frac{\partial u}{\partial t} = \frac{d}{dt}(P_t f) = \frac{d}{dt}\big( e^{\mathcal{L}t} f \big) = \mathcal{L}\big( e^{\mathcal{L}t} f \big) = \mathcal{L} P_t f = \mathcal{L} u. \]
Furthermore, $u(x, 0) = P_0 f(x) = f(x)$. Consequently, $u(x, t)$ satisfies the initial value problem
\[ \frac{\partial u}{\partial t} = \mathcal{L}u, \qquad u(x, 0) = f(x). \qquad (4.21) \]
When the semigroup $P_t$ is the transition semigroup of a Markov process $X_t$, then equation (4.21) is called the backward Kolmogorov equation. It governs the evolution of an observable
\[ u(x, t) = E(f(X_t) \,|\, X_0 = x). \]
Thus, given the generator of a Markov process $\mathcal{L}$, we can calculate all the statistics of our process by solving the backward Kolmogorov equation. In the case where the Markov process is the solution of a stochastic differential equation, then the generator is a second order elliptic operator and the backward Kolmogorov equation becomes an initial value problem for a parabolic PDE.
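The following sketch (not from the text) illustrates (4.21) for the simplest case, Brownian motion, whose generator is $\mathcal{L} = \frac{1}{2}\frac{d^2}{dx^2}$ (see Example 4.5.4 below). The observable $f(x) = \cos x$, the point $x = 0.3$, the time $t = 0.5$ and the finite-difference grid are all arbitrary choices; the exact answer is $e^{-t/2}\cos x$.

```python
import numpy as np

f = lambda x: np.cos(x)
x, t = 0.3, 0.5

# Monte Carlo evaluation of the semigroup (P_t f)(x) = E f(x + W_t)
samples = x + np.sqrt(t) * np.random.default_rng(7).normal(size=200000)
mc = f(samples).mean()

# explicit finite differences for u_t = 0.5 u_xx on a large interval
xs = np.linspace(-10.0, 10.0, 2001); dx = xs[1] - xs[0]; dt = 0.4 * dx**2
u = f(xs)
for _ in range(int(t / dt)):
    u[1:-1] += 0.5 * dt * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2

print(mc, np.interp(x, xs, u), np.cos(x) * np.exp(-t / 2))   # three values should agree
```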
The space $C_b(E)$ is natural in a probabilistic context, but other Banach spaces often arise in applications; in particular, when there is a measure $\mu$ on $E$, the spaces $L^p(E; \mu)$ sometimes arise. We will quite often use the space $L^2(E; \mu)$, where $\mu$ is the invariant measure of our Markov process. The generator is frequently taken as the starting point for the definition of a homogeneous Markov process. Conversely, let $P_t$ be a contraction semigroup (let $X$ be a Banach space and $T : X \to X$ a bounded operator; then $T$ is a contraction provided that $\|Tf\|_X \leq \|f\|_X$ for all $f \in X$), with $\mathcal{D}(P_t) \subseteq C_b(E)$, closed. Then, under mild technical hypotheses, there is an $E$-valued homogeneous Markov process $X_t$ associated with $P_t$ defined through
\[ E[f(X(t)) \,|\, \mathcal{F}_s^X] = P_{t-s} f(X(s)) \]
for all $t, s \in T$ with $t \geq s$ and $f \in \mathcal{D}(P_t)$.
Example 4.5.3. The Poisson process is a homogeneous Markov process.

Example 4.5.4. The one dimensional Brownian motion is a homogeneous Markov process. The transition function is the Gaussian defined in the example in Lecture 2:
\[ P(t, x, dy) = \gamma_{t,x}(y)\,dy, \qquad \gamma_{t,x}(y) = \frac{1}{\sqrt{2\pi t}} \exp\left( -\frac{|x-y|^2}{2t} \right). \]
The semigroup associated to the standard Brownian motion is the heat semigroup $P_t = e^{\frac{t}{2}\frac{d^2}{dx^2}}$. The generator of this Markov process is $\frac{1}{2}\frac{d^2}{dx^2}$.

Notice that the transition probability density $\gamma_{t,x}$ of the one dimensional Brownian motion is the fundamental solution (Green's function) of the heat (diffusion) PDE
\[ \frac{\partial u}{\partial t} = \frac{1}{2} \frac{\partial^2 u}{\partial x^2}. \]
4.5.1 The Adjoint Semigroup

The semigroup $P_t$ acts on bounded measurable functions. We can also define the adjoint semigroup $P_t^*$ which acts on probability measures:
\[ P_t^* \mu(\Gamma) = \int_{\mathbb{R}} P(X_t \in \Gamma \,|\, X_0 = x)\,d\mu(x) = \int_{\mathbb{R}} p(t, x, \Gamma)\,d\mu(x). \]
The image of a probability measure under $P_t^*$ is again a probability measure. The operators $P_t$ and $P_t^*$ are adjoint in the $L^2$-sense:
\[ \int_{\mathbb{R}} P_t f(x)\,d\mu(x) = \int_{\mathbb{R}} f(x)\,d(P_t^* \mu)(x). \qquad (4.22) \]
We can, formally at least, write
\[ P_t^* = \exp(\mathcal{L}^* t), \]
where $\mathcal{L}^*$ is the $L^2$-adjoint of the generator of the process:
\[ \int \mathcal{L}f\, h\,dx = \int f\, \mathcal{L}^* h\,dx. \]
Let $\mu_t := P_t^* \mu$. This is the law of the Markov process and $\mu$ is the initial distribution. An argument similar to the one used in the derivation of the backward Kolmogorov equation (4.21) enables us to obtain an equation for the evolution of $\mu_t$:
\[ \frac{\partial \mu_t}{\partial t} = \mathcal{L}^* \mu_t, \qquad \mu_0 = \mu. \]
Assuming that $\mu_t = \rho(y, t)\,dy$ and $\mu = \rho_0(y)\,dy$, this equation becomes
\[ \frac{\partial \rho}{\partial t} = \mathcal{L}^* \rho, \qquad \rho(y, 0) = \rho_0(y). \qquad (4.23) \]
This is the forward Kolmogorov or Fokker-Planck equation. When the initial conditions are deterministic, $X_0 = x$, the initial condition becomes $\rho_0 = \delta(y - x)$. Given the initial distribution and the generator of the Markov process $X_t$, we can calculate the transition probability density by solving the forward Kolmogorov equation. We can then calculate all statistical quantities of this process through the formula
\[ E(f(X_t) \,|\, X_0 = x) = \int f(y) \rho(t, y; x)\,dy. \]
We will derive rigorously the backward and forward Kolmogorov equations for Markov processes that are defined as solutions of stochastic differential equations later on.

We can study the evolution of a Markov process in two different ways: either through the evolution of observables (Heisenberg/Koopman),
\[ \frac{\partial (P_t f)}{\partial t} = \mathcal{L}(P_t f), \]
or through the evolution of states (Schrödinger/Frobenius-Perron),
\[ \frac{\partial (P_t^* \mu)}{\partial t} = \mathcal{L}^*(P_t^* \mu). \]
We can also study Markov processes at the level of trajectories. We will do this after we define the concept of a stochastic differential equation.
4.6 Ergodic Markov processes

A very important concept in the study of limit theorems for stochastic processes is that of ergodicity. This concept, in the context of Markov processes, provides us with information on the long-time behavior of a Markov semigroup.

Definition 4.6.1. A Markov process is called ergodic if the equation
\[ P_t g = g, \qquad g \in C_b(E), \quad \forall t \geq 0, \]
has only constant solutions.
Roughly speaking, ergodicity corresponds to the case where the semigroup $P_t$ is such that $P_t - I$ has only constants in its null space, or, equivalently, to the case where the generator $\mathcal{L}$ has only constants in its null space. This follows from the definition of the generator of a Markov process.

Under some additional compactness assumptions, an ergodic Markov process has an invariant measure $\mu$ with the property that, in the case $T = \mathbb{R}^+$,
\[ \lim_{t \to +\infty} \frac{1}{t} \int_0^t g(X_s)\,ds = E g(x), \]
where $E$ denotes the expectation with respect to $\mu$. This is a physicist's definition of an ergodic process: time averages equal phase space averages.

Using the adjoint semigroup we can define an invariant measure as the solution of the equation
\[ P_t^* \mu = \mu. \]
If this measure is unique, then the Markov process is ergodic. Using this, we can obtain an equation for the invariant measure in terms of the adjoint of the generator $\mathcal{L}^*$, which is the generator of the semigroup $P_t^*$. Indeed, from the definition of the generator of a semigroup and the definition of an invariant measure, we conclude that a measure $\mu$ is invariant if and only if
\[ \mathcal{L}^* \mu = 0 \]
in some appropriate generalized sense ($(\mathcal{L}^* \mu, f) = 0$ for every bounded measurable function). Assume that $\mu(dx) = \rho(x)\,dx$. Then the invariant density satisfies the stationary Fokker-Planck equation
\[ \mathcal{L}^* \rho = 0. \]
The invariant measure (distribution) governs the long-time dynamics of the Markov process.
4.6.1 Stationary Markov Processes

If $X_0$ is distributed according to $\mu$, then so is $X_t$ for all $t > 0$. The resulting stochastic process, with $X_0$ distributed in this way, is stationary. In this case the transition probability density (the solution of the Fokker-Planck equation) is independent of time: $\rho(x, t) = \rho(x)$. Consequently, the statistics of the Markov process are independent of time.
Example 4.6.2. Consider the one-dimensional Brownian motion. The generator of this Markov process is
\[ \mathcal{L} = \frac{1}{2} \frac{d^2}{dx^2}. \]
The stationary Fokker-Planck equation becomes
\[ \frac{d^2 \rho}{dx^2} = 0, \qquad (4.24) \]
together with the normalization and non-negativity conditions
\[ \rho \geq 0, \qquad \int_{\mathbb{R}} \rho(x)\,dx = 1. \qquad (4.25) \]
There are no solutions to Equation (4.24) subject to the constraints (4.25).² Thus, the one dimensional Brownian motion is not an ergodic process.

² The general solution to Equation (4.24) is $\rho(x) = Ax + B$ for arbitrary constants $A$ and $B$. This function is not normalizable, i.e. there do not exist constants $A$ and $B$ so that $\int_{\mathbb{R}} (Ax + B)\,dx = 1$.

Example 4.6.3. Consider a one-dimensional Brownian motion on $[0,1]$, with periodic boundary conditions. The generator of this Markov process $\mathcal{L}$ is the differential operator $\mathcal{L} = \frac{1}{2}\frac{d^2}{dx^2}$, equipped with periodic boundary conditions on $[0,1]$. This operator is self-adjoint. The null space of both $\mathcal{L}$ and $\mathcal{L}^*$ comprises constant functions on $[0,1]$. Both the backward Kolmogorov and the Fokker-Planck equation reduce to the heat equation
\[ \frac{\partial \rho}{\partial t} = \frac{1}{2} \frac{\partial^2 \rho}{\partial x^2} \]
with periodic boundary conditions in $[0,1]$. Fourier analysis shows that the solution converges to a constant at an exponential rate. See Exercise 6.

Example 4.6.4. The one dimensional Ornstein-Uhlenbeck (OU) process is a Markov process with generator
\[ \mathcal{L} = -\alpha x \frac{d}{dx} + D \frac{d^2}{dx^2}. \]
The null space of $\mathcal{L}$ comprises constants in $x$. Hence, it is an ergodic Markov process. In order to calculate the invariant measure we need to solve the stationary Fokker-Planck equation:
\[ \mathcal{L}^* \rho = 0, \qquad \rho \geq 0, \qquad \|\rho\|_{L^1(\mathbb{R})} = 1. \qquad (4.26) \]
Let us calculate the $L^2$-adjoint of $\mathcal{L}$. Assuming that $f, h$ decay sufficiently fast at infinity, we have:
\[ \int_{\mathbb{R}} \mathcal{L}f\, h\,dx = \int_{\mathbb{R}} \big[ (-\alpha x \partial_x f) h + (D \partial_x^2 f) h \big]\,dx = \int_{\mathbb{R}} \big[ f \,\partial_x(\alpha x h) + f (D \partial_x^2 h) \big]\,dx =: \int_{\mathbb{R}} f\, \mathcal{L}^* h\,dx, \]
where
\[ \mathcal{L}^* h := \frac{d}{dx}(\alpha x h) + D \frac{d^2 h}{dx^2}. \]
We can calculate the invariant distribution by solving equation (4.26). The invariant measure of this process is the Gaussian measure
\[ \mu(dx) = \sqrt{\frac{\alpha}{2\pi D}} \exp\left( -\frac{\alpha}{2D} x^2 \right) dx. \]
If the initial condition of the OU process is distributed according to the invariant measure, then the OU process is a stationary Gaussian process.

Let $X_t$ be the 1d OU process and let $X_0 \sim \mathcal{N}(0, D/\alpha)$. Then $X_t$ is a mean zero, Gaussian second order stationary process on $[0, \infty)$ with correlation function
\[ R(t) = \frac{D}{\alpha} e^{-\alpha |t|} \]
and spectral density
\[ f(x) = \frac{D}{\pi} \frac{1}{x^2 + \alpha^2}. \]
Furthermore, the OU process is the only real-valued mean zero Gaussian second-order stationary Markov process defined on $\mathbb{R}$.
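A short numerical sketch of the ergodicity of the OU process (not from the text; $\alpha = D = 1$, the step size and the trajectory length are arbitrary): the time average of $g(X_s) = X_s^2$ along one long trajectory approaches the average of $g$ against the Gaussian invariant measure $\mathcal{N}(0, D/\alpha)$, namely $D/\alpha = 1$.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, n = 0.05, 500_000
a, b = np.exp(-dt), np.sqrt(1.0 - np.exp(-2 * dt))   # exact OU transition, D/alpha = 1
x, acc = 0.0, 0.0
for _ in range(n):
    x = a * x + b * rng.normal()                      # one long stationary trajectory
    acc += x * x
print(acc / n)      # time average of x^2; the invariant-measure average is D/alpha = 1
```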
4.7 Discussion and Bibliography

The study of operator semigroups started in the late 1940s independently by Hille and Yosida. Semigroup theory was developed in the 1950s and 1960s by Feller, Dynkin and others, mostly in connection with the theory of Markov processes. Necessary and sufficient conditions for an operator $\mathcal{L}$ to be the generator of a (contraction) semigroup are given by the Hille-Yosida theorem [22, Ch. 7].
4.8 Exercises
1. Let X
n
be a stochastic process with state space S = Z. Show that it is a Markov process if
and only if for all n
P(X
n+1
= i
n+1
[X
1
= i
1
, . . . X
n
= i
n
) = P(X
n+1
= i
n+1
[X
n
= i
n
).
2. Show that (4.4) is the solution of initial value problem (4.10) as well as of the nal value
problem

p
s
=
1
2

2
p
x
2
, lim
st
p(y, t[x, s) = (y x).
3. Use (4.5) to show that the forward and backward Kolmogorov equations for the OU process are
p
t
=

y
(yp) +
1
2

2
p
y
2
and

p
s
= x
p
x
+
1
2

2
p
x
2
.
4. Let W(t) be a standard one-dimensional Brownian motion, let Y(t) = \sigma W(t) with \sigma > 0 and consider the process
    X(t) = \int_0^t Y(s) \, ds.
Show that the joint process (X(t), Y(t)) is Markovian and write down the generator of the process.
5. Let Y(t) = e^{-t} W(e^{2t}) be the stationary Ornstein-Uhlenbeck process and consider the process
    X(t) = \int_0^t Y(s) \, ds.
Show that the joint process (X(t), Y(t)) is Markovian and write down the generator of the process.
6. Consider a one-dimensional Brownian motion on [0, 1] with periodic boundary conditions. The generator of this Markov process is the differential operator L = \frac{1}{2}\frac{d^2}{dx^2}, equipped with periodic boundary conditions on [0, 1]. Show that this operator is self-adjoint. Show that the null space of both L and L^* comprises the constant functions on [0, 1]. Conclude that this process is ergodic. Solve the corresponding Fokker-Planck equation for an arbitrary initial condition \rho_0(x). Show that the solution converges to a constant at an exponential rate.
7. (a) Let X, Y be mean zero Gaussian random variables with \mathbb{E}X^2 = \sigma_X^2, \mathbb{E}Y^2 = \sigma_Y^2 and correlation coefficient \rho (the correlation coefficient is \rho = \frac{\mathbb{E}(XY)}{\sigma_X \sigma_Y}). Show that
    \mathbb{E}(X|Y) = \frac{\rho \sigma_X}{\sigma_Y} Y.
(b) Let X_t be a mean zero stationary Gaussian process with autocorrelation function R(t). Use the previous result to show that
    \mathbb{E}[X_{t+s} \,|\, X_s] = \frac{R(t)}{R(0)} X_s, \qquad s, t \geq 0.
(c) Use the previous result to show that the only stationary Gaussian Markov process with continuous autocorrelation function is the stationary OU process.
8. Show that a Gaussian process X_t is a Markov process if and only if
    \mathbb{E}(X_{t_n} \,|\, X_{t_1} = x_1, \dots, X_{t_{n-1}} = x_{n-1}) = \mathbb{E}(X_{t_n} \,|\, X_{t_{n-1}} = x_{n-1}).
Chapter 5
Diffusion Processes
5.1 Introduction
In this chapter we study a particular class of Markov processes, namely Markov processes with continuous paths. These processes are called diffusion processes and they appear in many applications in physics, chemistry, biology and finance.
In Section 5.2 we give the definition of a diffusion process. In Section 5.3 we derive the forward and backward Kolmogorov equations for one-dimensional diffusion processes. In Section 5.4 we present the forward and backward Kolmogorov equations in arbitrary dimensions. The connection between diffusion processes and stochastic differential equations is presented in Section 5.5. Discussion and bibliographical remarks are included in Section 5.7. Exercises can be found in Section 5.8.
5.2 Definition of a Diffusion Process
A Markov process consists of three parts: a drift (deterministic), a random process and a jump process. A diffusion process is a Markov process that has continuous sample paths (trajectories). Thus, it is a Markov process with no jumps. A diffusion process can be defined by specifying its first two moments:
Definition 5.2.1. A Markov process X_t with transition function P(\Gamma, t|x, s) is called a diffusion process if the following conditions are satisfied.
i. (Continuity). For every x and every \varepsilon > 0,
    \int_{|x - y| > \varepsilon} P(dy, t|x, s) = o(t - s)    (5.1)
uniformly over s < t.
ii. (Definition of the drift coefficient). There exists a function a(x, s) such that for every x and every \varepsilon > 0,
    \int_{|y - x| \leq \varepsilon} (y - x) P(dy, t|x, s) = a(x, s)(t - s) + o(t - s)    (5.2)
uniformly over s < t.
iii. (Definition of the diffusion coefficient). There exists a function b(x, s) such that for every x and every \varepsilon > 0,
    \int_{|y - x| \leq \varepsilon} (y - x)^2 P(dy, t|x, s) = b(x, s)(t - s) + o(t - s)    (5.3)
uniformly over s < t.
Remark 5.2.2. In Definition 5.2.1 we had to truncate the domain of integration since we did not know whether the first and second moments exist. If we assume that there exists a \delta > 0 such that
    \lim_{t \to s} \frac{1}{t - s} \int_{\mathbb{R}^d} |y - x|^{2+\delta} P(dy, t|x, s) = 0,    (5.4)
then we can extend the integration over the whole of \mathbb{R}^d and use expectations in the definition of the drift and the diffusion coefficient. Indeed, let k = 0, 1, 2 and notice that
    \int_{|y - x| > \varepsilon} |y - x|^k P(dy, t|x, s) = \int_{|y - x| > \varepsilon} |y - x|^{2+\delta} |y - x|^{k - (2+\delta)} P(dy, t|x, s)
        \leq \varepsilon^{k - 2 - \delta} \int_{|y - x| > \varepsilon} |y - x|^{2+\delta} P(dy, t|x, s)
        \leq \varepsilon^{k - 2 - \delta} \int_{\mathbb{R}^d} |y - x|^{2+\delta} P(dy, t|x, s).
Using this estimate together with (5.4) we conclude that:
    \lim_{t \to s} \frac{1}{t - s} \int_{|y - x| > \varepsilon} |y - x|^k P(dy, t|x, s) = 0, \qquad k = 0, 1, 2.
This implies that assumption (5.4) is sufficient for the sample paths to be continuous (k = 0) and for the replacement of the truncated integrals in (5.2) and (5.3) by integrals over \mathbb{R} (k = 1 and k = 2, respectively). The definitions of the drift and diffusion coefficients become:
    \lim_{t \to s} \mathbb{E}\left[ \frac{X_t - X_s}{t - s} \,\Big|\, X_s = x \right] = a(x, s)    (5.5)
and
    \lim_{t \to s} \mathbb{E}\left[ \frac{|X_t - X_s|^2}{t - s} \,\Big|\, X_s = x \right] = b(x, s).    (5.6)
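Relations (5.5) and (5.6) also suggest a simple numerical check: the drift and diffusion coefficients can be estimated from simulated sample paths as conditional averages of increments over a small time step. The Python sketch below (the choice of an OU test process and all numerical parameters are illustrative assumptions) bins increments by the current state and compares the estimates with a(x) = -\alpha x and b(x) = 2D.

import numpy as np

# Sketch: estimate a(x) and b(x) from sample paths via (5.5)-(5.6).
# The OU test process and all numerical parameters are illustrative assumptions.
alpha, D = 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 2000, 1000
rng = np.random.default_rng(1)

X = rng.normal(0.0, np.sqrt(D / alpha), size=n_paths)
states, increments = [], []
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    dX = -alpha * X * dt + np.sqrt(2 * D) * dW
    states.append(X.copy()); increments.append(dX)
    X += dX

states = np.concatenate(states); increments = np.concatenate(increments)
bins = np.linspace(-1.5, 1.5, 13)
idx = np.digitize(states, bins)
for j in range(1, len(bins)):
    mask = idx == j
    if mask.sum() < 100:
        continue
    x_mid = 0.5 * (bins[j - 1] + bins[j])
    a_hat = increments[mask].mean() / dt          # estimate of a(x)
    b_hat = (increments[mask] ** 2).mean() / dt   # estimate of b(x)
    print(f"x={x_mid:+.2f}  a_hat={a_hat:+.3f} (exact {-alpha*x_mid:+.3f})  "
          f"b_hat={b_hat:.3f} (exact {2*D:.3f})")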
5.3 The Backward and Forward Kolmogorov Equations
In this section we show that a diffusion process is completely determined by its first two moments. In particular, we will obtain partial differential equations that govern the evolution of the conditional expectation of an arbitrary function of a diffusion process X_t,
    u(x, s) = \mathbb{E}(f(X_t) \,|\, X_s = x),
as well as of the transition probability density p(y, t|x, s). These are the backward and forward Kolmogorov equations.
In this section we derive the backward and forward Kolmogorov equations for one-dimensional diffusion processes. The extension to multidimensional diffusion processes is presented in Section 5.4.
5.3.1 The Backward Kolmogorov Equation
Theorem 5.3.1. (Kolmogorov) Let f(x) \in C_b(\mathbb{R}) and let
    u(x, s) := \mathbb{E}(f(X_t) \,|\, X_s = x) = \int f(y) P(dy, t|x, s).
Assume furthermore that the functions a(x, s), b(x, s) are continuous in both x and s. Then u(x, s) \in C^{2,1}(\mathbb{R} \times \mathbb{R}^+) and it solves the final value problem
    -\frac{\partial u}{\partial s} = a(x, s) \frac{\partial u}{\partial x} + \frac{1}{2} b(x, s) \frac{\partial^2 u}{\partial x^2}, \qquad \lim_{s \to t} u(x, s) = f(x).    (5.7)
Proof. First we notice that the continuity assumption (5.1), together with the fact that the function f(x) is bounded, imply that
    u(x, s) = \int_{\mathbb{R}} f(y) P(dy, t|x, s)
            = \int_{|y - x| \leq \varepsilon} f(y) P(dy, t|x, s) + \int_{|y - x| > \varepsilon} f(y) P(dy, t|x, s)
            \leq \int_{|y - x| \leq \varepsilon} f(y) P(dy, t|x, s) + \|f\|_{L^\infty} \int_{|y - x| > \varepsilon} P(dy, t|x, s)
            = \int_{|y - x| \leq \varepsilon} f(y) P(dy, t|x, s) + o(t - s).
We add and subtract the final condition f(x) and use the previous calculation to obtain:
    u(x, s) = \int_{\mathbb{R}} f(y) P(dy, t|x, s) = f(x) + \int_{\mathbb{R}} (f(y) - f(x)) P(dy, t|x, s)
            = f(x) + \int_{|y - x| \leq \varepsilon} (f(y) - f(x)) P(dy, t|x, s) + \int_{|y - x| > \varepsilon} (f(y) - f(x)) P(dy, t|x, s)
            = f(x) + \int_{|y - x| \leq \varepsilon} (f(y) - f(x)) P(dy, t|x, s) + o(t - s).
The final condition now follows from the fact that f(x) \in C_b(\mathbb{R}) and the arbitrariness of \varepsilon.
Now we show that u(x, s) solves the backward Kolmogorov equation. We use the Chapman-Kolmogorov equation (4.15) to obtain, for s \leq \sigma \leq t,
    u(x, s) = \int_{\mathbb{R}} f(z) P(dz, t|x, s)    (5.8)
            = \int_{\mathbb{R}} \int_{\mathbb{R}} f(z) P(dz, t|y, \sigma) P(dy, \sigma|x, s)
            = \int_{\mathbb{R}} u(y, \sigma) P(dy, \sigma|x, s).    (5.9)
The Taylor series expansion of the function u(x, s) gives
    u(z, \sigma) - u(x, \sigma) = \frac{\partial u(x, \sigma)}{\partial x}(z - x) + \frac{1}{2} \frac{\partial^2 u(x, \sigma)}{\partial x^2}(z - x)^2 (1 + \alpha_\varepsilon), \qquad |z - x| \leq \varepsilon,    (5.10)
where
    \alpha_\varepsilon = \sup_{\sigma, \, |z - x| \leq \varepsilon} \left| \frac{\partial^2 u(x, \sigma)}{\partial x^2} - \frac{\partial^2 u(z, \sigma)}{\partial x^2} \right|.
Notice that, since u(x, s) is twice continuously differentiable in x, \lim_{\varepsilon \to 0} \alpha_\varepsilon = 0.
We now combine (5.9) with (5.10) to calculate
    \frac{u(x, s) - u(x, s+h)}{h}
      = \frac{1}{h} \left( \int_{\mathbb{R}} P(dy, s+h|x, s) u(y, s+h) - u(x, s+h) \right)
      = \frac{1}{h} \int_{\mathbb{R}} P(dy, s+h|x, s) \left( u(y, s+h) - u(x, s+h) \right)
      = \frac{1}{h} \int_{|x - y| < \varepsilon} P(dy, s+h|x, s) \left( u(y, s+h) - u(x, s+h) \right) + o(1)
      = \frac{\partial u}{\partial x}(x, s+h) \, \frac{1}{h} \int_{|x - y| < \varepsilon} (y - x) P(dy, s+h|x, s)
        \; + \; \frac{1}{2} \frac{\partial^2 u}{\partial x^2}(x, s+h) \, \frac{1}{h} \int_{|x - y| < \varepsilon} (y - x)^2 P(dy, s+h|x, s)(1 + \alpha_\varepsilon) + o(1)
      = a(x, s) \frac{\partial u}{\partial x}(x, s+h) + \frac{1}{2} b(x, s) \frac{\partial^2 u}{\partial x^2}(x, s+h)(1 + \alpha_\varepsilon) + o(1).
Equation (5.7) follows by taking the limits \varepsilon \to 0, h \to 0.
Assume now that the transition function has a density p(y, t|x, s). In this case the formula for u(x, s) becomes
    u(x, s) = \int_{\mathbb{R}} f(y) p(y, t|x, s) \, dy.
Substituting this into the backward Kolmogorov equation we obtain
    \int_{\mathbb{R}} f(y) \left( \frac{\partial p(y, t|x, s)}{\partial s} + \mathcal{A}_{s,x} \, p(y, t|x, s) \right) dy = 0,    (5.11)
where
    \mathcal{A}_{s,x} := a(x, s) \frac{\partial}{\partial x} + \frac{1}{2} b(x, s) \frac{\partial^2}{\partial x^2}.
Since (5.11) is valid for arbitrary functions f(y), we obtain a partial differential equation for the transition probability density:
    -\frac{\partial p(y, t|x, s)}{\partial s} = a(x, s) \frac{\partial p(y, t|x, s)}{\partial x} + \frac{1}{2} b(x, s) \frac{\partial^2 p(y, t|x, s)}{\partial x^2}.    (5.12)
Notice that the differentiation is with respect to the backward variables x, s. We will obtain an equation with respect to the forward variables y, t in the next section.
5.3.2 The Forward Kolmogorov Equation
In this section we obtain the forward Kolmogorov equation; in the physics literature it is called the Fokker-Planck equation. We assume that the transition function has a density with respect to Lebesgue measure,
    P(\Gamma, t|x, s) = \int_{\Gamma} p(y, t|x, s) \, dy.
Theorem 5.3.2. (Kolmogorov) Assume that conditions (5.1), (5.2), (5.3) are satisfied and that p(y, t|\cdot, \cdot), a(y, t), b(y, t) \in C^{2,1}(\mathbb{R} \times \mathbb{R}^+). Then the transition probability density satisfies the equation
    \frac{\partial p}{\partial t} = -\frac{\partial}{\partial y}\left( a(y, t) p \right) + \frac{1}{2} \frac{\partial^2}{\partial y^2}\left( b(y, t) p \right), \qquad \lim_{t \to s} p(y, t|x, s) = \delta(x - y).    (5.13)
Proof. Fix a function f(y) \in C_0^2(\mathbb{R}). An argument similar to the one used in the proof of the backward Kolmogorov equation gives
    \lim_{h \to 0} \frac{1}{h} \left( \int f(y) p(y, s+h|x, s) \, dy - f(x) \right) = a(x, s) f_x(x) + \frac{1}{2} b(x, s) f_{xx}(x),    (5.14)
where subscripts denote differentiation with respect to x. On the other hand,
    \int f(y) \frac{\partial}{\partial t} p(y, t|x, s) \, dy = \frac{\partial}{\partial t} \int f(y) p(y, t|x, s) \, dy
      = \lim_{h \to 0} \frac{1}{h} \int \left( p(y, t+h|x, s) - p(y, t|x, s) \right) f(y) \, dy
      = \lim_{h \to 0} \frac{1}{h} \left( \int p(y, t+h|x, s) f(y) \, dy - \int p(z, t|x, s) f(z) \, dz \right)
      = \lim_{h \to 0} \frac{1}{h} \left( \int\!\!\int p(y, t+h|z, t) p(z, t|x, s) f(y) \, dy \, dz - \int p(z, t|x, s) f(z) \, dz \right)
      = \lim_{h \to 0} \frac{1}{h} \int p(z, t|x, s) \left( \int p(y, t+h|z, t) f(y) \, dy - f(z) \right) dz
      = \int p(z, t|x, s) \left( a(z, t) f_z(z) + \frac{1}{2} b(z, t) f_{zz}(z) \right) dz
      = \int \left( -\frac{\partial}{\partial z}\left( a(z, t) p(z, t|x, s) \right) + \frac{1}{2} \frac{\partial^2}{\partial z^2}\left( b(z, t) p(z, t|x, s) \right) \right) f(z) \, dz.
In the above calculation we used the Chapman-Kolmogorov equation. We have also performed two integrations by parts and used the fact that, since the test function f has compact support, the boundary terms vanish.
Since the above equation is valid for every test function f(y), the forward Kolmogorov equation follows.
Assume now that the initial distribution of X_t is \rho_0(x) and set s = 0 (the initial time) in (5.13). Define
    p(y, t) := \int p(y, t|x, 0) \rho_0(x) \, dx.    (5.15)
We multiply the forward Kolmogorov equation (5.13) by \rho_0(x) and integrate with respect to x to obtain the equation
    \frac{\partial p(y, t)}{\partial t} = -\frac{\partial}{\partial y}\left( a(y, t) p(y, t) \right) + \frac{1}{2} \frac{\partial^2}{\partial y^2}\left( b(y, t) p(y, t) \right),    (5.16)
together with the initial condition
    p(y, 0) = \rho_0(y).    (5.17)
The solution of equation (5.16) provides us with the probability that the diffusion process X_t, which was initially distributed according to the probability density \rho_0(x), is equal to y at time t.
Alternatively, we can think of the solution to (5.13) as the Green's function for the PDE (5.16). Using (5.16) we can calculate the expectation of an arbitrary function of the diffusion process X_t:
    \mathbb{E}(f(X_t)) = \int\!\!\int f(y) p(y, t|x, 0) p(x, 0) \, dx \, dy = \int f(y) p(y, t) \, dy,
where p(y, t) is the solution of (5.16). Quite often we need to calculate joint probability densities, for example the probability that X_{t_1} = x_1 and X_{t_2} = x_2. From the properties of conditional expectation we have that
    p(x_1, t_1, x_2, t_2) = p(X_{t_1} = x_1, X_{t_2} = x_2)
                          = p(X_{t_1} = x_1 \,|\, X_{t_2} = x_2) \, p(X_{t_2} = x_2)
                          = p(x_1, t_1 | x_2, t_2) \, p(x_2, t_2).
Using the joint probability density we can calculate the statistics of a function of the diffusion process X_t at times t and s:
    \mathbb{E}(f(X_t, X_s)) = \int\!\!\int f(y, x) p(y, t|x, s) p(x, s) \, dx \, dy.    (5.18)
The autocorrelation function at times t and s is given by
    \mathbb{E}(X_t X_s) = \int\!\!\int y x \, p(y, t|x, s) p(x, s) \, dx \, dy.
In particular,
    \mathbb{E}(X_t X_0) = \int\!\!\int y x \, p(y, t|x, 0) p(x, 0) \, dx \, dy.
5.4 Multidimensional Diffusion Processes
Let X_t be a diffusion process in \mathbb{R}^d. The drift and diffusion coefficients of a diffusion process in \mathbb{R}^d are defined as:
    \lim_{t \to s} \frac{1}{t - s} \int_{|y - x| < \varepsilon} (y - x) P(dy, t|x, s) = a(x, s)
and
    \lim_{t \to s} \frac{1}{t - s} \int_{|y - x| < \varepsilon} (y - x) \otimes (y - x) P(dy, t|x, s) = b(x, s).
The drift coefficient a(x, s) is a d-dimensional vector field and the diffusion coefficient b(x, s) is a d \times d symmetric matrix (second order tensor). The generator of a d-dimensional diffusion process is
    L = a(x, s) \cdot \nabla + \frac{1}{2} b(x, s) : \nabla\nabla = \sum_{j=1}^d a_j(x, s) \frac{\partial}{\partial x_j} + \frac{1}{2} \sum_{i,j=1}^d b_{ij}(x, s) \frac{\partial^2}{\partial x_i \partial x_j}.
Exercise 5.4.1. Derive rigorously the forward and backward Kolmogorov equations in arbitrary dimensions.
Assuming that the first and second moments of the multidimensional diffusion process exist, we can write the formulas for the drift vector and diffusion matrix as
    \lim_{t \to s} \mathbb{E}\left[ \frac{X_t - X_s}{t - s} \,\Big|\, X_s = x \right] = a(x, s)    (5.19)
and
    \lim_{t \to s} \mathbb{E}\left[ \frac{(X_t - X_s) \otimes (X_t - X_s)}{t - s} \,\Big|\, X_s = x \right] = b(x, s).    (5.20)
Notice that from the above definition it follows that the diffusion matrix is symmetric and nonnegative definite.
5.5 Connection with Stochastic Differential Equations
Notice also that the continuity condition can be written in the form
    \mathbb{P}(|X_t - X_s| \geq \varepsilon \,|\, X_s = x) = o(t - s).
Now it becomes clear that this condition implies that the probability of large changes in X_t over short time intervals is small. Notice, on the other hand, that the above condition implies that the sample paths of a diffusion process are not differentiable: if they were, then the right hand side of the above equation would have to be 0 when t - s \ll 1. The sample paths of a diffusion process have the regularity of Brownian paths. A Markovian process cannot be differentiable: we can define the derivative of a sample path only for processes for which the past and future are not statistically independent when conditioned on the present.
Let us denote the expectation conditioned on X_s = x by \mathbb{E}^{s,x}. Notice that the definitions of the drift and diffusion coefficients (5.5) and (5.6) can be written in the form
    \mathbb{E}^{s,x}(X_t - X_s) = a(x, s)(t - s) + o(t - s)
and
    \mathbb{E}^{s,x}\left( (X_t - X_s) \otimes (X_t - X_s) \right) = b(x, s)(t - s) + o(t - s).
Consequently, the drift coefficient defines the mean velocity vector for the stochastic process X_t, whereas the diffusion coefficient (tensor) is a measure of the local magnitude of fluctuations of X_t - X_s about the mean value. Hence, we can write locally:
    X_t - X_s \approx a(s, X_s)(t - s) + \sigma(s, X_s) \, \xi_t,
where b = \sigma \sigma^T and \xi_t is a mean zero Gaussian process with
    \mathbb{E}^{s,x}(\xi_t \otimes \xi_s) = (t - s) I.
Since we have that
    W_t - W_s \sim \mathcal{N}(0, (t - s) I),
we conclude that we can write locally:
    \Delta X_t \approx a(s, X_s) \Delta t + \sigma(s, X_s) \Delta W_t.
Or, replacing the differences by differentials:
    dX_t = a(t, X_t) \, dt + \sigma(t, X_t) \, dW_t.
Hence, the sample paths of a diffusion process are governed by a stochastic differential equation (SDE).
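The local relation \Delta X \approx a \Delta t + \sigma \Delta W is exactly what the Euler-Maruyama scheme iterates. Below is a minimal Python sketch (the particular drift, diffusion and parameters are illustrative assumptions, not from the text) that integrates a scalar SDE dX_t = a(t, X_t) dt + \sigma(t, X_t) dW_t.

import numpy as np

# Sketch: Euler-Maruyama discretization of dX = a(t, X) dt + sigma(t, X) dW.
# The specific drift/diffusion below are illustrative choices, not from the text.
def a(t, x):
    return -x            # linear restoring drift (OU-like)

def sigma(t, x):
    return 1.0           # constant diffusion

def euler_maruyama(x0, T, n_steps, rng):
    dt = T / n_steps
    t, x = 0.0, x0
    path = [x0]
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        x = x + a(t, x) * dt + sigma(t, x) * dW
        t += dt
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(2)
path = euler_maruyama(x0=1.0, T=5.0, n_steps=5000, rng=rng)
print(path[:5], path[-1])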
5.6 Examples of Diffusion Processes
i. The one-dimensional Brownian motion starting at x is a diffusion process with generator
    L = \frac{1}{2} \frac{d^2}{dx^2}.
The drift and diffusion coefficients are, respectively, a(x) = 0 and b(x) = 1. The corresponding stochastic differential equation is
    dX_t = dW_t, \qquad X_0 = x.
The solution of this SDE is
    X_t = x + W_t.
ii. The one-dimensional Ornstein-Uhlenbeck process is a diffusion process with drift and diffusion coefficients, respectively, a(x) = -\alpha x and b(x) = D. The generator of this process is
    L = -\alpha x \frac{d}{dx} + \frac{D}{2} \frac{d^2}{dx^2}.
The corresponding SDE is
    dX_t = -\alpha X_t \, dt + \sqrt{D} \, dW_t.
The solution to this equation is
    X_t = e^{-\alpha t} X_0 + \sqrt{D} \int_0^t e^{-\alpha(t - s)} \, dW_s.
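The explicit solution of the OU equation gives an exact one-step update X_{t+\Delta} = e^{-\alpha\Delta} X_t + \xi with \xi \sim \mathcal{N}(0, \frac{D}{2\alpha}(1 - e^{-2\alpha\Delta})), so the process can be simulated without discretization error. A minimal Python sketch (all parameter values are illustrative assumptions):

import numpy as np

# Sketch: exact sampling of the OU process dX = -alpha X dt + sqrt(D) dW
# using the explicit solution; parameters are illustrative assumptions.
alpha, D = 1.0, 1.0
dt, n_steps, n_paths = 0.1, 100, 10000
rng = np.random.default_rng(3)

X = np.full(n_paths, 1.0)                       # start all paths at X_0 = 1
step_std = np.sqrt(D * (1 - np.exp(-2 * alpha * dt)) / (2 * alpha))
for _ in range(n_steps):
    X = np.exp(-alpha * dt) * X + step_std * rng.standard_normal(n_paths)

# At large t the law should approach the invariant Gaussian N(0, D / (2 alpha)).
print("sample mean:", X.mean(), " sample var:", X.var(),
      " stationary var:", D / (2 * alpha))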
5.7 Discussion and Bibliography
The argument used in the derivation of the forward and backward Kolmogorov equations goes back to Kolmogorov's original work. More material on diffusion processes can be found in [36], [42].
5.8 Exercises
1. Prove equation (5.14).
2. Derive the initial value problem (5.16), (5.17).
3. Derive rigorously the backward and forward Kolmogorov equations in arbitrary dimensions.
Chapter 6
The Fokker-Planck Equation
6.1 Introduction
In the previous chapter we derived the backward and forward (Fokker-Planck) Kolmogorov equations and we showed that all statistical properties of a diffusion process can be calculated from the solution of the Fokker-Planck equation.^1 In this long chapter we study various properties of this equation such as existence and uniqueness of solutions, long time asymptotics, boundary conditions and spectral properties of the Fokker-Planck operator. We also study in some detail various examples of diffusion processes and of the associated Fokker-Planck equation. We will restrict attention to time-homogeneous diffusion processes, for which the drift and diffusion coefficients do not depend on time.
In Section 6.2 we study various basic properties of the Fokker-Planck equation, including existence and uniqueness of solutions, the interpretation of the equation as a conservation law, and boundary conditions. In Section 6.3 we present some examples of diffusion processes and use the corresponding Fokker-Planck equation in order to calculate various quantities of interest such as moments. In Section 6.4 we study the multidimensional Ornstein-Uhlenbeck process and the spectral properties of the corresponding Fokker-Planck operator. In Section 6.5 we study stochastic processes whose drift is given by the gradient of a scalar function, gradient flows. In Section 6.7 we solve the Fokker-Planck equation for a gradient SDE using eigenfunction expansions and we show how the eigenvalue problem for the Fokker-Planck operator can be reduced to the eigenvalue problem for a Schrödinger operator. In Section 8.2 we study the Langevin equation and the associated Fokker-Planck equation. In Section 8.3 we calculate the eigenvalues and eigenfunctions of the Fokker-Planck operator for the Langevin equation in a harmonic potential. Discussion and bibliographical remarks are included in Section 6.8. Exercises can be found in Section 6.9.
^1 In this chapter we will call the equation the Fokker-Planck equation, which is more customary in the physics literature, rather than the forward Kolmogorov equation, which is more customary in the mathematics literature.
6.2 Basic Properties of the FP Equation
6.2.1 Existence and Uniqueness of Solutions
Consider a homogeneous diffusion process on \mathbb{R}^d with drift vector a(x) and diffusion matrix b(x). The Fokker-Planck equation is
    \frac{\partial p}{\partial t} = -\sum_{j=1}^d \frac{\partial}{\partial x_j}\left( a_j(x) p \right) + \frac{1}{2} \sum_{i,j=1}^d \frac{\partial^2}{\partial x_i \partial x_j}\left( b_{ij}(x) p \right), \qquad t > 0, \; x \in \mathbb{R}^d,    (6.1a)
    p(x, 0) = f(x), \qquad x \in \mathbb{R}^d.    (6.1b)
Since f(x) is the probability density of the initial condition (which is a random variable), we have that
    f(x) \geq 0 \quad \text{and} \quad \int_{\mathbb{R}^d} f(x) \, dx = 1.
We can also write the equation in non-divergence form:
    \frac{\partial p}{\partial t} = \sum_{j=1}^d \tilde{a}_j(x) \frac{\partial p}{\partial x_j} + \frac{1}{2} \sum_{i,j=1}^d b_{ij}(x) \frac{\partial^2 p}{\partial x_i \partial x_j} + \tilde{c}(x) p, \qquad t > 0, \; x \in \mathbb{R}^d,    (6.2a)
    p(x, 0) = f(x), \qquad x \in \mathbb{R}^d,    (6.2b)
where
    \tilde{a}_i(x) = -a_i(x) + \sum_{j=1}^d \frac{\partial b_{ij}}{\partial x_j}, \qquad \tilde{c}(x) = \frac{1}{2} \sum_{i,j=1}^d \frac{\partial^2 b_{ij}}{\partial x_i \partial x_j} - \sum_{i=1}^d \frac{\partial a_i}{\partial x_i}.
By definition (see equation (5.20)), the diffusion matrix is always symmetric and nonnegative. We will assume that it is actually uniformly positive definite, i.e. we will impose the uniform ellipticity condition:
    \sum_{i,j=1}^d b_{ij}(x) \xi_i \xi_j \geq \alpha \|\xi\|^2, \qquad \forall \, \xi \in \mathbb{R}^d.    (6.3)
Furthermore, we will assume that the coefficients \tilde{a}, b, \tilde{c} are smooth and that they satisfy the growth conditions
    \|b(x)\| \leq M, \qquad \|\tilde{a}(x)\| \leq M(1 + \|x\|), \qquad \|\tilde{c}(x)\| \leq M(1 + \|x\|^2).    (6.4)
Definition 6.2.1. We will call a solution to the Cauchy problem for the Fokker-Planck equation (6.2) a classical solution if:
i. u \in C^{2,1}(\mathbb{R}^d \times \mathbb{R}^+).
ii. For every T > 0 there exists a c > 0 such that
    \|u(t, x)\|_{L^\infty(0,T)} \leq c e^{\alpha \|x\|^2}.
iii. \lim_{t \to 0} u(t, x) = f(x).
It is a standard result in the theory of parabolic partial differential equations that, under the regularity and uniform ellipticity assumptions, the Fokker-Planck equation has a unique smooth solution. Furthermore, the solution can be estimated in terms of an appropriate heat kernel (i.e. the solution of the heat equation on \mathbb{R}^d).
Theorem 6.2.2. Assume that conditions (6.3) and (6.4) are satisfied, and assume that |f| \leq c e^{\alpha \|x\|^2}. Then there exists a unique classical solution to the Cauchy problem for the Fokker-Planck equation. Furthermore, there exist positive constants K, \delta so that
    |p|, \; |p_t|, \; \|\nabla p\|, \; \|D^2 p\| \leq K t^{-(n+2)/2} \exp\left( -\frac{1}{2t} \delta \|x\|^2 \right).    (6.5)
Notice that from the estimates (6.5) it follows that all moments of a uniformly elliptic diffusion process exist. In particular, we can multiply the Fokker-Planck equation by monomials x^n, integrate over \mathbb{R}^d and integrate by parts; no boundary terms appear, in view of the estimate (6.5).
Remark 6.2.3. The solution of the Fokker-Planck equation is nonnegative for all times, provided that the initial distribution is nonnegative. This follows from the maximum principle for parabolic PDEs.
6.2.2 The FP equation as a conservation law
The Fokker-Planck equation is in fact a conservation law: it expresses the law of conservation of probability. To see this we define the probability current to be the vector whose ith component is
    J_i := a_i(x) p - \frac{1}{2} \sum_{j=1}^d \frac{\partial}{\partial x_j}\left( b_{ij}(x) p \right).    (6.6)
We use the probability current to write the Fokker-Planck equation as a continuity equation:
    \frac{\partial p}{\partial t} + \nabla \cdot J = 0.
Integrating the FP equation over \mathbb{R}^d and integrating by parts on the right hand side of the equation we obtain
    \frac{d}{dt} \int_{\mathbb{R}^d} p(x, t) \, dx = 0.
Consequently:
    \|p(\cdot, t)\|_{L^1(\mathbb{R}^d)} = \|p(\cdot, 0)\|_{L^1(\mathbb{R}^d)} = 1.    (6.7)
Hence, the total probability is conserved, as expected. Equation (6.7) simply means that
    \mathbb{P}(X_t \in \mathbb{R}^d) = 1, \qquad \forall \, t \geq 0.
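The continuity form \partial p/\partial t = -\partial J/\partial x also suggests a conservative discretization: if the flux J is evaluated at cell interfaces and set to zero at the boundaries (reflecting walls), the total probability \sum_i p_i \Delta x is preserved exactly by the update. The Python sketch below (an OU drift on a truncated interval, explicit Euler in time; all choices are illustrative assumptions) checks this.

import numpy as np

# Sketch: finite-volume discretization of the 1d Fokker-Planck equation in
# flux form, dp/dt = -dJ/dx with J = a p - (1/2) d(b p)/dx, zero-flux
# (reflecting) boundaries. Drift/diffusion and parameters are illustrative.
alpha, D = 1.0, 1.0           # a(x) = -alpha x, b(x) = 2 D
L_box, n_cells = 5.0, 400
x = np.linspace(-L_box, L_box, n_cells)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2 / D        # explicit scheme: respect the diffusive stability limit

# Initial density: a narrow Gaussian, normalized on the grid.
p = np.exp(-(x - 1.0) ** 2 / 0.02)
p /= np.sum(p) * dx

def step(p):
    bp = 2 * D * p
    # flux at interior cell interfaces i+1/2
    a_face = -alpha * 0.5 * (x[:-1] + x[1:])
    J = a_face * 0.5 * (p[:-1] + p[1:]) - 0.5 * (bp[1:] - bp[:-1]) / dx
    J = np.concatenate(([0.0], J, [0.0]))       # reflecting: zero boundary flux
    return p - dt * (J[1:] - J[:-1]) / dx

for _ in range(5000):
    p = step(p)

print("total probability:", np.sum(p) * dx)      # stays ~1 by construction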
6.2.3 Boundary conditions for the Fokker-Planck equation
When studying a diffusion process that can take values on the whole of \mathbb{R}^d, we study the pure initial value (Cauchy) problem for the Fokker-Planck equation, equation (6.1). The boundary condition is then that the solution decays sufficiently fast at infinity. For ergodic diffusion processes this is equivalent to requiring that the solution of the backward Kolmogorov equation is an element of L^2(\mu), where \mu is the invariant measure of the process. There are many applications where it is important to study stochastic processes in bounded domains. In this case it is necessary to specify the value of the stochastic process (or equivalently of the solution to the Fokker-Planck equation) on the boundary.
To understand the type of boundary conditions that we can impose on the Fokker-Planck equation, let us consider the example of a random walk on the domain \{0, 1, \dots, N\}.^2 When the random walker reaches either the left or the right boundary we can either set
i. X_0 = 0 or X_N = 0, which means that the particle gets absorbed at the boundary;
ii. X_0 = X_1 or X_N = X_{N-1}, which means that the particle is reflected at the boundary;
iii. X_0 = X_N, which means that the particle is moving on a circle (i.e., we identify the left and right boundaries).
^2 Of course, the random walk is not a diffusion process. However, as we have already seen, Brownian motion can be defined as the limit of an appropriately rescaled random walk. A similar construction exists for more general diffusion processes.
Hence, we can have absorbing, reflecting or periodic boundary conditions.
Consider the Fokker-Planck equation posed in \Omega \subset \mathbb{R}^d, where \Omega is a bounded domain with smooth boundary. Let J denote the probability current and let n be the unit outward pointing normal vector to the surface. The above boundary conditions become:
i. The transition probability density vanishes on an absorbing boundary: p(x, t) = 0 on \partial\Omega.
ii. There is no net flow of probability on a reflecting boundary: n \cdot J(x, t) = 0 on \partial\Omega.
iii. The transition probability density is a periodic function in the case of periodic boundary conditions.
Notice that, using the terminology customary in PDE theory, absorbing boundary conditions correspond to Dirichlet boundary conditions and reflecting boundary conditions correspond to Neumann boundary conditions. Of course, one can consider more complicated, mixed boundary conditions.
Consider now a diffusion process in one dimension on the interval [0, L]. The boundary conditions are
    p(0, t) = p(L, t) = 0 \quad \text{(absorbing)},
    J(0, t) = J(L, t) = 0 \quad \text{(reflecting)},
    p(0, t) = p(L, t) \quad \text{(periodic)},
where the probability current is defined in (6.6). An example of mixed boundary conditions would be absorbing boundary conditions at the left end and reflecting boundary conditions at the right end:
    p(0, t) = J(L, t) = 0.
There is a complete classification of boundary conditions in one dimension, the Feller classification: the boundary conditions can be regular, exit, entrance or natural.
6.3 Examples of Diffusion Processes
6.3.1 Brownian Motion
Brownian Motion on \mathbb{R}
Set a(y, t) \equiv 0, b(y, t) \equiv 2D > 0. This diffusion process is the Brownian motion with diffusion coefficient D. Let us calculate the transition probability density of this process assuming that the Brownian particle is at y at time s. The Fokker-Planck equation for the transition probability density p(x, t|y, s) is:
    \frac{\partial p}{\partial t} = D \frac{\partial^2 p}{\partial x^2}, \qquad p(x, s|y, s) = \delta(x - y).    (6.8)
The solution to this equation is the Green's function (fundamental solution) of the heat equation:
    p(x, t|y, s) = \frac{1}{\sqrt{4\pi D (t - s)}} \exp\left( -\frac{(x - y)^2}{4D(t - s)} \right).    (6.9)
Notice that using the Fokker-Planck equation for Brownian motion we can immediately show that the mean squared displacement grows linearly in time. Assuming that the Brownian particle is at the origin at time t = 0 we get
    \frac{d}{dt} \mathbb{E} W_t^2 = \frac{d}{dt} \int_{\mathbb{R}} x^2 p(x, t|0, 0) \, dx = D \int_{\mathbb{R}} x^2 \frac{\partial^2 p(x, t|0, 0)}{\partial x^2} \, dx = 2D \int_{\mathbb{R}} p(x, t|0, 0) \, dx = 2D,
where we performed two integrations by parts and used the fact that, in view of (6.9), no boundary terms remain. From this calculation we conclude that
    \mathbb{E} W_t^2 = 2Dt.
Assume now that the initial condition W_0 of the Brownian particle is a random variable with distribution \rho_0(x). To calculate the probability density function (distribution function) of the Brownian particle we need to solve the Fokker-Planck equation with initial condition \rho_0(x). In other words, we need to take the average of the probability density function p(x, t|y, 0) over all initial realizations of the Brownian particle. The solution of the Fokker-Planck equation, the distribution function, is
    p(x, t) = \int p(x, t|y, 0) \rho_0(y) \, dy.    (6.10)
Notice that the transition probability density depends on x and y only through their difference, so we can write p(x, t|y, 0) = p(x - y, t). From (6.10) we see that the distribution function is given by the convolution between the transition probability density and the initial condition, as we know from the theory of partial differential equations:
    p(x, t) = \int p(x - y, t) \rho_0(y) \, dy =: p \star \rho_0.
Brownian motion with absorbing boundary conditions
We can also consider Brownian motion in a bounded domain, with either absorbing, reflecting or periodic boundary conditions. Set D = 1/2 and consider the Fokker-Planck equation (6.8) on [0, 1] with absorbing boundary conditions:
    \frac{\partial p}{\partial t} = \frac{1}{2} \frac{\partial^2 p}{\partial x^2}, \qquad p(0, t) = p(1, t) = 0.    (6.11)
We look for a solution to this equation in a sine Fourier series:
    p(x, t) = \sum_{n=1}^{\infty} p_n(t) \sin(n\pi x).    (6.12)
Notice that the boundary conditions are automatically satisfied. The initial condition is
    p(x, 0) = \delta(x - x_0),
where we have assumed that W_0 = x_0. The Fourier coefficients of the initial condition are
    p_n(0) = 2 \int_0^1 \delta(x - x_0) \sin(n\pi x) \, dx = 2 \sin(n\pi x_0).
We substitute the expansion (6.12) into (6.11) and use the orthogonality properties of the Fourier basis to obtain the equations
    \dot{p}_n = -\frac{n^2 \pi^2}{2} p_n, \qquad n = 1, 2, \dots
The solution of this equation is
    p_n(t) = p_n(0) e^{-\frac{n^2 \pi^2}{2} t}.
Consequently, the transition probability density for the Brownian motion on [0, 1] with absorbing boundary conditions is
    p(x, t|x_0, 0) = 2 \sum_{n=1}^{\infty} e^{-\frac{n^2 \pi^2}{2} t} \sin(n\pi x_0) \sin(n\pi x).
Notice that
    \lim_{t \to \infty} p(x, t|x_0, 0) = 0.
This is not surprising, since all Brownian particles eventually get absorbed at the boundary.
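The sine series also gives the survival probability S(t) = \int_0^1 p(x, t|x_0, 0) \, dx, which decays exponentially at the rate \pi^2/2 of the slowest mode. A short numerical sketch in Python (the truncation level and x_0 are illustrative assumptions):

import numpy as np

# Sketch: evaluate the sine-series solution for Brownian motion on [0,1] with
# absorbing boundaries and its survival probability. x0 and the truncation
# level n_max are illustrative assumptions.
x0, n_max = 0.3, 200
x = np.linspace(0.0, 1.0, 501)
n = np.arange(1, n_max + 1)

def p_abs(x, t):
    modes = np.exp(-0.5 * (n * np.pi) ** 2 * t) * np.sin(n * np.pi * x0)
    return 2.0 * np.sin(np.outer(x, n * np.pi)) @ modes

for t in [0.05, 0.2, 0.5, 1.0]:
    p = p_abs(x, t)
    survival = np.trapz(p, x)
    print(f"t={t:.2f}  survival probability={survival:.4f}")
# The survival probability decays like exp(-pi^2 t / 2) for large t.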
Brownian Motion with Reflecting Boundary Conditions
Consider now Brownian motion on the interval [0, 1] with reflecting boundary conditions, again with diffusion coefficient 1/2 for simplicity. In order to calculate the transition probability density we have to solve the Fokker-Planck equation, which is the heat equation on [0, 1] with Neumann boundary conditions:
    \frac{\partial p}{\partial t} = \frac{1}{2} \frac{\partial^2 p}{\partial x^2}, \qquad \partial_x p(0, t) = \partial_x p(1, t) = 0, \qquad p(x, 0) = \delta(x - x_0).
The boundary conditions are satisfied by functions of the form \cos(n\pi x). We look for a solution in the form of a cosine Fourier series
    p(x, t) = \frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n(t) \cos(n\pi x).
From the initial conditions we obtain
    a_n(0) = 2 \int_0^1 \cos(n\pi x) \delta(x - x_0) \, dx = 2 \cos(n\pi x_0).
We substitute the expansion into the PDE and use the orthonormality of the Fourier basis to obtain the equations for the Fourier coefficients:
    \dot{a}_n = -\frac{n^2 \pi^2}{2} a_n,
from which we deduce that
    a_n(t) = a_n(0) e^{-\frac{n^2 \pi^2}{2} t}.
Consequently,
    p(x, t|x_0, 0) = 1 + 2 \sum_{n=1}^{\infty} \cos(n\pi x_0) \cos(n\pi x) e^{-\frac{n^2 \pi^2}{2} t}.
Notice that Brownian motion with reflecting boundary conditions is an ergodic Markov process. To see this, let us consider the stationary Fokker-Planck equation
    \frac{\partial^2 p_s}{\partial x^2} = 0, \qquad \partial_x p_s(0) = \partial_x p_s(1) = 0.
The unique normalized solution to this boundary value problem is p_s(x) = 1. Indeed, we multiply the equation by p_s, integrate by parts and use the boundary conditions to obtain
    \int_0^1 \left| \frac{dp_s}{dx} \right|^2 dx = 0,
from which it follows that p_s(x) = 1. Alternatively, by taking the limit of p(x, t|x_0, 0) as t \to \infty we obtain the invariant distribution:
    \lim_{t \to \infty} p(x, t|x_0, 0) = 1.
Now we can calculate the stationary autocorrelation function:
    \mathbb{E}(W(t)W(0)) = \int_0^1 \int_0^1 x x_0 \, p(x, t|x_0, 0) p_s(x_0) \, dx \, dx_0
      = \int_0^1 \int_0^1 x x_0 \left( 1 + 2 \sum_{n=1}^{\infty} \cos(n\pi x_0) \cos(n\pi x) e^{-\frac{n^2 \pi^2}{2} t} \right) dx \, dx_0
      = \frac{1}{4} + \frac{8}{\pi^4} \sum_{n=0}^{\infty} \frac{1}{(2n+1)^4} e^{-\frac{(2n+1)^2 \pi^2}{2} t}.
6.3.2 The Ornstein-Uhlenbeck Process
We set now a(x, t) = -\alpha x, b(x, t) = 2D > 0. With this drift and diffusion coefficient the Fokker-Planck equation becomes
    \frac{\partial p}{\partial t} = \frac{\partial (\alpha x p)}{\partial x} + D \frac{\partial^2 p}{\partial x^2}.    (6.13)
This is the Fokker-Planck equation for the Ornstein-Uhlenbeck process. The corresponding stochastic differential equation is
    dX_t = -\alpha X_t \, dt + \sqrt{2D} \, dW_t.
So, in addition to Brownian motion there is a linear force pulling the particle towards the origin. We know that Brownian motion is not a stationary process, since its variance grows linearly in time. By adding a linear damping term, it is reasonable to expect that the resulting process can be stationary. As we have already seen, this is indeed the case.
The transition probability density p_{OU}(x, t|y, s) for an OU particle that is located at y at time s is
    p_{OU}(x, t|y, s) = \sqrt{\frac{\alpha}{2\pi D (1 - e^{-2\alpha(t-s)})}} \exp\left( -\frac{\alpha \left( x - e^{-\alpha(t-s)} y \right)^2}{2D \left( 1 - e^{-2\alpha(t-s)} \right)} \right).    (6.14)
We obtained this formula in Example 4.2.4 (for \alpha = D = 1) by using the fact that the OU process can be defined through a time change of Brownian motion. We can also derive it by solving equation (6.13). To obtain (6.14), we first take the Fourier transform of the transition probability density with respect to x, solve the resulting first order PDE using the method of characteristics and then take the inverse Fourier transform.^3
Notice that from formula (6.14) it immediately follows that in the limit as the friction coefficient \alpha goes to 0, the transition probability of the OU process converges to the transition probability of Brownian motion. Furthermore, by taking the long time limit in (6.14) we obtain (we have set s = 0)
    \lim_{t \to +\infty} p_{OU}(x, t|y, 0) = \sqrt{\frac{\alpha}{2\pi D}} \exp\left( -\frac{\alpha x^2}{2D} \right),
irrespective of the initial position y of the OU particle. This is to be expected, since as we have already seen the Ornstein-Uhlenbeck process is an ergodic Markov process with Gaussian invariant distribution
    p_s(x) = \sqrt{\frac{\alpha}{2\pi D}} \exp\left( -\frac{\alpha x^2}{2D} \right).    (6.15)
Using now (6.14) and (6.15) we obtain the stationary joint probability density
    p_2(x, t|y, 0) = p(x, t|y, 0) p_s(y) = \frac{\alpha}{2\pi D \sqrt{1 - e^{-2\alpha t}}} \exp\left( -\frac{\alpha \left( x^2 + y^2 - 2xy e^{-\alpha t} \right)}{2D \left( 1 - e^{-2\alpha t} \right)} \right).
More generally, we have
    p_2(x, t|y, s) = \frac{\alpha}{2\pi D \sqrt{1 - e^{-2\alpha |t-s|}}} \exp\left( -\frac{\alpha \left( x^2 + y^2 - 2xy e^{-\alpha |t-s|} \right)}{2D \left( 1 - e^{-2\alpha |t-s|} \right)} \right).    (6.16)
Now we can calculate the stationary autocorrelation function of the OU process:
    \mathbb{E}(X(t)X(s)) = \int\!\!\int x y \, p_2(x, t|y, s) \, dx \, dy    (6.17)
                         = \frac{D}{\alpha} e^{-\alpha |t-s|}.    (6.18)
In order to calculate the double integral we need to perform an appropriate change of variables. The calculation is similar to the one presented in Section 2.6. See Exercise 2.
^3 This calculation will be presented in Section ?? for the Fokker-Planck equation of a linear SDE in arbitrary dimensions.
Assume that the initial position of the OU particle is a random variable distributed according to a distribution \rho_0(x). As in the case of a Brownian particle, the probability density function (distribution function) is given by
    p(x, t) = \int p(x, t|y, 0) \rho_0(y) \, dy.    (6.19)
When the OU process is distributed initially according to its invariant distribution, \rho_0(x) = p_s(x) given by (6.15), then the Ornstein-Uhlenbeck process becomes stationary. The distribution function is given by p_s(x) at all times and the joint probability density is given by (6.16).
Knowledge of the distribution function enables us to calculate all moments of the OU process using the formula
    \mathbb{E}\left( (X_t)^n \right) = \int x^n p(x, t) \, dx.
We will calculate the moments by using the Fokker-Planck equation, rather than the explicit formula for the transition probability density. Let M_n(t) denote the nth moment of the OU process,
    M_n := \int_{\mathbb{R}} x^n p(x, t) \, dx, \qquad n = 0, 1, 2, \dots
Let n = 0. We integrate the FP equation over \mathbb{R} to obtain:
    \int \frac{\partial p}{\partial t} \, dx = \int \frac{\partial (\alpha x p)}{\partial x} \, dx + D \int \frac{\partial^2 p}{\partial x^2} \, dx = 0,
after an integration by parts and using the fact that p(x, t) decays sufficiently fast at infinity. Consequently:
    \frac{d}{dt} M_0 = 0 \quad \Rightarrow \quad M_0(t) = M_0(0) = 1.
In other words, since
    \frac{d}{dt} \|p\|_{L^1(\mathbb{R})} = 0,
we deduce that
    \int_{\mathbb{R}} p(x, t) \, dx = \int_{\mathbb{R}} p(x, 0) \, dx = 1,
which means that the total probability is conserved, as we have already shown for the general Fokker-Planck equation in arbitrary dimensions.
Let n = 1. We multiply the FP equation for the OU process by x, integrate over \mathbb{R} and perform an integration by parts to obtain:
    \frac{d}{dt} M_1 = -\alpha M_1.
Consequently, the first moment converges exponentially fast to 0:
    M_1(t) = e^{-\alpha t} M_1(0).
Let now n \geq 2. We multiply the FP equation for the OU process by x^n and integrate by parts (once on the first term on the right hand side and twice on the second) to obtain:
    \frac{d}{dt} \int x^n p \, dx = -\alpha n \int x^n p \, dx + D n(n-1) \int x^{n-2} p \, dx.
Or, equivalently:
    \frac{d}{dt} M_n = -\alpha n M_n + D n(n-1) M_{n-2}, \qquad n \geq 2.
This is a first order linear inhomogeneous differential equation. We can solve it using the variation of constants formula:
    M_n(t) = e^{-\alpha n t} M_n(0) + D n(n-1) \int_0^t e^{-\alpha n (t-s)} M_{n-2}(s) \, ds.    (6.20)
We can use this formula, together with the formulas for the first two moments, in order to calculate all higher order moments in an iterative way. For example, for n = 2 we have
    M_2(t) = e^{-2\alpha t} M_2(0) + 2D \int_0^t e^{-2\alpha(t-s)} M_0(s) \, ds
           = e^{-2\alpha t} M_2(0) + \frac{D}{\alpha} e^{-2\alpha t} \left( e^{2\alpha t} - 1 \right)
           = \frac{D}{\alpha} + e^{-2\alpha t} \left( M_2(0) - \frac{D}{\alpha} \right).
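The recursion (6.20) is also easy to integrate numerically. The Python sketch below (illustrative parameters and an initial point mass at x = 2, both assumptions) advances the ODE system dM_n/dt = -\alpha n M_n + D n(n-1) M_{n-2} for the first few moments and compares M_2(t) with the closed-form expression above.

import numpy as np

# Sketch: integrate the OU moment hierarchy dM_n/dt = -alpha n M_n + D n (n-1) M_{n-2}.
# Parameters and the initial distribution (a point mass at x = 2) are illustrative.
alpha, D = 1.0, 0.5
n_max, dt, T = 4, 1e-3, 3.0
M = np.array([1.0] + [2.0 ** n for n in range(1, n_max + 1)])  # moments of delta at x = 2

t = 0.0
while t < T:
    dM = np.zeros_like(M)
    for n in range(1, n_max + 1):
        dM[n] = -alpha * n * M[n]
        if n >= 2:
            dM[n] += D * n * (n - 1) * M[n - 2]
    M += dt * dM
    t += dt

M2_exact = D / alpha + np.exp(-2 * alpha * T) * (4.0 - D / alpha)
print("M2 numerical:", M[2], " M2 exact:", M2_exact)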
Consequently, the second moment converges exponentially fast to its stationary value D/\alpha. The stationary moments of the OU process are:
    \langle y^n \rangle_{OU} := \sqrt{\frac{\alpha}{2\pi D}} \int_{\mathbb{R}} y^n e^{-\frac{\alpha y^2}{2D}} \, dy
      = \begin{cases} 1 \cdot 3 \cdots (n-1) \left( \frac{D}{\alpha} \right)^{n/2}, & n \text{ even}, \\ 0, & n \text{ odd}. \end{cases}
It is not hard to check that (see Exercise 3)
    \lim_{t \to \infty} M_n(t) = \langle y^n \rangle_{OU}    (6.21)
exponentially fast.^4 Since we have already shown that the distribution function of the OU process converges to the Gaussian distribution in the limit as t \to +\infty, it is not surprising that the moments also converge to the moments of the invariant Gaussian measure. What is not so obvious is that the convergence is exponentially fast. In the next section we will prove that the Ornstein-Uhlenbeck process does, indeed, converge to equilibrium exponentially fast. Of course, if the initial conditions of the OU process are stationary, then the moments of the OU process become independent of time and are given by their equilibrium values:
    M_n(t) = M_n(0) = \langle x^n \rangle_{OU}.    (6.22)
^4 Of course, we need to assume that the initial distribution has finite moments of all orders in order to justify the above calculations.
6.3.3 The Geometric Brownian Motion
We set a(x) = \mu x, b(x) = \sigma^2 x^2. This is the geometric Brownian motion. The corresponding stochastic differential equation is
    dX_t = \mu X_t \, dt + \sigma X_t \, dW_t.
This equation is one of the basic models in mathematical finance. The coefficient \sigma is called the volatility. The generator of this process is
    L = \mu x \frac{\partial}{\partial x} + \frac{\sigma^2 x^2}{2} \frac{\partial^2}{\partial x^2}.
Notice that this operator is not uniformly elliptic. The Fokker-Planck equation of the geometric Brownian motion is:
    \frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\left( \mu x p \right) + \frac{\partial^2}{\partial x^2}\left( \frac{\sigma^2 x^2}{2} p \right).
We can easily obtain an equation for the nth moment of the geometric Brownian motion:
    \frac{d}{dt} M_n = \left( \mu n + \frac{\sigma^2}{2} n(n-1) \right) M_n, \qquad n \geq 2.
The solution of this equation is
    M_n(t) = e^{\left( \mu + (n-1)\frac{\sigma^2}{2} \right) n t} M_n(0), \qquad n \geq 2,
and
    M_1(t) = e^{\mu t} M_1(0).
Notice that the nth moment might diverge as t \to \infty, depending on the values of \mu and \sigma. Consider for example the second moment and assume that \mu < 0. We have
    M_2(t) = e^{(2\mu + \sigma^2) t} M_2(0),
which diverges when \sigma^2 + 2\mu > 0.
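The moment formula can be checked against a Monte Carlo simulation that uses the exact solution X_t = X_0 \exp((\mu - \sigma^2/2)t + \sigma W_t). A minimal Python sketch (the parameter values are illustrative assumptions):

import numpy as np

# Sketch: compare E[X_t^2] for geometric Brownian motion with the formula
# M_2(t) = exp((2 mu + sigma^2) t) M_2(0). Parameters are illustrative.
mu, sigma, x0, t = -0.3, 1.0, 1.0, 1.0
rng = np.random.default_rng(4)
n_samples = 10 ** 6

W = np.sqrt(t) * rng.standard_normal(n_samples)
X = x0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * W)   # exact GBM solution

mc = np.mean(X ** 2)
exact = np.exp((2 * mu + sigma ** 2) * t) * x0 ** 2
print("Monte Carlo:", mc, " formula:", exact)
# With 2 mu + sigma^2 = 0.4 > 0 the second moment grows even though mu < 0.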
6.4 The Ornstein-Uhlenbeck Process and Hermite Polynomials
The Ornstein-Uhlenbeck process is one of the few stochastic processes for which we can calculate explicitly the solution of the corresponding SDE, the solution of the Fokker-Planck equation, and the eigenfunctions of the generator of the process. In this section we will show that the eigenfunctions of the OU generator are the Hermite polynomials. We will also study various properties of the generator of the OU process. In the next section we will show that many of the properties of the OU process (ergodicity, self-adjointness of the generator, exponentially fast convergence to equilibrium, real discrete spectrum) are shared by a large class of diffusion processes, namely those for which the drift term can be written in terms of the gradient of a smooth function.
The generator of the d-dimensional OU process is (we set the drift coefficient equal to 1)
    L = -p \cdot \nabla_p + \beta^{-1} \Delta_p,    (6.23)
where \beta denotes the inverse temperature. We have already seen that the OU process is an ergodic Markov process whose unique invariant measure is absolutely continuous with respect to the Lebesgue measure on \mathbb{R}^d, with Gaussian density \rho_\beta \in C^\infty(\mathbb{R}^d),
    \rho_\beta(p) = \frac{1}{(2\pi\beta^{-1})^{d/2}} e^{-\frac{\beta |p|^2}{2}}.
The natural function space for studying the generator of the OU process is the L^2-space weighted by the invariant measure of the process. This is a separable Hilbert space with norm
    \|f\|_\rho^2 := \int_{\mathbb{R}^d} f^2 \rho_\beta \, dp
and corresponding inner product
    (f, h)_\rho := \int_{\mathbb{R}^d} f h \, \rho_\beta \, dp.
Similarly, we can define weighted L^2-spaces involving derivatives, i.e. weighted Sobolev spaces; see Exercise 6.
The reason why this is the right function space in which to study questions related to convergence to equilibrium is that the generator of the OU process becomes a self-adjoint operator in this space. In fact, L defined in (6.23) has many nice properties that are summarized in the following proposition.
Proposition 6.4.1. The operator L has the following properties:
i. For every f, h \in C_0^2(\mathbb{R}^d) \cap L^2_{\rho_\beta}(\mathbb{R}^d),
    (Lf, h)_\rho = (f, Lh)_\rho = -\beta^{-1} \int_{\mathbb{R}^d} \nabla f \cdot \nabla h \, \rho_\beta \, dp.    (6.24)
ii. L is a non-positive operator on L^2_{\rho_\beta}.
iii. Lf = 0 iff f \equiv \text{const}.
iv. For every f \in C_0^2(\mathbb{R}^d) \cap L^2_{\rho_\beta}(\mathbb{R}^d) with \int f \rho_\beta \, dp = 0,
    (-Lf, f)_\rho \geq \|f\|_\rho^2.    (6.25)
Proof. Equation (6.24) follows from an integration by parts:
    (Lf, h)_\rho = -\int p \cdot \nabla f \, h \, \rho_\beta \, dp + \beta^{-1} \int \Delta f \, h \, \rho_\beta \, dp
                 = -\int p \cdot \nabla f \, h \, \rho_\beta \, dp - \beta^{-1} \int \nabla f \cdot \nabla h \, \rho_\beta \, dp + \int p \cdot \nabla f \, h \, \rho_\beta \, dp
                 = -\beta^{-1} (\nabla f, \nabla h)_\rho.
Non-positivity of L follows from (6.24) upon setting h = f:
    (Lf, f)_\rho = -\beta^{-1} \|\nabla f\|_\rho^2 \leq 0.
Similarly, multiplying the equation Lf = 0 by f \rho_\beta, integrating over \mathbb{R}^d and using (6.24) gives
    \|\nabla f\|_\rho = 0,
from which we deduce that f \equiv \text{const}. The spectral gap follows from (6.24), together with Poincaré's inequality for Gaussian measures:
    \int_{\mathbb{R}^d} f^2 \rho_\beta \, dp \leq \beta^{-1} \int_{\mathbb{R}^d} |\nabla f|^2 \rho_\beta \, dp    (6.26)
for every f \in H^1(\mathbb{R}^d; \rho_\beta) with \int f \rho_\beta \, dp = 0. Indeed, upon combining (6.24) with (6.26) we obtain:
    (-Lf, f)_\rho = \beta^{-1} \|\nabla f\|_\rho^2 \geq \|f\|_\rho^2.
The spectral gap of the generator of the OU process, which is equivalent to the compactness of its resolvent, implies that L has discrete spectrum. Furthermore, since it is also a self-adjoint operator, its eigenfunctions form a countable orthonormal basis for the separable Hilbert space L^2_{\rho_\beta}. In fact, we can calculate the eigenvalues and eigenfunctions of the generator of the OU process in one dimension.^5
Theorem 6.4.2. Consider the eigenvalue problem for the generator of the OU process in one dimension,
    -L f_n = \lambda_n f_n.    (6.27)
Then the eigenvalues of L are the nonnegative integers:
    \lambda_n = n, \qquad n = 0, 1, 2, \dots
The corresponding eigenfunctions are the normalized Hermite polynomials:
    f_n(p) = \frac{1}{\sqrt{n!}} H_n\left( \sqrt{\beta} \, p \right),    (6.28)
where
    H_n(p) = (-1)^n e^{\frac{p^2}{2}} \frac{d^n}{dp^n} \left( e^{-\frac{p^2}{2}} \right).    (6.29)
^5 The multidimensional problem can be treated similarly by taking tensor products of the eigenfunctions of the one-dimensional problem.
For the subsequent calculations we will need some additional properties of Hermite polynomials, which we state here without proof (for the statement we set \beta = 1).
Proposition 6.4.3. For each \lambda \in \mathbb{C}, set
    H(p; \lambda) = e^{\lambda p - \frac{\lambda^2}{2}}, \qquad p \in \mathbb{R}.
Then
    H(p; \lambda) = \sum_{n=0}^{\infty} \frac{\lambda^n}{n!} H_n(p), \qquad p \in \mathbb{R},    (6.30)
where the convergence is both uniform on compact subsets of \mathbb{R} \times \mathbb{C} and, for \lambda in compact subsets of \mathbb{C}, uniform in L^2(\rho_\beta). In particular, \{ f_n(p) := \frac{1}{\sqrt{n!}} H_n(\sqrt{\beta}\, p) : n \in \mathbb{N} \} is an orthonormal basis of L^2(\mathbb{R}; \rho_\beta).
From (6.29) it is clear that H_n is a polynomial of degree n. Furthermore, only odd (even) powers appear in H_n(p) when n is odd (even), and the coefficient multiplying p^n in H_n(p) is always 1. The orthonormality of the modified Hermite polynomials f_n(p) defined in (6.28) implies that
    \int_{\mathbb{R}} f_n(p) f_m(p) \rho_\beta(p) \, dp = \delta_{nm}.
The first few Hermite polynomials and the corresponding rescaled/normalized eigenfunctions of the generator of the OU process are:
    H_0(p) = 1,                       f_0(p) = 1,
    H_1(p) = p,                       f_1(p) = \sqrt{\beta} \, p,
    H_2(p) = p^2 - 1,                 f_2(p) = \frac{1}{\sqrt{2}}\left( \beta p^2 - 1 \right),
    H_3(p) = p^3 - 3p,                f_3(p) = \frac{1}{\sqrt{6}}\left( \beta^{3/2} p^3 - 3\beta^{1/2} p \right),
    H_4(p) = p^4 - 6p^2 + 3,          f_4(p) = \frac{1}{\sqrt{24}}\left( \beta^2 p^4 - 6\beta p^2 + 3 \right),
    H_5(p) = p^5 - 10p^3 + 15p,       f_5(p) = \frac{1}{\sqrt{120}}\left( \beta^{5/2} p^5 - 10\beta^{3/2} p^3 + 15\beta^{1/2} p \right).
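These eigenvalue relations are easy to verify symbolically. The following sketch (using sympy; a check for illustration only, not part of the original text) builds H_n from the Rodrigues formula (6.29) and confirms that f_n(p) = H_n(\sqrt{\beta} p)/\sqrt{n!} satisfies L f_n = -n f_n for the generator L = -p \, d/dp + \beta^{-1} d^2/dp^2.

import sympy as sp

# Sketch: symbolic check that f_n(p) = H_n(sqrt(beta) p)/sqrt(n!) satisfies
# L f_n = -n f_n with L = -p d/dp + (1/beta) d^2/dp^2. Purely illustrative.
p, x, beta = sp.symbols("p x beta", positive=True)

def hermite(n):
    # Rodrigues formula (6.29): H_n(x) = (-1)^n e^{x^2/2} d^n/dx^n e^{-x^2/2}
    return sp.simplify((-1) ** n * sp.exp(x ** 2 / 2)
                       * sp.diff(sp.exp(-x ** 2 / 2), x, n))

for n in range(6):
    f_n = hermite(n).subs(x, sp.sqrt(beta) * p) / sp.sqrt(sp.factorial(n))
    Lf = -p * sp.diff(f_n, p) + sp.diff(f_n, p, 2) / beta
    print(n, sp.simplify(Lf + n * f_n) == 0)   # prints True for each n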
The proof of Theorem 6.4.2 follows essentially from the properties of the Hermite polynomials. First, notice that by combining (6.28) and (6.30) we obtain
    H(\sqrt{\beta}\, p; \lambda) = \sum_{n=0}^{+\infty} \frac{\lambda^n}{\sqrt{n!}} f_n(p).
We differentiate this formula with respect to p to obtain
    \lambda \sqrt{\beta} \, H(\sqrt{\beta}\, p; \lambda) = \sum_{n=1}^{+\infty} \frac{\lambda^n}{\sqrt{n!}} \partial_p f_n(p),
since f_0 = 1. From this equation we obtain
    H(\sqrt{\beta}\, p; \lambda) = \sum_{n=1}^{+\infty} \frac{\lambda^{n-1}}{\sqrt{\beta}\sqrt{n!}} \partial_p f_n(p) = \sum_{n=0}^{+\infty} \frac{\lambda^n}{\sqrt{\beta}\sqrt{(n+1)!}} \partial_p f_{n+1}(p),
from which we deduce that
    \frac{1}{\sqrt{\beta}} \partial_p f_k = \sqrt{k} \, f_{k-1}.    (6.31)
Similarly, if we differentiate (6.30) with respect to \lambda we obtain
    (p - \lambda) H(p; \lambda) = \sum_{k=0}^{+\infty} \frac{\lambda^k}{k!} p H_k(p) - \sum_{k=1}^{+\infty} \frac{\lambda^k}{(k-1)!} H_{k-1}(p) = \sum_{k=0}^{+\infty} \frac{\lambda^k}{k!} H_{k+1}(p),
from which we obtain the recurrence relation
    p H_k = H_{k+1} + k H_{k-1}.
Upon rescaling, we deduce that
    p f_k = \sqrt{\beta^{-1}(k+1)} \, f_{k+1} + \sqrt{\beta^{-1} k} \, f_{k-1}.    (6.32)
We combine now equations (6.31) and (6.32) to obtain
    \left( \sqrt{\beta}\, p - \frac{1}{\sqrt{\beta}} \partial_p \right) f_k = \sqrt{k+1} \, f_{k+1}.    (6.33)
Now we observe that
    -L f_n = \left( \sqrt{\beta}\, p - \frac{1}{\sqrt{\beta}} \partial_p \right) \frac{1}{\sqrt{\beta}} \partial_p f_n = \left( \sqrt{\beta}\, p - \frac{1}{\sqrt{\beta}} \partial_p \right) \sqrt{n} \, f_{n-1} = n f_n.
The operators \left( \sqrt{\beta}\, p - \frac{1}{\sqrt{\beta}} \partial_p \right) and \frac{1}{\sqrt{\beta}} \partial_p play the role of creation and annihilation operators. In fact, we can generate all eigenfunctions of the OU operator from the ground state f_0 = 1 through repeated application of the creation operator.
Proposition 6.4.4. Set \beta = 1 and let a^- = \partial_p. Then the L^2_\rho-adjoint of a^- is
    a^+ = -\partial_p + p.
The generator of the OU process can be written in the form
    L = -a^+ a^-.
Furthermore, a^+ and a^- satisfy the commutation relation
    [a^-, a^+] = 1.
Define now the creation and annihilation operators on C^1(\mathbb{R}) by
    S^+ = \frac{1}{\sqrt{n+1}} a^+ \quad \text{and} \quad S^- = \frac{1}{\sqrt{n}} a^-.
Then
    S^+ f_n = f_{n+1} \quad \text{and} \quad S^- f_n = f_{n-1}.    (6.34)
In particular,
    f_n = \frac{1}{\sqrt{n!}} (a^+)^n 1    (6.35)
and
    1 = \frac{1}{\sqrt{n!}} (a^-)^n f_n.    (6.36)
Proof. Let f, h \in C^1(\mathbb{R}) \cap L^2_\rho. We calculate
    \int \partial_p f \, h \, \rho \, dp = -\int f \, \partial_p (h \rho) \, dp    (6.37)
                                         = \int f \left( -\partial_p + p \right) h \, \rho \, dp,    (6.38)
which identifies a^+ = -\partial_p + p as the L^2_\rho-adjoint of a^- = \partial_p. Now,
    -a^+ a^- = -(-\partial_p + p)\partial_p = \partial_p^2 - p \partial_p = L.
Similarly,
    a^- a^+ = -\partial_p^2 + p \partial_p + 1,
and therefore
    [a^-, a^+] = a^- a^+ - a^+ a^- = 1.
Formulas (6.34) follow from (6.31) and (6.33). Finally, formulas (6.35) and (6.36) are a consequence of (6.31) and (6.33), together with a simple induction argument.
Notice that upon using (6.35) and (6.36) and the fact that a^+ is the adjoint of a^-, we can easily check the orthonormality of the eigenfunctions:
    \int f_n f_m \, \rho \, dp = \frac{1}{\sqrt{m!}} \int f_n (a^+)^m 1 \, \rho \, dp = \frac{1}{\sqrt{m!}} \int (a^-)^m f_n \, \rho \, dp = \int f_{n-m} \, \rho \, dp = \delta_{nm}.
From the eigenfunctions and eigenvalues of L we can easily obtain the eigenvalues and eigenfunctions of the Fokker-Planck operator L^*.
Lemma 6.4.5. The eigenvalues and eigenfunctions of the Fokker-Planck operator
    L^* \cdot = \partial_p^2 \cdot + \partial_p (p \, \cdot)
are
    \lambda_n = n, \quad n = 0, 1, 2, \dots, \quad \text{and} \quad f_n^* = \rho f_n,
that is, -L^* f_n^* = \lambda_n f_n^*.
Proof. We have
    L^*(f_n \rho) = f_n L^* \rho + \rho L f_n = -n f_n \rho.
An immediate corollary of the above calculation is that the nth eigenfunction of the Fokker-Planck operator is given by
    f_n^* = \rho(p) \frac{1}{\sqrt{n!}} (a^+)^n 1.
6.5 Reversible Diffusions
The stationary Ornstein-Uhlenbeck process is an example of a reversible Markov process:
Definition 6.5.1. A stationary stochastic process X_t is time reversible if for every m \in \mathbb{N} and every t_1, t_2, \dots, t_m \in \mathbb{R}^+, the joint probability distribution is invariant under time reversal:
    p(X_{t_1}, X_{t_2}, \dots, X_{t_m}) = p(X_{-t_1}, X_{-t_2}, \dots, X_{-t_m}).    (6.39)
In this section we study a more general class (in fact, as we will see later, the most general class) of reversible Markov processes, namely stochastic perturbations of ODEs with a gradient structure.
Let V(x) = \frac{1}{2} x^2. The generator of the OU process can be written as:
    L = -\partial_x V \, \partial_x + \beta^{-1} \partial_x^2.
Consider now diffusion processes with a general potential V(x), not necessarily quadratic:
    L = -\nabla V \cdot \nabla + \beta^{-1} \Delta.    (6.40)
In applications of (6.40) to statistical mechanics the diffusion coefficient is \beta^{-1} = k_B T, where k_B is Boltzmann's constant and T the absolute temperature. The corresponding stochastic differential equation is
    dX_t = -\nabla V(X_t) \, dt + \sqrt{2\beta^{-1}} \, dW_t.    (6.41)
Hence, we have a gradient ODE \dot{X}_t = -\nabla V(X_t) perturbed by noise due to thermal fluctuations. The corresponding FP equation is:
    \frac{\partial p}{\partial t} = \nabla \cdot (\nabla V \, p) + \beta^{-1} \Delta p.    (6.42)
It is not possible to calculate the time dependent solution of this equation for an arbitrary potential. We can, however, always calculate the stationary solution, if it exists.
The corresponding FP equation is:
p
t
= (V p) +
1
p. (6.42)
It is not possible to calculate the time dependent solution of this equation for an arbitrary potential.
We can, however, always calculate the stationary solution, if it exists.
Denition 6.5.2. A potential V will be called conning if lim
[x[+
V (x) = +and
e
V (x)
L
1
(R
d
). (6.43)
for all R
+
.
Gradient SDEs in a conning potential are ergodic:
Proposition 6.5.3. Let V (x) be a smooth conning potential. Then the Markov process with gen-
erator (6.40) is ergodic. The unique invariant distribution is the Gibbs distribution
p(x) =
1
Z
e
V (x)
(6.44)
where the normalization factor Z is the partition function
Z =
_
R
d
e
V (x)
dx.
The fact that the Gibbs distribution is an invariant distribution follows by direct substitution. Uniqueness follows from a PDE argument (see the discussion below). It is more convenient to normalize the solution of the Fokker-Planck equation with respect to the invariant distribution.
Theorem 6.5.4. Let p(x, t) be the solution of the Fokker-Planck equation (6.42), assume that (6.43) holds and let \rho(x) be the Gibbs distribution (6.44). Define h(x, t) through
    p(x, t) = h(x, t) \rho(x).
Then the function h satisfies the backward Kolmogorov equation:
    \frac{\partial h}{\partial t} = -\nabla V \cdot \nabla h + \beta^{-1} \Delta h, \qquad h(x, 0) = p(x, 0) \rho^{-1}(x).    (6.45)
Proof. The initial condition follows from the definition of h. We calculate the gradient and Laplacian of p:
    \nabla p = \rho \nabla h - \beta \rho h \nabla V
and
    \Delta p = \rho \Delta h - 2\beta \rho \nabla V \cdot \nabla h - \beta h \rho \Delta V + \beta^2 h \rho |\nabla V|^2.
We substitute these formulas into the FP equation to obtain
    \rho \frac{\partial h}{\partial t} = \rho \left( -\nabla V \cdot \nabla h + \beta^{-1} \Delta h \right),
from which the claim follows.
Consequently, in order to study properties of solutions to the FP equation, it is sufficient to study the backward equation (6.45). The generator L is self-adjoint, in the right function space. We define the weighted L^2 space L^2_\rho:
    L^2_\rho = \left\{ f \;\Big|\; \int_{\mathbb{R}^d} |f|^2 \rho(x) \, dx < \infty \right\},
where \rho(x) is the Gibbs distribution. This is a Hilbert space with inner product
    (f, h)_\rho = \int_{\mathbb{R}^d} f h \, \rho(x) \, dx.
Theorem 6.5.5. Assume that V(x) is a smooth potential and assume that condition (6.43) holds. Then the operator
    L = -\nabla V(x) \cdot \nabla + \beta^{-1} \Delta
is self-adjoint in L^2_\rho. Furthermore, it is non-positive and its kernel consists of constants.
Proof. Let f, h \in C_0^2(\mathbb{R}^d). We calculate
    (Lf, h)_\rho = \int_{\mathbb{R}^d} \left( -\nabla V \cdot \nabla f + \beta^{-1} \Delta f \right) h \, \rho \, dx
                 = -\int_{\mathbb{R}^d} (\nabla V \cdot \nabla f) h \, \rho \, dx - \beta^{-1} \int_{\mathbb{R}^d} \nabla f \cdot \nabla h \, \rho \, dx - \beta^{-1} \int_{\mathbb{R}^d} \nabla f \cdot \nabla \rho \, h \, dx
                 = -\beta^{-1} \int_{\mathbb{R}^d} \nabla f \cdot \nabla h \, \rho \, dx,
where we used \nabla \rho = -\beta \rho \nabla V; self-adjointness follows.
If we set f = h in the above equation we get
    (Lf, f)_\rho = -\beta^{-1} \|\nabla f\|_\rho^2,
which shows that L is non-positive.
Clearly, constants are in the null space of L. Assume that f \in \mathcal{N}(L). Then, from the above equation we get
    0 = -\beta^{-1} \|\nabla f\|_\rho^2,
and, consequently, f is a constant.
Remark 6.5.6. The expression (Lf, f)_\rho is called the Dirichlet form of the operator L. In the case of a gradient flow, it takes the form
    (Lf, f)_\rho = -\beta^{-1} \|\nabla f\|_\rho^2.    (6.46)
Using the properties of the generator L we can show that the solution of the Fokker-Planck equation converges to the Gibbs distribution exponentially fast. For this we need to use the fact that, under appropriate assumptions on the potential V, the Gibbs measure \mu(dx) = Z^{-1} e^{-\beta V(x)} dx satisfies Poincaré's inequality:
Theorem 6.5.7. Assume that the potential V satisfies the convexity condition
    D^2 V \geq \lambda I.
Then the corresponding Gibbs measure satisfies the Poincaré inequality with constant \lambda:
    \int_{\mathbb{R}^d} f \rho \, dx = 0 \quad \Rightarrow \quad \|f\|_\rho \leq \frac{1}{\sqrt{\lambda}} \|\nabla f\|_\rho.    (6.47)
Theorem 6.5.8. Assume that p(x, 0) \in L^2(e^{\beta V}). Then the solution p(x, t) of the Fokker-Planck equation (6.42) converges to the Gibbs distribution exponentially fast:
    \|p(\cdot, t) - Z^{-1} e^{-\beta V}\|_{\rho^{-1}} \leq e^{-\lambda D t} \|p(\cdot, 0) - Z^{-1} e^{-\beta V}\|_{\rho^{-1}}, \qquad D = \beta^{-1}.    (6.48)
Proof. We use (6.45), (6.46) and (6.47) to calculate
    -\frac{d}{dt} \|h - 1\|_\rho^2 = -2 \left( \frac{\partial h}{\partial t}, h - 1 \right)_\rho = -2 (Lh, h - 1)_\rho = -2 (L(h-1), h-1)_\rho = 2D \|\nabla(h - 1)\|_\rho^2 \geq 2D\lambda \|h - 1\|_\rho^2.
Our assumption on p(\cdot, 0) implies that h(\cdot, 0) \in L^2_\rho. Consequently, the above calculation shows that
    \|h(\cdot, t) - 1\|_\rho \leq e^{-\lambda D t} \|h(\cdot, 0) - 1\|_\rho.
This, and the definition of h, p = h\rho, lead to (6.48).
Remark 6.5.9. The assumption
    \int_{\mathbb{R}^d} |p(x, 0)|^2 Z e^{\beta V} \, dx < \infty
is very restrictive (think of the case where V = x^2). The function space L^2(\rho^{-1}) = L^2(e^{\beta V}) in which we prove convergence is not the right space to use. Since p(\cdot, t) \in L^1, ideally we would like to prove exponentially fast convergence in L^1. We can prove convergence in L^1 using the theory of logarithmic Sobolev inequalities. In fact, we can also prove convergence in relative entropy:
    H(p \,|\, \rho_V) := \int_{\mathbb{R}^d} p \ln\left( \frac{p}{\rho_V} \right) dx.
The relative entropy controls the L^1 norm:
    \|\rho_1 - \rho_2\|_{L^1}^2 \leq C H(\rho_1 \,|\, \rho_2).
Using a logarithmic Sobolev inequality, we can prove exponentially fast convergence to equilibrium, assuming only that the relative entropy of the initial conditions is finite.
A much sharper version of the theorem on exponentially fast convergence to equilibrium is the following:
Theorem 6.5.10. Let p denote the solution of the Fokker-Planck equation (6.42), where the potential is smooth and uniformly convex. Assume that the initial conditions satisfy
    H(p(\cdot, 0) \,|\, \rho_V) < \infty.
Then p converges to the Gibbs distribution exponentially fast in relative entropy:
    H(p(\cdot, t) \,|\, \rho_V) \leq e^{-\lambda \beta^{-1} t} H(p(\cdot, 0) \,|\, \rho_V).
Self-adjointness of the generator of a diffusion process is equivalent to time-reversibility.
Theorem 6.5.11. Let X_t be a stationary Markov process in \mathbb{R}^d with generator
    L = b(x) \cdot \nabla + \beta^{-1} \Delta
and invariant measure \mu. Then the following three statements are equivalent.
i. The process is time-reversible.
ii. The generator of the process is symmetric in L^2(\mathbb{R}^d; \mu(dx)).
iii. There exists a scalar function V(x) such that
    b(x) = -\nabla V(x).
6.5.1 Markov Chain Monte Carlo (MCMC)
The Smoluchowski SDE (6.41) has a very interesting application in statistics. Suppose we want to sample from a probability distribution \pi(x). One method for doing this is to generate a dynamics whose invariant distribution is precisely \pi(x). In particular, we consider the Smoluchowski equation
    dX_t = \nabla \ln(\pi(X_t)) \, dt + \sqrt{2} \, dW_t.    (6.49)
Assuming that -\ln(\pi(x)) is a confining potential, X_t is an ergodic Markov process with invariant distribution \pi(x). Furthermore, the law of X_t converges to \pi(x) exponentially fast:
    \|\rho_t - \pi\|_{L^1} \leq e^{-\lambda t} \|\rho_0 - \pi\|_{L^1}.
The exponent \lambda is related to the spectral gap of the generator L = \frac{1}{\pi(x)} \nabla \cdot \left( \pi(x) \nabla \right). This technique for sampling from a given distribution is an example of the Markov Chain Monte Carlo (MCMC) methodology.
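A discretized version of (6.49) is the unadjusted Langevin algorithm. The Python sketch below (the double-well target density, the step size and the run length are all illustrative assumptions) draws samples from \pi(x) \propto \exp(-(x^2 - 1)^2) and compares a histogram of the samples with the normalized target.

import numpy as np

# Sketch: unadjusted Langevin algorithm for (6.49), dX = grad log pi(X) dt + sqrt(2) dW.
# The double-well target and all tuning parameters are illustrative assumptions.
def grad_log_pi(x):
    # pi(x) proportional to exp(-(x^2 - 1)^2)
    return -4.0 * x * (x ** 2 - 1.0)

dt, n_steps, burn_in = 1e-2, 200000, 1000
rng = np.random.default_rng(5)
x, samples = 0.0, []
for k in range(n_steps):
    x += grad_log_pi(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if k >= burn_in:
        samples.append(x)
samples = np.array(samples)

# Compare the histogram of the samples with the normalized target density.
grid = np.linspace(-2.5, 2.5, 501)
target = np.exp(-(grid ** 2 - 1) ** 2)
target /= np.trapz(target, grid)
hist, edges = np.histogram(samples, bins=50, range=(-2.5, 2.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max abs deviation:",
      np.max(np.abs(hist - np.interp(centers, grid, target))))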
6.6 Perturbations of non-Reversible Diffusions
We can add a non-reversible perturbation to a reversible diffusion without changing the invariant distribution Z^{-1} e^{-\beta V}.
Proposition 6.6.1. Let V(x) be a confining potential, \gamma(x) a smooth vector field, and consider the diffusion process
    dX_t = \left( -\nabla V(X_t) + \gamma(X_t) \right) dt + \sqrt{2\beta^{-1}} \, dW_t.    (6.50)
Then the invariant measure of the process X_t is the Gibbs measure \mu(dx) = \frac{1}{Z} e^{-\beta V(x)} dx if and only if \gamma(x) is divergence-free with respect to the density of this measure:
    \nabla \cdot \left( \gamma(x) e^{-\beta V(x)} \right) = 0.    (6.51)
6.7 Eigenfunction Expansions
Consider the generator of a gradient stochastic flow with a uniformly convex potential,
    L = -\nabla V \cdot \nabla + D \Delta.    (6.52)
We know that L is a non-positive self-adjoint operator on L^2_\rho and that it has a spectral gap:
    (Lf, f)_\rho \leq -D\lambda \|f\|_\rho^2,
where \lambda is the Poincaré constant of the potential V (i.e. of the Gibbs measure Z^{-1} e^{-\beta V(x)} dx). The above imply that we can study the spectral problem for -L:
    -L f_n = \lambda_n f_n, \qquad n = 0, 1, \dots
The operator -L has real, discrete spectrum with
    0 = \lambda_0 < \lambda_1 < \lambda_2 < \dots
Furthermore, the eigenfunctions \{f_j\}_{j=0}^{\infty} form an orthonormal basis in L^2_\rho: we can express every element of L^2_\rho in the form of a generalized Fourier series:
    \phi = \sum_{n=0}^{\infty} \phi_n f_n, \qquad \phi_n = (\phi, f_n)_\rho,    (6.53)
with (f_n, f_m)_\rho = \delta_{nm}. This enables us to solve the time dependent Fokker-Planck equation in terms of an eigenfunction expansion. Consider the backward Kolmogorov equation (6.45). We assume that the initial condition h_0(x) = \phi(x) \in L^2_\rho, so that we can expand it in the form (6.53). We look for a solution of (6.45) in the form
    h(x, t) = \sum_{n=0}^{\infty} h_n(t) f_n(x).
We substitute this expansion into the backward Kolmogorov equation:
    \frac{\partial h}{\partial t} = \sum_{n=0}^{\infty} \dot{h}_n f_n = L\left( \sum_{n=0}^{\infty} h_n f_n \right)    (6.54)
                                  = -\sum_{n=0}^{\infty} \lambda_n h_n f_n.    (6.55)
We multiply this equation by f_m, integrate with respect to the Gibbs measure and use the orthonormality of the eigenfunctions to obtain the sequence of equations
    \dot{h}_n = -\lambda_n h_n, \qquad n = 0, 1, \dots
The solution is
    h_0(t) = \phi_0, \qquad h_n(t) = e^{-\lambda_n t} \phi_n, \quad n = 1, 2, \dots
Notice that
    1 = \int_{\mathbb{R}^d} p(x, 0) \, dx = \int_{\mathbb{R}^d} p(x, t) \, dx = \int_{\mathbb{R}^d} h(x, t) Z^{-1} e^{-\beta V} \, dx = (h, 1)_\rho = (\phi, 1)_\rho = \phi_0.
Consequently, the solution of the backward Kolmogorov equation is
    h(x, t) = 1 + \sum_{n=1}^{\infty} e^{-\lambda_n t} \phi_n f_n.
This expansion, together with the fact that all eigenvalues \lambda_n, n \geq 1, are positive, shows that the solution of the backward Kolmogorov equation converges to 1 exponentially fast. The solution of the Fokker-Planck equation is
    p(x, t) = Z^{-1} e^{-\beta V(x)} \left( 1 + \sum_{n=1}^{\infty} e^{-\lambda_n t} \phi_n f_n \right).
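For the OU process (V(x) = x^2/2, D = \beta = 1) the eigenfunctions are the Hermite functions of Section 6.4, so the expansion can be evaluated explicitly. The Python sketch below (the grid, the Gaussian initial condition and the truncation level are illustrative assumptions) reconstructs p(x, t) from the spectral formula and compares it with the exact Gaussian law of the OU process.

import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Sketch: spectral solution p(x,t) = rho(x)[1 + sum_n e^{-n t} phi_n f_n(x)] for the
# OU process with V(x) = x^2/2, beta = D = 1. All numerical choices are illustrative.
x = np.linspace(-6, 6, 2001)
rho = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)          # Gibbs density
m0, s0, t, n_max = 1.5, 0.4, 0.7, 40

p0 = np.exp(-(x - m0) ** 2 / (2 * s0 ** 2)) / np.sqrt(2 * np.pi * s0 ** 2)

def f_n(n, x):
    coeffs = np.zeros(n + 1); coeffs[n] = 1.0
    return hermeval(x, coeffs) / np.sqrt(float(math.factorial(n)))

# phi_n = (p0/rho, f_n)_rho = int p0(x) f_n(x) dx
p_t = np.ones_like(x)
for n in range(1, n_max + 1):
    fn = f_n(n, x)
    phi_n = np.trapz(p0 * fn, x)
    p_t += np.exp(-n * t) * phi_n * fn
p_t *= rho

# Exact law: X_t ~ N(m0 e^{-t}, s0^2 e^{-2t} + 1 - e^{-2t})
mean, var = m0 * np.exp(-t), s0 ** 2 * np.exp(-2 * t) + 1 - np.exp(-2 * t)
p_exact = np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print("max abs error:", np.max(np.abs(p_t - p_exact)))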
6.7.1 Reduction to a Schrödinger Equation
Lemma 6.7.1. The Fokker-Planck operator for a gradient flow can be written in the self-adjoint form
    \frac{\partial p}{\partial t} = D \nabla \cdot \left( e^{-V/D} \nabla \left( e^{V/D} p \right) \right).    (6.56)
Define now \psi(x, t) = e^{V/2D} p(x, t). Then \psi solves the PDE
    \frac{\partial \psi}{\partial t} = D \Delta \psi - U(x) \psi, \qquad U(x) := \frac{|\nabla V|^2}{4D} - \frac{\Delta V}{2}.    (6.57)
Let \mathcal{H} := -D\Delta + U. Then the eigenvalue problem for the Fokker-Planck operator L^* is equivalent to the eigenvalue problem for \mathcal{H}: the two problems have the same eigenvalues (with the convention -L^* \phi_n = \lambda_n \phi_n, \mathcal{H} \psi_n = \lambda_n \psi_n), and the nth eigenfunction \phi_n of L^* and the nth eigenfunction \psi_n of \mathcal{H} are associated through the transformation
    \psi_n(x) = \phi_n(x) \exp\left( \frac{V(x)}{2D} \right).
Remarks 6.7.2. i. Equation (6.56) shows that the FP operator can be written in the form
    L^* \cdot = D \nabla \cdot \left( e^{-V/D} \nabla \left( e^{V/D} \cdot \right) \right).
ii. The operator that appears on the right hand side of eqn. (6.57) has the form of a Schrödinger operator:
    \mathcal{H} = -D\Delta + U(x).
iii. The spectral problem for the FP operator can be transformed into the spectral problem for a Schrödinger operator. We can thus use all the available results from quantum mechanics to study the FP equation and the associated SDE.
iv. In particular, the weak noise asymptotics D \ll 1 is equivalent to the semiclassical approximation from quantum mechanics.
Proof. We calculate
    D \nabla \cdot \left( e^{-V/D} \nabla \left( e^{V/D} f \right) \right) = D \nabla \cdot \left( e^{-V/D} \left( D^{-1} \nabla V f + \nabla f \right) e^{V/D} \right) = \nabla \cdot \left( \nabla V f + D \nabla f \right) = L^* f.
Consider now the eigenvalue problem for the FP operator:
    -L^* \phi_n = \lambda_n \phi_n.
Set \phi_n = \psi_n \exp\left( -\frac{V}{2D} \right). We calculate L^* \phi_n:
    L^* \phi_n = D \nabla \cdot \left( e^{-V/D} \nabla \left( e^{V/D} \psi_n e^{-V/2D} \right) \right)
               = D \nabla \cdot \left( e^{-V/D} \left( \nabla \psi_n + \frac{\nabla V}{2D} \psi_n \right) e^{V/2D} \right)
               = \left( D \Delta \psi_n - \left( \frac{|\nabla V|^2}{4D} - \frac{\Delta V}{2} \right) \psi_n \right) e^{-V/2D}
               = -e^{-V/2D} \mathcal{H} \psi_n.
From this we conclude that e^{-V/2D} \mathcal{H} \psi_n = \lambda_n \psi_n e^{-V/2D}, from which the equivalence between the two eigenvalue problems follows.
Remarks 6.7.3. i. We can rewrite the Schrödinger operator in the factorized form
    \mathcal{H} = D \mathcal{A}^* \mathcal{A}, \qquad \mathcal{A} = \nabla + \frac{\nabla V}{2D}, \qquad \mathcal{A}^* = -\nabla + \frac{\nabla V}{2D}.
ii. These are creation and annihilation operators. They can also be written in the form
    \mathcal{A} \cdot = e^{-V/2D} \nabla \left( e^{V/2D} \cdot \right), \qquad \mathcal{A}^* \cdot = e^{V/2D} \nabla^* \left( e^{-V/2D} \cdot \right).
iii. The forward and backward Kolmogorov operators have the same eigenvalues. Their eigenfunctions are related through
    \phi_n^B = \phi_n^F \exp(V/D),
where \phi_n^B and \phi_n^F denote the eigenfunctions of the backward and forward operators, respectively.
6.8 Discussion and Bibliography
The proof of existence and uniqueness of classical solutions for the Fokker-Planck equation of a uniformly elliptic diffusion process with smooth drift and diffusion coefficients, Theorem 6.2.2, can be found in [30]. A standard textbook on PDEs, with a lot of material on parabolic PDEs, is [22], particularly Chapters 2 and 7.
It is important to emphasize that the condition that solutions to the Fokker-Planck equation do not grow too fast, see Definition 6.2.1, is necessary to ensure uniqueness. In fact, there are infinitely many solutions of
    \frac{\partial p}{\partial t} = \Delta p \quad \text{in } \mathbb{R}^d \times (0, T), \qquad p(x, 0) = 0.
Each of these solutions, besides the trivial solution p = 0, grows very rapidly as x \to +\infty. More details can be found in [44, Ch. 7].
The Fokker-Planck equation is studied extensively in Risken's monograph [82]. See also [35] and [42]. The connection between the Fokker-Planck equation and stochastic differential equations is presented in Chapter 7. See also [1, 31, 32].
Hermite polynomials appear very frequently in applications and they also play a fundamental role in analysis. It is possible to prove that the Hermite polynomials form an orthonormal basis for L^2(\mathbb{R}^d, \rho_\beta) without using the fact that they are the eigenfunctions of a symmetric operator with compact resolvent.^6 The proof of Proposition 6.4.1 can be found in [90], Lemma 2.3.4 in particular.
Diffusion processes in one dimension are studied in [61]. The Feller classification for one-dimensional diffusion processes can also be found in [45, 24].
Convergence to equilibrium for kinetic equations (such as the Fokker-Planck equation), both linear and non-linear (e.g., the Boltzmann equation), has been studied extensively. It has been recognized that the relative entropy and logarithmic Sobolev inequalities play an important role in the analysis of the problem of convergence to equilibrium. For more information see [62].
^6 In fact, Poincaré's inequality for Gaussian measures can be proved using the fact that the Hermite polynomials form an orthonormal basis for L^2(\mathbb{R}^d, \rho_\beta).
6.9 Exercises
1. Solve equation (6.13) by taking the Fourier transform, using the method of characteristics for
rst order PDEs and taking the inverse Fourier transform.
2. Use the formula for the stationary joint probability density of the Ornstein-Uhlenbeck process,
eqn. (6.17) to obtain the stationary autocorrelation function of the OU process.
3. Use (6.20) to obtain formulas for the moments of the OU process. Prove, using these formulas,
that the moments of the OU process converge to their equilibrium values exponentially fast.
4. Show that the autocorrelation function of the stationary Ornstein-Uhlenbeck is
E(X
t
X
0
) =
_
R
_
R
xx
0
p
OU
(x, t[x
0
, 0)p
s
(x
0
) dxdx
0
=
D
2
e
[t[
,
6
In fact, Poincar es inequality for Gaussian measures can be proved using the fact that that the Hermite polynomials
form an orthonormal basis for L
2
(R
d
,

).
116
where p
s
(x) denotes the invariant Gaussian distribution.
5. Let X
t
be a one-dimensional diffusion process with drift and diffusion coefcients a(y, t) =
a
0
a
1
y and b(y, t) = b
0
+ b
1
y + b
2
y
2
where a
i
, b
i
0, i = 0, 1, 2.
(a) Write down the generator and the forward and backward Kolmogorov equations for X
t
.
(b) Assume that X
0
is a random variable with probability density
0
(x) that has nite mo-
ments. Use the forward Kolmogorov equation to derive a system of differential equations
for the moments of X
t
.
(c) Find the rst three moments M
0
, M
1
, M
2
in terms of the moments of the initial distribu-
tion
0
(x).
(d) Under what conditions on the coefcients a
i
, b
i
0, i = 0, 1, 2 is M
2
nite for all times?
6. Let V be a conning potential in R
d
, > 0 and let

(x) = Z
1
e
V (x)
. Give the denition of
the Sobolev space H
k
(R
d
;

) for k a positive integer and study some of its basic properties.


7. Let X
t
be a multidimensional diffusion process on [0, 1]
d
with periodic boundary conditions.
The drift vector is a periodic function a(x) and the diffusion matrix is 2DI, where D > 0 and
I is the identity matrix.
(a) Write down the generator and the forward and backward Kolmogorov equations for X
t
.
(b) Assume that a(x) is divergence-free ( a(x) = 0). Show that X
t
is ergodic and nd the
invariant distribution.
(c) Show that the probability density p(x, t) (the solution of the forward Kolmogorov equa-
tion) converges to the invariant distribution exponentially fast in L
2
([0, 1]
d
). (Hint: Use
Poincar es inequality on [0, 1]
d
).
8. The Rayleigh process X
t
is a diffusion process that takes values on (0, +) with drift and
diffusion coefcients a(x) = ax +
D
x
and b(x) = 2D, respectively, where a, D > 0.
(a) Write down the generator the forward and backward Kolmogorov equations for X
t
.
(b) Show that this process is ergodic and nd its invariant distribution.
(c) Solve the forward Kolmogorov (Fokker-Planck) equation using separation of variables.
(Hint: Use Laguerre polynomials).
117
9. Let x(t) = x(t), y(t) be the two-dimensional diffusion process on [0, 2]
2
with periodic
boundary conditions with drift vector a(x, y) = (sin(y), sin(x)) and diffusion matrix b(x, y)
with b
11
= b
22
= 1, b
12
= b
21
= 0.
(a) Write down the generator of the process x(t), y(t) and the forward and backward Kol-
mogorov equations.
(b) Show that the constant function

s
(x, y) = C
is the unique stationary distribution of the process x(t), y(t) and calculate the normal-
ization constant.
(c) Let E denote the expectation with respect to the invariant distribution
s
(x, y). Calculate
E
_
cos(x) + cos(y)
_
and E(sin(x) sin(y)).
10. Let a, D be positive constants and let X(t) be the diffusion process on [0, 1] with periodic
boundary conditions and with drift and diffusion coefcients a(x) = a and b(x) = 2D, respec-
tively. Assume that the process starts at x
0
, X(0) = x
0
.
(a) Write down the generator of the process X(t) and the forward and backward Kolmogorov
equations.
(b) Solve the initial/boundary value problem for the forward Kolmogorov equation to calcu-
late the transition probability density p(x, t[x
0
, 0).
(c) Show that the process is ergodic and calculate the invariant distribution p
s
(x).
(d) Calculate the stationary autocorrelation function
E(X(t)X(0)) =
_
1
0
_
1
0
xx
0
p(x, t[x
0
, 0)p
s
(x
0
) dxdx
0
.
118
Chapter 7
Stochastic Differential Equations
7.1 Introduction
In this part of the course we will study stochastic differential equation (SDEs): ODEs driven by
Gaussian white noise.
Let W(t) denote a standard mdimensional Brownian motion, h : Z R
d
a smooth vector-
valued function and : Z R
dm
a smooth matrix valued function (in this course we will take
Z = T
d
, R
d
or R
l
T
dl
. Consider the SDE
dz
dt
= h(z) + (z)
dW
dt
, z(0) = z
0
. (7.1)
We think of the term
dW
dt
as representing Gaussian white noise: a mean-zero Gaussian process
with correlation (t s)I. The function h in (7.1) is sometimes referred to as the drift and as
the diffusion coefcient. Such a process exists only as a distribution. The precise interpretation
of (7.1) is as an integral equation for z(t) C(R
+
, Z):
z(t) = z
0
+
_
t
0
h(z(s))ds +
_
t
0
(z(s))dW(s). (7.2)
In order to make sense of this equation we need to dene the stochastic integral against W(s).
7.2 The It o and Stratonovich Stochastic Integral
For the rigorous analysis of stochastic differential equations it is necessary to dene stochastic
integrals of the form
I(t) =
_
t
0
f(s) dW(s), (7.3)
119
where W(t) is a standard one dimensional Brownian motion. This is not straightforward because
W(t) does not have bounded variation. In order to dene the stochastic integral we assume that
f(t) is a random process, adapted to the ltration T
t
generated by the process W(t), and such that
E
__
T
0
f(s)
2
ds
_
< .
The It o stochastic integral I(t) is dened as the L
2
limit of the Riemann sum approximation of
(7.3):
I(t) := lim
K
K1

k=1
f(t
k1
) (W(t
k
) W(t
k1
)) , (7.4)
where t
k
= kt and Kt = t. Notice that the function f(t) is evaluated at the left end of each
interval [t
n1
, t
n
] in (7.4). The resulting It o stochastic integral I(t) is a.s. continuous in t. These
ideas are readily generalized to the case where W(s) is a standard d dimensional Brownian motion
and f(s) R
md
for each s.
The resulting integral satises the It o isometry
E[I(t)[
2
=
_
t
0
E[f(s)[
2
F
ds, (7.5)
where [ [
F
denotes the Frobenius norm [A[
F
=
_
tr(A
T
A). The It o stochastic integral is a
martingale:
EI(t) = 0
and
E[I(t)[T
s
] = I(s) t s,
where T
s
denotes the ltration generated by W(s).
Example 7.2.1. Consider the It o stochastic integral
I(t) =
_
t
0
f(s) dW(s),
where f, W are scalarvalued. This is a martingale with quadratic variation
I)
t
=
_
t
0
(f(s))
2
ds.
More generally, for f, W in arbitrary nite dimensions, the integral I(t) is a martingale
with quadratic variation
I)
t
=
_
t
0
(f(s) f(s)) ds.
120
7.2.1 The Stratonovich Stochastic Integral
In addition to the It o stochastic integral, we can also dene the Stratonovich stochastic integral. It
is dened as the L
2
limit of a different Riemann sum approximation of (7.3), namely
I
strat
(t) := lim
K
K1

k=1
1
2
_
f(t
k1
) + f(t
k
)
_
(W(t
k
) W(t
k1
)) , (7.6)
where t
k
= kt and Kt = t. Notice that the function f(t) is evaluated at both endpoints of each
interval [t
n1
, t
n
] in (7.6). The multidimensional Stratonovich integral is dened in a similar way.
The resulting integral is written as
I
strat
(t) =
_
t
0
f(s) dW(s).
The limit in (7.6) gives rise to an integral which differs from the It o integral. The situation is
more complex than that arising in the standard theory of Riemann integration for functions of
bounded variation: in that case the points in [t
k1
, t
k
] where the integrand is evaluated do not effect
the denition of the integral, via a limiting process. In the case of integration against Brownian
motion, which does not have bounded variation, the limits differ. When f and W are correlated
through an SDE, then a formula exists to convert between them.
7.3 Stochastic Differential Equations
Denition 7.3.1. By a solution of (7.1) we mean a Z-valued stochastic process z(t) on t [0, T]
with the properties:
i. z(t) is continuous and T
t
adapted, where the ltration is generated by the Brownian motion
W(t);
ii. h(z(t)) L
1
((0, T)), (z(t)) L
2
((0, T));
iii. equation (7.1) holds for every t [0, T] with probability 1.
The solution is called unique if any two solutions x
i
(t), i = 1, 2 satisfy
P(x
1
(t) = x
2
(t), t [0.T]) = 1.
121
It is well known that existence and uniqueness of solutions for ODEs (i.e. when 0 in (7.1))
holds for globally Lipschitz vector elds h(x). A very similar theorem holds when ,= 0. As for
ODEs the conditions can be weakened, when a priori bounds on the solution can be found.
Theorem 7.3.2. Assume that both h() and () are globally Lipschitz on Z and that z
0
is a random
variable independent of the Brownian motion W(t) with
E[z
0
[
2
< .
Then the SDE (7.1) has a unique solution z(t) C(R
+
; Z) with
E
__
T
0
[z(t)[
2
dt
_
< T < .
Furthermore, the solution of the SDE is a Markov process.
The Stratonovich analogue of (7.1) is
dz
dt
= h(z) + (z)
dW
dt
, z(0) = z
0
. (7.7)
By this we mean that z C(R
+
, Z) satises the integral equation
z(t) = z(0) +
_
t
0
h(z(s))ds +
_
t
0
(z(s)) dW(s). (7.8)
By using denitions (7.4) and (7.6) it can be shown that z satisfying the Stratonovich SDE (7.7)
also satises the It o SDE
dz
dt
= h(z) +
1
2

_
(z)(z)
T
_

1
2
(z)
_
(z)
T
_
+ (z)
dW
dt
, (7.9a)
z(0) = z
0
, (7.9b)
provided that (z) is differentiable. White noise is, in most applications, an idealization of a sta-
tionary random process with short correlation time. In this context the Stratonovich interpretation
of an SDE is particularly important because it often arises as the limit obtained by using smooth
approximations to white noise. On the other hand the martingale machinery which comes with
the It o integral makes it more important as a mathematical object. It is very useful that we can
convert from the It o to the Stratonovich interpretation of the stochastic integral. There are other
interpretations of the stochastic integral, e.g. the Klimontovich stochastic integral.
122
The Denition of Brownian motion implies the scaling property
W(ct) =

cW(t),
where the above should be interpreted as holding in law. From this it follows that, if s = ct, then
dW
ds
=
1

c
dW
dt
,
again in law. Hence, if we scale time to s = ct in (7.1), then we get the equation
dz
ds
=
1
c
h(z) +
1

c
(z)
dW
ds
, z(0) = z
0
.
7.3.1 Examples of SDEs
The SDE for Brownian motion is:
dX =

2dW, X(0) = x.
The Solution is:
X(t) = x + W(t).
The SDE for the Ornstein-Uhlenbeck process is
dX = X dt +

2dW, X(0) = x.
We can solve this equation using the variation of constants formula:
X(t) = e
t
x +

2
_
t
0
e
(ts)
dW(s).
We can use It os formula to obtain equations for the moments of the OU process. The generator is:
L = x
x
+
2
x
.
We apply It os formula to the function f(x) = x
n
to obtain:
dX(t)
n
= LX(t)
n
dt +

2X(t)
n
dW
= nX(t)
n
dt + n(n 1)X(t)
n2
dt + n

2X(t)
n1
dW.
123
Consequently:
X(t)
n
= x
n
+
_
t
0
_
nX(t)
n
+ n(n 1)X(t)
n2
_
dt
+n

2
_
t
0
X(t)
n1
dW.
By taking the expectation in the above equation we obtain the equation for the moments of the OU
process that we derived earlier using the Fokker-Planck equation:
M
n
(t) = x
n
+
_
t
0
(nM
n
(s) + n(n 1)M
n2
(s)) ds.
Consider the geometric Brownian motion
dX(t) = X(t) dt + X(t) dW(t), (7.10)
where we use the It o interpretation of the stochastic differential. The generator of this process is
L = x
x
+

2
x
2
2

2
x
.
The solution to this equation is
X(t) = X(0) exp
_
(

2
2
)t + W(t)
_
. (7.11)
To derive this formula, we apply It os formula to the function f(x) = log(x):
d log(X(t)) = L
_
log(X(t))
_
dt + x
x
log(X(t)) dW(t)
=
_
x
1
x
+

2
x
2
2
_

1
x
2
__
dt + dW(t)
=
_


2
2
_
dt + dW(t).
Consequently:
log
_
X(t)
X(0)
_
=
_


2
2
_
t + W(t)
from which (7.11) follows. Notice that the Stratonovich interpretation of this equation leads to the
solution
X(t) = X(0) exp(t + W(t))
124
7.4 The Generator, It os formula and the Fokker-Planck Equa-
tion
7.4.1 The Generator
Given the function (z) in the SDE (7.1) we dene
(z) = (z)(z)
T
. (7.12)
The generator L is then dened as
Lv = h v +
1
2
: v. (7.13)
This operator, equipped with a suitable domain of denition, is the generator of the Markov process
given by (7.1). The formal L
2
adjoint operator L

v = (hv) +
1
2
(v).
7.4.2 It os Formula
The It o formula enables us to calculate the rate of change in time of functions V : Z R
n
evaluated at the solution of a Z-valued SDE. Formally, we can write:
d
dt
_
V (z(t))
_
= LV (z(t)) +
_
V (z(t)), (z(t))
dW
dt
_
.
Note that if W were a smooth time-dependent function this formula would not be correct: there is
an additional term in LV , proportional to , which arises from the lack of smoothness of Brownian
motion. The precise interpretation of the expression for the rate of change of V is in integrated
form:
Lemma 7.4.1. (It os Formula) Assume that the conditions of Theorem 7.3.2 hold. Let x(t) solve
(7.1) and let V C
2
(Z, R
n
). Then the process V (z(t)) satises
V (z(t)) = V (z(0)) +
_
t
0
LV (z(s))ds +
_
t
0
V (z(s)), (z(s)) dW(s)) .
Let : Z R and consider the function
v(z, t) = E
_
(z(t))[z(0) = z
_
, (7.14)
125
where the expectation is with respect to all Brownian driving paths. By averaging in the It o for-
mula, which removes the stochastic integral, and using the Markov property, it is possible to obtain
the Backward Kolmogorov equation.
Theorem 7.4.2. Assume that is chosen sufciently smooth so that the backward Kolmogorov
equation
v
t
= Lv for (z, t) Z (0, ),
v = for (z, t) Z 0 , (7.15)
has a unique classical solution v(x, t) C
2,1
(Z (0, ), ). Then v is given by (7.14) where z(t)
solves (7.2).
For a Stratonovich SDE the rules of standard calculus apply: Consider the Stratonovich SDE (7.29)
and let V (x) C
2
(R). Then
dV (X(t)) =
dV
dx
(X(t)) (f(X(t)) dt + (X(t)) dW(t)) .
Consider the Stratonovich SDE (7.29) on R
d
(i.e. f R
d
, : R
n
R
d
, W(t) is standard
Brownian motion on R
n
). The corresponding Fokker-Planck equation is:

t
= (f) +
1
2
( ())). (7.16)
Now we can derive rigorously the Fokker-Planck equation.
Theorem 7.4.3. Consider equation (7.2) with z(0) a random variable with density
0
(z). Assume
that the law of z(t) has a density (z, t) C
2,1
(Z (0, )). Then satises the Fokker-Planck
equation

t
= L

for (z, t) Z (0, ), (7.17a)


=
0
for z Z 0. (7.17b)
Proof. Let E

denote averaging with respect to the product measure induced by the measure
with density
0
on z(0) and the independent driving Wiener measure on the SDE itself. Averaging
126
over random z(0) distributed with density
0
(z), we nd
E

((z(t))) =
_
?
v(z, t)
0
(z) dz
=
_
?
(e
/t
)(z)
0
(z) dz
=
_
?
(e
/

0
)(z)(z) dz.
But since (z, t) is the density of z(t) we also have
E

((z(t))) =
_
?
(z, t)(z)dz.
Equating these two expressions for the expectation at time t we obtain
_
?
(e
/

0
)(z)(z) dz =
_
?
(z, t)(z) dz.
We use a density argument so that the identity can be extended to all L
2
(Z). Hence, from the
above equation we deduce that
(z, t) =
_
e
/

0
_
(z).
Differentiation of the above equation gives (7.17a). Setting t = 0 gives the initial condition (7.17b).
7.5 Linear SDEs
In this section we study linear SDEs in arbitrary nite dimensions. Let A R
nn
be a positive
denite matrix and let D > 0 be a positive constant. We will consider the SDE
dX(t) = AX(t) dt +

2DdW(t)
or, componentwise,
dX
i
(t) =
d

j=1
A
ij
X
j
(t) +

2DdW
i
(t), i = 1, . . . d.
The corresponding Fokker-Planck equation is
p
t
= (Axp) + Dp
127
or
p
t
=
d

i,j

x
i
(A
ij
x
j
p) + D
d

j=1

2
p
x
2
j
.
Let us now solve the Fokker-Planck equation with initial conditions p(x, t[x
0
, 0) = (xx
0
). We
take the Fourier transform of the Fokker-Planck equation to obtain
p
t
= Ak
k
p D[k[
2
p (7.18)
with
p(x, t[x
0
, 0) = (2)
d
_
R
d
e
ikx
p(k, t[x
0
, t) dk.
The initial condition is
p(k, 0[x
0
, 0) = e
ikx
0
(7.19)
We know that the transition probability density of a linear SDE is Gaussian. Since the Fourier
transform of a Gaussian function is also Gaussian, we look for a solution to (7.18) which is of the
form
p(k, t[x
0
, 0) = exp(ik M(t)
1
2
k
T
(t)k).
We substitute this into (7.18) and use the symmetry of A to obtain the equations
dM
dt
= AM and
d
dt
= 2A + 2DI,
with initial conditions (which follow from (11.5)) M(0) = x
0
and (0) = 0 where 0 denotes the
zero d d matrix. We can solve these equations using the spectral resolution of A = B
T
B. The
solutions are
M(t) = e
At
M(0)
and
(t) = DA
1
DA
1
e
2At
.
We calculate now the inverse Fourier transform of p to obtain the fundamental solution (Greens
function) of the Fokker-Planck equation
p(x, t[x
0
, 0) = (2)
d/2
(det((t)))
1/2
exp
_

1
2
_
x e
At
x
0
_
T

1
(t)
_
x e
At
x
0
_
_
.
(7.20)
128
We note that generator of the Markov processes X
t
is of the form
L = V (x) + D
with V (x) =
1
2
x
T
Ax =
1
2

d
i,j=1
A
ij
x
i
x
j
. This is a conning potential and from the theory
presented in Section 6.5 we know that the process X
t
is ergodic. The invariant distribution is
p
s
(x) =
1
Z
e

1
2
x
T
Ax
(7.21)
with Z =
_
R
d
e

1
2
x
T
Ax
dx = (2)
d
2
_
det(A
1
). Using the above calculations, we can calculate the
stationary autocorrelation matrix is given by the formula
E(X
T
0
X
t
) =
_ _
x
T
0
xp(x, t[x
0
, 0)p
s
(x
0
) dxdx
0
.
We substitute the formulas for the transitions probability density and the stationary distribution,
equations (7.21) and (7.20) into the above equations and do the Gaussian integration to obtain
E(X
T
0
X
t
) = DA
1
e
At
.
We use now the the variation of constants formula to obtain
X
t
= e
At
X
0
+

2D
_
t
0
e
A(ts)
dW(s).
The matrix exponential can be calculated using the spectral resolution of A:
e
At
= B
T
e
t
B.
7.6 Derivation of the Stratonovich SDE
When white noise is approximated by a smooth process this often leads to Stratonovich inter-
pretations of stochastic integrals, at least in one dimension. We use multiscale analysis (singular
perturbation theory for Markov processes) to illustrate this phenomenon in a one-dimensional ex-
ample.
Consider the equations
dx
dt
= h(x) +
1

f(x)y, (7.22a)
dy
dt
=
y

2
+
_
2D

2
dV
dt
, (7.22b)
129
with V being a standard one-dimensional Brownian motion. We say that the process x(t) is driven
by colored noise: the noise that appears in (7.22a) has non-zero correlation time. The correlation
function of the colored noise (t) := y(t)/ is (we take y(0) = 0)
R(t) = E((t)(s)) =
1

2
D

2
[ts[
.
The power spectrum of the colored noise (t) is:
f

(x) =
1

2
D
2

1
x
2
+ (
2
)
2
=
D

4
x
2
+
2

D

2
and, consequently,
lim
0
E
_
y(t)

y(s)

_
=
2D

2
(t s),
which implies the heuristic
lim
0
y(t)

=
_
2D

2
dV
dt
. (7.23)
Another way of seeing this is by solving (7.22b) for y/:
y

=
_
2D

2
dV
dt

dy
dt
. (7.24)
If we neglect the O() term on the right hand side then we arrive, again, at the heuristic (7.23).
Both of these arguments lead us to conjecture the limiting It o SDE:
dX
dt
= h(X) +
_
2D

f(X)
dV
dt
. (7.25)
In fact, as applied, the heuristic gives the incorrect limit. Whenever white noise is approximated
by a smooth process, the limiting equation should be interpreted in the Stratonovich sense, giving
dX
dt
= h(X) +
_
2D

f(X)
dV
dt
. (7.26)
This is usually called the Wong-Zakai theorem. A similar result is true in arbitrary nite and even
innite dimensions. We will show this using singular perturbation theory.
Theorem 7.6.1. Assume that the initial conditions for y(t) are stationary and that the function f
is smooth. Then the solution of eqn (7.22a) converges, in the limit as 0 to the solution of the
Stratonovich SDE (7.26).
130
Remarks 7.6.2. i. It is possible to prove pathwise convergence under very mild assumptions.
ii. The generator of a Stratonovich SDE has the from
L
strat
= h(x)
x
+
D

f(x)
x
(f(x)
x
) .
iii. Consequently, the Fokker-Planck operator of the Stratonovich SDE can be written in diver-
gence form:
L

strat
=
x
(h(x)) +
D

x
_
f
2
(x)
x

_
.
iv. In most applications in physics the white noise is an approximation of a more complicated
noise processes with non-zero correlation time. Hence, the physically correct interpretation
of the stochastic integral is the Stratonovich one.
v. In higher dimensions an additional drift term might appear due to the noncommutativity of
the row vectors of the diffusion matrix. This is related to the L evy area correction in the
theory of rough paths.
Proof of Proposition 7.6.1 The generator of the process (x(t), y(t)) is
L =
1

2
_
y
y
+ D
2
y
_
+
1

f(x)y
x
+ h(x)
x
=:
1

2
L
0
+
1

L
1
+L
2
.
The fast process is an stationary Markov process with invariant density
(y) =
_

2D
e

y
2
2D
. (7.27)
The backward Kolmogorov equation is
u

t
=
_
1

2
L
0
+
1

L
1
+L
2
_
u

. (7.28)
We look for a solution to this equation in the form of a power series expansion in :
u

(x, y, t) = u
0
+ u
1
+
2
u
2
+ . . .
131
We substitute this into (7.28) and equate terms of the same power in to obtain the following
hierarchy of equations:
L
0
u
0
= 0,
L
0
u
1
= L
1
u
0
,
L
0
u
2
= L
1
u
1
+L
2
u
0

u
0
t
.
The ergodicity of the fast process implies that the null space of the generator L
0
consists only of
constant in y. Hence:
u
0
= u(x, t).
The second equation in the hierarchy becomes
L
0
u
1
= f(x)y
x
u.
This equation is solvable since the right hand side is orthogonal to the null space of the adjoint of
L
0
(this is the Fredholm alterantive). We solve it using separation of variables:
u
1
(x, y, t) =
1

f(x)
x
uy +
1
(x, t).
In order for the third equation to have a solution we need to require that the right hand side is
orthogonal to the null space of L

0
:
_
R
_
L
1
u
1
+L
2
u
0

u
0
t
_
(y) dy = 0.
We calculate:
_
R
u
0
t
(y) dy =
u
t
.
Furthermore:
_
R
L
2
u
0
(y) dy = h(x)
x
u.
Finally
_
R
L
1
u
1
(y) dy =
_
R
f(x)y
x
_
1

f(x)
x
uy +
1
(x, t)
_
(y) dy
=
1

f(x)
x
(f(x)
x
u) y
2
) + f(x)
x

1
(x, t)y)
=
D

2
f(x)
x
(f(x)
x
u)
=
D

2
f(x)
x
f(x)
x
u +
D

2
f(x)
2

2
x
u.
132
Putting everything together we obtain the limiting backward Kolmogorov equation
u
t
=
_
h(x) +
D

2
f(x)
x
f(x)
_

x
u +
D

2
f(x)
2

2
x
u,
from which we read off the limiting Stratonovich SDE
dX
dt
= h(X) +
_
2D

f(X)
dV
dt
.
7.6.1 It o versus Stratonovich
A Stratonovich SDE
dX(t) = f(X(t)) dt + (X(t)) dW(t) (7.29)
can be written as an It o SDE
dX(t) =
_
f(X(t)) +
1
2
_

d
dx
_
(X(t))
_
dt + (X(t)) dW(t).
Conversely, and It o SDE
dX(t) = f(X(t)) dt + (X(t))dW(t) (7.30)
can be written as a Statonovich SDE
dX(t) =
_
f(X(t))
1
2
_

d
dx
_
(X(t))
_
dt + (X(t)) dW(t).
The It o and Stratonovich interpretation of an SDE can lead to equations with very different prop-
erties!
When the diffusion coefcient depends on the solution of the SDE X(t), we will say that we
have an equation with multiplicative noise .
7.7 Numerical Solution of SDEs
7.8 Parameter Estimation for SDEs
7.9 Noise Induced Transitions
Consider the Landau equation:
dX
t
dt
= X
t
(c X
2
t
), X
0
= x. (7.31)
133
This is a gradient ow for the potential V (x) =
1
2
cx
2

1
4
x
4
. When c < 0 all solutions are attracted
to the single steady state X

= 0. When c > 0 the steady state X

= 0 becomes unstable and


X
t


c if x > 0 and X
t

c if x < 0. Consider additive random perturbations to the


Landau equation:
dX
t
dt
= X
t
(c X
2
t
) +

2
dW
t
dt
, X
0
= x. (7.32)
This equation denes an ergodic Markov process on R: There exists a unique invariant distribution:
(x) = Z
1
e
V (x)/
, Z =
_
R
e
V (x)/
dx, V (x) =
1
2
cx
2

1
4
x
4
.
(x) is a probability density for all values of c R. The presence of additive noise in some
sense trivializes the dynamics. The dependence of various averaged quantities on c resembles
the physical situation of a second order phase transition.
Consider now multiplicative perturbations of the Landau equation.
dX
t
dt
= X
t
(c X
2
t
) +

2X
t
dW
t
dt
, X
0
= x. (7.33)
Where the stochastic differential is interpreted in the It o sense. The generator of this process is
L = x(c x
2
)
x
+ x
2

2
x
.
Notice that X
t
= 0 is always a solution of (7.33). Thus, if we start with x > 0 (x < 0) the solution
will remain positive (negative). We will assume that x > 0.
Consider the function Y
t
= log(X
t
). We apply It os formula to this function:
dY
t
= Llog(X
t
) dt + X
t

x
log(X
t
) dW
t
=
_
X
t
(c X
2
t
)
1
X
t
X
2
t
1
X
2
t
_
dt + X
t
1
X
t
dW
t
= (c ) dt X
2
t
dt + dW
t
.
Thus, we have been able to transform (7.33) into an SDE with additive noise:
dY
t
=
_
(c ) e
2Y
t
_
dt + dW
t
. (7.34)
This is a gradient ow with potential
V (y) =
_
(c )y
1
2
e
2y
_
.
134
The invariant measure, if it exists, is of the form
(y) dy = Z
1
e
V (y)/
dy.
Going back to the variable x we obtain:
(x) dx = Z
1
x
(c/2)
e

x
2
2
dx.
We need to make sure that this distribution is integrable:
Z =
_
+
0
x

x
2
2
< , =
c

2.
For this it is necessary that
> 1 c > .
Not all multiplicative random perturbations lead to ergodic behavior. The dependence of the in-
variant distribution on c is similar to the physical situation of rst order phase transitions.
7.10 Discussion and Bibliography
Colored Noise When the noise which drives an SDE has non-zero correlation time we will say
that we have colored noise. The properties of the SDE (stability, ergodicity etc.) are quite robust
under coloring of the noise. See
G. Blankenship and G.C. Papanicolaou, Stability and control of stochastic systems with wide-
band noise disturbances. I, SIAM J. Appl. Math., 34(3), 1978, pp. 437476. Colored noise
appears in many applications in physics and chemistry. For a review see P. Hanggi and P. Jung
Colored noise in dynamical systems. Adv. Chem. Phys. 89 239 (1995).
In the case where there is an additional small time scale in the problem, in addition to the
correlation time of the colored noise, it is not clear what the right interpretation of the stochastic
integral (in the limit as both small time scales go to 0). This is usually called the It o versus
Stratonovich problem. Consider, for example, the SDE


X =

X + v(X)

(t),
where

(t) is colored noise with correlation time


2
. In the limit where both small time scales go
to 0 we can get either It o or Stratonovich or neither. See [51, 71].
Noise induced transitions are studied extensively in [42]. The material in Section 7.9 is based
on [59]. See also [58].
135
7.11 Exercises
1. Calculate all moments of the geometric Brownian motion for the It o and Stratonovich interpre-
tations of the stochastic integral.
2. Study additive and multiplicative random perturbations of the ODE
dx
dt
= x(c + 2x
2
x
4
).
3. Analyze equation (7.33) for the Stratonovich interpretation of the stochastic integral.
136
Chapter 8
The Langevin Equation
8.1 Introduction
8.2 The Fokker-Planck Equation in Phase Space (Klein-Kramers
Equation)
Consider a diffusion process in two dimensions for the variables q (position) and momentum p.
The generator of this Markov process is
L = p
q

q
V
p
+ (p
p
+ D
p
). (8.1)
The L
2
(dpdq)-adjoint is
L

= p
q

q
V
p
+ (
p
(p) + D
p
) .
The corresponding FP equation is:
p
t
= L

p.
The corresponding stochastic differential equations is the Langevin equation

X
t
= V (X
t
)

X
t
+
_
2D

W
t
. (8.2)
This is Newtons equation perturbed by dissipation and noise. The Fokker-Planck equation for the
Langevin equation, which is sometimes called the Klein-Kramers-Chandrasekhar equation was
rst derived by Kramers in 1923 and was studied by Kramers in his famous paper [?]. Notice that
L

is not a uniformly elliptic operator: there are second order derivatives only with respect to p
and not q. This is an example of a degenerate elliptic operator. It is, however, hypoelliptic. We can
137
still prove existence, uniqueness and regularity of solutions for the Fokker-Planck equation, and
obtain estimates on the solution. It is not possible to obtain the solution of the FP equation for an
arbitrary potential. We can, however, calculate the (unique normalized) solution of the stationary
Fokker-Planck equation.
Theorem 8.2.1. Let V (x) be a smooth conning potential. Then the Markov process with genera-
tor (8.45) is ergodic. The unique invariant distribution is the Maxwell-Boltzmann distribution
(p, q) =
1
Z
e
H(p,q)
(8.3)
where
H(p, q) =
1
2
|p|
2
+ V (q)
is the Hamiltonian, = (k
B
T)
1
is the inverse temperature and the normalization factor Z is
the partition function
Z =
_
R
2d
e
H(p,q)
dpdq.
It is possible to obtain rates of convergence in either a weighted L
2
-norm or the relative entropy
norm.
H(p(, t)[) Ce
t
.
The proof of this result is very complicated, since the generator Lis degenerate and non-selfadjoint.
See for example and the references therein.
Let (q, p, t) be the solution of the Kramers equation and let

(q, p) be the Maxwell-Boltzmann


distribution. We can write
(q, p, t) = h(q, p, t)

(q, p),
where h(q, p, t) solves the equation
h
t
= /h + Sh (8.4)
where
/ = p
q

q
V
p
, o = p
p
+
1

p
.
The operator / is antisymmetric in L
2

:= L
2
(R
2d
;

(q, p)), whereas o is symmetric.


Let X
i
:=

p
i
. The L
2

-adjoint of X
i
is
X

i
= p
i
+

p
i
.
138
We have that
o =
1
d

i=1
X

i
X
i
.
Consequently, the generator of the Markov process q(t), p(t) can be written in H ormanders
sum of squares form:
L = /+
1
d

i=1
X

i
X
i
. (8.5)
We calculate the commutators between the vector elds in (8.5):
[/, X
i
] =

q
i
, [X
i
, X
j
] = 0, [X
i
, X

j
] =
ij
.
Consequently,
Lie(X
1
, . . . X
d
, [/, X
1
], . . . [/, X
d
]) = Lie(
p
,
q
)
which spans T
p,q
R
2d
for all p, q R
d
. This shows that the generator L is a hypoelliptic operator.
Let now Y
i
=

p
i
with L
2

-adjoint Y

i
=

q
i

V
q
i
. We have that
X

i
Y
i
Y

i
X
i
=
_
p
i

q
i

V
q
i

p
i
_
.
Consequently, the generator can be written in the form
L =
1
d

i=1
(X

i
Y
i
Y

i
X
i
+ X

i
X
i
) . (8.6)
Notice also that
L
V
:=
q
V
q
+
1

q
=
1
d

i=1
Y

i
Y
i
.
The phase-space Fokker-Planck equation can be written in the form

t
+ p
q

q
V
p
= Q(, f
B
)
where the collision operator has the form
Q(, f
B
) = D
_
f
B

_
f
1
B

__
.
The Fokker-Planck equation has a similar structure to the Boltzmann equation (the basic equation
in the kinetic theory of gases), with the difference that the collision operator for the FP equation is
139
linear. Convergence of solutions of the Boltzmann equation to the Maxwell-Boltzmann distribution
has also been proved. See ??.
We can study the backward and forward Kolmogorov equations for (9.11) by expanding the
solution with respect to the Hermite basis. We consider the problem in 1d. We set D = 1. The
generator of the process is:
L = p
q
V
t
(q)
p
+
_
p
p
+
2
p
_
.
=: L
1
+ L
0
,
where
L
0
:= p
p
+
2
p
and L
1
:= p
q
V
t
(q)
p
.
The backward Kolmogorov equation is
h
t
= Lh. (8.7)
The solution should be an element of the weighted L
2
-space
L
2

=
_
f[
_
R
2
[f[
2
Z
1
e
H(p,q)
dpdq <
_
.
We notice that the invariant measure of our Markov process is a product measure:
e
H(p,q)
= e

1
2
[p[
2
e
V (q)
.
The space L
2
(e

1
2
[p[
2
dp) is spanned by the Hermite polynomials. Consequently, we can expand
the solution of (8.7) into the basis of Hermite basis:
h(p, q, t) =

n=0
h
n
(q, t)f
n
(p), (8.8)
where f
n
(p) = 1/

n!H
n
(p). Our plan is to substitute (8.8) into (8.7) and obtain a sequence of
equations for the coefcients h
n
(q, t). We have:
L
0
h = L
0

n=0
h
n
f
n
=

n=0
nh
n
f
n
Furthermore
L
1
h =
q
V
p
h + p
q
h.
140
We calculate each term on the right hand side of the above equation separately. For this we will
need the formulas

p
f
n
=

nf
n1
and pf
n
=

nf
n1
+

n + 1f
n+1
.
p
q
h = p
q

n=0
h
n
f
n
= p
p
h
0
+

n=1

q
h
n
pf
n
=
q
h
0
f
1
+

n=1

q
h
n
_

nf
n1
+

n + 1f
n+1
_
=

n=0
(

n + 1
q
h
n+1
+

n
q
h
n1
)f
n
with h
1
0. Furthermore

q
V
p
h =

n=0

q
V h
n

p
f
n
=

n=0

q
V h
n

nf
n1
=

n=0

q
V h
n+1

n + 1f
n
.
Consequently:
Lh = L
1
+ L
1
h
=

n=0
_
nh
n
+

n + 1
q
h
n+1
+

n
q
h
n1
+

n + 1
q
V h
n+1
_
f
n
Using the orthonormality of the eigenfunctions of L
0
we obtain the following set of equations
which determine h
n
(q, t)

n=0
.

h
n
= nh
n
+

n + 1
q
h
n+1
+

n
q
h
n1
+

n + 1
q
V h
n+1
, n = 0, 1, . . .
This is set of equations is usually called the Brinkman hierarchy (1956). We can use this approach
to develop a numerical method for solving the Klein-Kramers equation. For this we need to expand
each coefcient h
n
in an appropriate basis with respect to q. Obvious choices are other the Hermite
basis (polynomial potentials) or the standard Fourier basis (periodic potentials). We will do this
141
for the case of periodic potentials. The resulting method is usually called the continued fraction
expansion. See [82]. The Hermite expansion of the distribution function wrt to the velocity is used
in the study of various kinetic equations (including the Boltzmann equation). It was initiated by
Grad in the late 40s. It quite often used in the approximate calculation of transport coefcients (e.g.
diffusion coefcient). This expansion can be justied rigorously for the Fokker-Planck equation.
See [67]. This expansion can also be used in order to solve the Poisson equation L = f(p, q).
See [73].
8.3 The Langevin Equation in a Harmonic Potential
There are very few potentials for which we can solve the Langevin equation or to calculate the
eigenvalues and eigenfunctions of the generator of the Markov process q(t), p(t). One case
where we can calculate everything explicitly is that of a Brownian particle in a quadratic (har-
monic) potential
V (q) =
1
2

2
0
q
2
. (8.9)
The Langevin equation is
q =
2
0
q q +
_
2
1
W (8.10)
or
q = p, p =
2
0
q p +
_
2
1
W. (8.11)
This is a linear equation that can be solved explicitly. Rather than doing this, we will calculate the
eigenvalues and eigenfunctions of the generator, which takes the form
L = p
q

2
0
q
p
+ (p
p
+
1

2
p
). (8.12)
The Fokker-Planck operator is
L = p
q

2
0
q
p
+ (p
p
+
1

2
p
). (8.13)
The process q(t), p(t) is an ergodic Markov process with Gaussian invariant measure

(q, p) dqdp =

0
2
e

2
p
2

2
0
q
2
. (8.14)
142
For the calculation of the eigenvalues and eigenfunctions of the operator L it is convenient to
introduce creation and annihilation operator in both the position and momentum variables. We set
a

=
1/2

p
, a
+
=
1/2

p
+
1/2
p (8.15)
and
b

=
1
0

1/2

q
, b
+
=
1
0

1/2

q
+
0

1/2
p. (8.16)
We have that
a
+
a

=
1

2
p
+ p
p
and
b
+
b

=
1

2
q
+ q
q
Consequently, the operator

L = a
+
a

b
+
b

(8.17)
is the generator of the OU process in two dimensions.
The operators a

, b

satisfy the commutation relations


[a
+
, a

] = 1, (8.18a)
[b
+
, b

] = 1, (8.18b)
[a

, b

] = 0. (8.18c)
See Exercise 3. Using now the operators a

and b

we can write the generator L in the form


L = a
+
a

0
(b
+
a

a
+
b

), (8.19)
which is a particular case of (8.6). In order to calculate the eigenvalues and eigenfunctions of (8.19)
we need to make an appropriate change of variables in order to bring the operator L into the
decoupled form (8.17). Clearly, this is a linear transformation and can be written in the form
Y = AX
where X = (q, p) for some 2 2 matrix A. It is somewhat easier to make this change of variables
at the level of the creation and annihilation operators. In particular, our goal is to nd rst order
differential operators c

and d

so that the operator (8.19) becomes


L = Cc
+
c

Dd
+
d

(8.20)
143
for some appropriate constants C and D. Since our goal is, essentially, to map L to the two-
dimensional OU process, we require that that the operators c

and d

satisfy the canonical com-


mutation relations
[c
+
, c

] = 1, (8.21a)
[d
+
, d

] = 1, (8.21b)
[c

, d

] = 0. (8.21c)
The operators c

and d

should be given as linear combinations of the old operators a

and b

.
From the structure of the generator L (8.19), the decoupled form (8.20) and the commutation
relations (8.21) and (8.18) we conclude that c

and d

should be of the form


c
+
=
11
a
+
+
12
b
+
, (8.22a)
c

=
21
a

+
22
b

, (8.22b)
d
+
=
11
a
+
+
12
b
+
, (8.22c)
d

=
21
a

+
22
b

. (8.22d)
Notice that the c and d

are not the adjoints of c


+
and d
+
. If we substitute now these equations
into (8.20) and equate it with (8.19) and into the commutation relations (8.21) we obtain a sys-
tem of equations for the coefcients
ij
,
ij
. In order to write down the formulas for these
coefcients it is convenient to introduce the eigenvalues of the deterministic problem
q = q
2
0
q.
The solution of this equation is
q(t) = C
1
e

1
t
+ C
2
e

2
t
with

1,2
=

2
, =
_

2
4
2
0
. (8.23)
The eigenvalues satisfy the relations

1
+
2
= ,
1

2
= ,
1

2
=
2
0
. (8.24)
144
Proposition 8.3.1. Let L be the generator (8.19) and let c

, d
pm
be the operators
c
+
=
1

_
_

1
a
+
+
_

2
b
+
_
, (8.25a)
c

=
1

_
_

1
a

2
b

_
, (8.25b)
d
+
=
1

_
_

2
a
+
+
_

1
b
+
_
, (8.25c)
d

=
1

2
a

+
_

1
b

_
. (8.25d)
Then c

, d

satisfy the canonical commutation relations (8.21) as well as


[L, c

] =
1
c

, [L, d

] =
2
d

. (8.26)
Furthermore, the operator L can be written in the form
L =
1
c
+
c

2
d
+
d

. (8.27)
Proof. rst we check the commutation relations:
[c
+
, c

] =
1

1
[a
+
, a

]
2
[b
+
, b

]
_
=
1

(
1
+
2
) = 1.
Similarly,
[d
+
, d

] =
1

2
[a
+
, a

] +
1
[b
+
, b

]
_
=
1

(
2

1
) = 1.
Clearly, we have that
[c
+
, d
+
] = [c

, d

] = 0.
Furthermore,
[c
+
, d

] =
1

2
[a
+
, a

] +
_

2
[b
+
, b

]
_
=
1

(
_

2
+
_

2
) = 0.
145
Finally:
[L, c
+
] =
1
c
+
c

c
+
+
1
c
+
c
+
c

=
1
c
+
(1 + c
+
c

) +
1
c
+
c
+
c

=
1
c
+
(1 + c
+
c

) +
1
c
+
c
+
c

=
1
c
+
,
and similarly for the other equations in (8.26). Now we calculate
L =
1
c
+
c

2
d
+
d

2
2

2
1

a
+
a

+ 0b
+
b

(
1

2
)a
+
b

+
1

2
(
1
+
2
)b
+
a

= a
+
a

0
(b
+
a

a
+
b

),
which is precisely (8.19). In the above calculation we used (8.24).
Using now (8.27) we can readily obtain the eigenvalues and eigenfunctions of L. From our
experience with the two-dimensional OU processes (or, the Schr odinger operator for the two-
dimensional quantum harmonic oscillator), we expect that the eigenfunctions should be tensor
products of Hermite polynomials. Indeed, we have the following, which is the main result of this
section.
Theorem8.3.2. The eigenvalues and eigenfunctions of the generator of the Markov process q, p (8.11)
are

nm
=
1
n +
2
m =
1
2
(n + m) +
1
2
(n m), n, m = 0, 1, . . . (8.28)
and

nm
(q, p) =
1

n!m!
(c
+
)
n
(d
+
)
m
1, n, m = 0, 1, . . . (8.29)
Proof. We have
[L, (c
+
)
2
] = L(c
+
)
2
(c
+
)
2
L
= (c
+
L
1
c
+
)c
+
c
+
(Lc
+
+
1
c
+
)
= 2
1
(c
+
)
2
and similarly [L, (d
+
)
2
] = 2
1
(c
+
)
2
. A simple induction argument now shows that (see Exer-
cise 8.3.3)
[L, (c
+
)
n
] = n
1
(c
+
)
n
and [L, (d
+
)
m
] = m
1
(d
+
)
m
. (8.30)
146
We use (8.30) to calculate
L(c
+
)
n
(d
+
)
n
1
= (c
+
)
n
L(d
+
)
m
1 n
1
(c
+
)
n
(d
+
m)1
= (c
+
)
n
(d
+
)
m
L1 m
2
(c
+
)
n
(d
+
m)1 n
1
(c
+
)
n
(d
+
m)1
= n
1
(c
+
)
n
(d
+
m)1 m
2
(c
+
)
n
(d
+
m)1
from which (8.28) and (8.29) follow.
Exercise 8.3.3. Show that
[L, (c

)
n
] = n
1
(c

)
n
, [L, (d

)
n
] = n
1
(d

)
n
, [c

, (c
+
)
n
] = n(c
+
)
n1
, [d

, (d
+
)
n
] = n(d
+
)
n1
.
(8.31)
Remark 8.3.4. In terms of the operators a

, b

the eigenfunctions of L are

nm
=

n!m!

n+m
2

n/2
1

m/2
2
n

=0
m

k=0
1
k!(mk)!!(n )!
_

2
_k
2
(a
+
)
n+mk
(b
+
)
+k
1.
The rst few eigenfunctions are

00
= 1.

10
=

1
p +

0
q
_

01
=

2
p +

0
q
_

11
=
2

2
+

1
p
2

2
+ p
1

0
q +
0
q
2
p +

0
2
q
2

20
=

1
+ p
2

1
+ 2

2
p

0
q
2
+
0
2
q
2

2
.

02
=

2
+ p
2

2
+ 2

2
p

0
q
1
+
0
2
q
2

2
.
147
Notice that the eigenfunctions are not orthonormal.
As we already know, the rst eigenvalue, corresponding to the constant eigenfunction, is 0:

00
= 0.
Notice that the operator L is not self-adjoint and consequently, we do not expect its eigenvalues
to be real. Indeed, whether the eigenvalues are real or not depends on the sign of the discriminant
=
2
4
2
0
. In the underdamped regime, < 2
0
the eigenvalues are complex:

nm
=
1
2
(n + m) +
1
2
i
_

2
+ 4
2
0
(n m), < 2
0
.
This it to be expected, since the underdamped regime the dynamics is dominated by the deter-
ministic Hamiltonian dynamics that give rise to the antisymmetric Liouville operator. We set
=
_
(4
2
0

2
), i.e. = 2i. The eigenvalues can be written as

nm
=

2
(n + m) + i(n m).
In Figure 8.3 we present the rst few eigenvalues of L in the underdamped regime. The eigen-
values are contained in a cone on the right half of the complex plane. The cone is determined
by

n0
=

2
n + in and
0m
=

2
mim.
The eigenvalues along the diagonal are real:

nn
= n.
On the other hand, in the overdamped regime, 2
0
all eigenvalues are real:

nm
=
1
2
(n + m) +
1
2
_

2
4
2
0
(n m), 2
0
.
In fact, in the overdamped limit +(which we will study in Chapter ??), the eigenvalues of
the generator L converge to the eigenvalues of the generator of the OU process:

nm
= n +

2
0

(n m) + O(
3
).
This is consistent with the fact that in this limit the solution of the Langevin equation converges to
the solution of the OU SDE. See Chapter ?? for details.
148
Figure 8.1: First few eigenvalues of L for = = 1.
149
The eigenfunctions of L do not form an orthonormal basis in L
2

:= L
2
(R
2
, Z
1
e
H
) since L
is not a selfadjoint operator. Using the eigenfunctions/eigenvalues of L we can easily calculate the
eigenfunctions/eigenvalues of the L
2

adjoint of L. From the calculations presented in Section 8.2


we know that the adjoint operator is

L := /+ S (8.32)
=
0
(b
+
a

a
+
) + a
+
a

(8.33)
=
1
(c

(c
+
)

2
(d

) (d
+
), (8.34)
where
(c
+
)

=
1

_
_

1
a

+
_

2
b

_
, (8.35a)
(c

=
1

_
_

1
a
+

2
b
+
_
, (8.35b)
(d
+
)

=
1

_
_

2
a

+
_

1
b

_
, (8.35c)
(d

=
1

2
a
+
+
_

1
b
+
_
. (8.35d)

L has the same eigenvalues as L:

L
nm
=
nm

nm
,
where
nm
are given by (8.28). The eigenfunctions are

nm
=
1

n!m!
((c

)
n
((d

)
m
1. (8.36)
Proposition 8.3.5. The eigenfunctions of L and

L satisfy the biorthonormality relation
_ _

nm

dpdq =
n

mk
. (8.37)
Proof. We will use formulas (8.31). Notice that using the third and fourth of these equations
together with the fact that c

1 = d

1 = 0 we can conclude that (for n )


(c

(c
+
)
n
1 = n(n 1) . . . (n + 1)(c
+
)
n
. (8.38)
We have
_ _

nm

dpdq =
1

n!m!!k!
_ _
((c
+
))
n
((d
+
))
m
1((c

((d

)
k
1

dpdq
=
n(n 1) . . . (n + 1)m(m1) . . . (mk + 1)

n!m!!k!
_ _
((c
+
))
n
((d
+
))
mk
1

dpdq
=
n

mk
,
150
since all eigenfunctions average to 0 with respect to

.
From the eigenfunctions of

L we can obtain the eigenfunctions of the Fokker-Planck operator.
Using the formula (see equation (8.4))
L

(f

) =

Lf
we immediately conclude that the the Fokker-Planck operator has the same eigenvalues as those of
L and

L. The eigenfunctions are

nm
=

nm
=

n!m!
((c

)
n
((d

)
m
1. (8.39)
8.4 Asymptotic Limits for the Langevin Equation
There are very few SDEs/Fokker-Planck equations that can be solved explicitly. In most cases
we need to study the problem under investigation either approximately or numerically. In this
part of the course we will develop approximate methods for studying various stochastic systems
of practical interest. There are many problems of physical interest that can be analyzed using
techniques from perturbation theory and asymptotic analysis:
i. Small noise asymptotics at nite time intervals.
ii. Small noise asymptotics/large times (rare events): the theory of large deviations, escape from
a potential well, exit time problems.
iii. Small and large friction asymptotics for the Fokker-Planck equation: The FreidlinWentzell
(underdamped) and Smoluchowski (overdamped) limits.
iv. Large time asymptotics for the Langevin equation in a periodic potential: homogenization
and averaging.
v. Stochastic systems with two characteristic time scales: multiscale problems and methods.
We will study various asymptotic limits for the Langevin equation (we have set m = 1)
q = V (q) q +
_
2
1
W. (8.40)
151
There are two parameters in the problem, the friction coefcient and the inverse temperature .
We want to study the qualitative behavior of solutions to this equation (and to the corresponding
Fokker-Planck equation). There are various asymptotic limits at which we can eliminate some of
the variables of the equation and obtain a simpler equation for fewer variables. In the large temper-
ature limit, 1, the dynamics of (9.11) is dominated by diffusion: the Langevin equation (9.11)
can be approximated by free Brownian motion:
q =
_
2
1
W.
The small temperature asymptotics, 1 is much more interesting and more subtle. It leads
to exponential, Arrhenius type asymptotics for the reaction rate (in the case of a particle escaping
from a potential well due to thermal noise) or the diffusion coefcient (in the case of a particle
moving in a periodic potential in the presence of thermal noise)
= exp (E
b
) , (8.41)
where can be either the reaction rate or the diffusion coefcient. The small temperature asymp-
totics will be studied later for the case of a bistable potential (reaction rate) and for the case of a
periodic potential (diffusion coefcient).
Assuming that the temperature is xed, the only parameter that is left is the friction coefcient
. The large and small friction asymptotics can be expressed in terms of a slow/fast system of
SDEs. In many applications (especially in biology) the friction coefcient is large: 1. In
this case the momentum is the fast variable which we can eliminate to obtain an equation for the
position. This is the overdamped or Smoluchowski limit. In various problems in physics the
friction coefcient is small: 1. In this case the position is the fast variable whereas the energy
is the slow variable. We can eliminate the position and obtain an equation for the energy. This is
the underdampled or Freidlin-Wentzell limit. In both cases we have to look at sufciently long
time scales.
We rescale the solution to (9.11):
q

(t) =

(t/

).
This rescaled process satises the equation
q

q
V (q

+
_
2
2


1
W, (8.42)
152
Different choices for these two parameters lead to the overdamped and underdamped limits:

=
1,

=
1
, 1. In this case equation (8.42) becomes

2
q

=
q
V (q

) q

+
_
2
1
W. (8.43)
Under this scaling, the interesting limit is the overdamped limit, 1. We will see later that in
the limit as +the solution to (8.43) can be approximated by the solution to
q =
q
V +
_
2
1
W.

= 1,

= , 1:
q

=
2
V (q

) q

+
_
2
2

1
W. (8.44)
Under this scaling the interesting limit is the underdamped limit, 1. We will see later that in
the limit as 0 the energy of the solution to (8.44) converges to a stochastic process on a graph.
8.4.1 The Overdamped Limit
We consider the rescaled Langevin equation (8.43):

2
q

(t) = V (q

(t)) q

(t) +
_
2
1
W(t), (8.45)
where we have set
1
= , since we are interested in the limit , i.e. 0. We will show
that, in the limit as 0, q

(t), the solution of the Langevin equation (8.45), converges to q(t),


the solution of the Smoluchowski equation
q = V +
_
2
1
W. (8.46)
We write (8.45) as a system of SDEs:
q =
1

p, (8.47)
p =
1

V (q)
1

2
p +
_
2

W. (8.48)
This systems of SDEs dened a Markov process in phase space. Its generator is
L

=
1

2
_
p
p
+
1

_
+
1

_
p
q

q
V
p
_
=:
1

2
L
0
+
1

L
1
.
153
This is a singularly perturbed differential operator. We will derive the Smoluchowski equation (8.46)
using a pathwise technique, as well as by analyzing the corresponding Kolmogorov equations.
We apply It os formula to p:
dp(t) = L

p(t) dt +
1

_
2
1

p
p(t) dW
=
1

2
p(t) dt
1

q
V (q(t)) dt +
1

_
2
1
dW.
Consequently:
1

_
t
0
p(s) ds =
_
t
0

q
V (q(s)) ds +
_
2
1
W(t) +O().
From equation (8.47) we have that
q(t) = q(0) +
1

_
t
0
p(s) ds.
Combining the above two equations we deduce
q(t) = q(0)
_
t
0

q
V (q(s)) ds +
_
2
1
W(t) +O()
from which (8.46) follows.
Notice that in this derivation we assumed that
E[p(t)[
2
C.
This estimate is true, under appropriate assumptions on the potential V (q) and on the initial con-
ditions. In fact, we can prove a pathwise approximation result:
_
E sup
t[0,T]
[q

(t) q(t)[
p
_
1/p
C
2
,
where > 0, arbitrary small (it accounts for logarithmic corrections).
The pathwise derivation of the Smoluchowski equation implies that the solution of the Fokker-
Planck equation corresponding to the Langevin equation (8.45) converges (in some appropriate
sense to be explained below) to the solution of the Fokker-Planck equation corresponding to the
Smoluchowski equation (8.46). It is important in various applications to calculate corrections to the
limiting Fokker-Planck equation. We can accomplish this by analyzing the Fokker-Planck equation
154
for (8.45) using singular perturbation theory. We will consider the problem in one dimension. This
mainly to simplify the notation. The multidimensional problem can be treated in a very similar
way.
The FokkerPlanck equation associated to equations (8.47) and (8.48) is

t
= L

=
1

(p
q
+
q
V (q)
p
) +
1

2
_

p
(p) +
1

2
p

_
=:
_
1

2
L

0
+
1

1
_
. (8.49)
The invariant distribution of the Markov process q, p, if it exists, is

(p, q) =
1
Z
e
H(p,q)
, Z =
_
R
2
e
H(p,q)
dpdq,
where H(p, q) =
1
2
p
2
+ V (q). We dene the function f(p,q,t) through
(p, q, t) = f(p, q, t)

(p, q). (8.50)


Proposition 8.4.1. The function f(p, q, t) dened in (8.50) satises the equation
f
t
=
_
1

2
_
p
q
+
1

2
p
_

(p
q

q
V (q)
p
)
_
f
=:
_
1

2
L
0

L
1
_
f. (8.51)
Remark 8.4.2. This is almost the backward Kolmogorov equation with the difference that we
have L
1
instead of L
1
. This is related to the fact that L
0
is a symmetric operator in L
2
(R
2
; Z
1
e
H(p,q)
),
whereas L
1
is antisymmetric.
Proof. We note that L

0
= 0 and L

0
= 0. We use this to calculate:
L

0
= L
0
(f
0
) =
p
(f
0
) +
1

2
p
(f
0
)
=
0
p
p
f +
0

2
p
f + fL

0
+ 2
1

p
f
p

0
=
_
p
p
f +
1

2
p
f
_

0
=
0
L
0
f.
Similarly,
L

1
= L

1
(f
0
) = (p
q
+
q
V
p
) (f
0
)
=
0
(p
q
f +
q
V
p
f) =
0
L
1
f.
155
Consequently, the FokkerPlanck equation (8.94b) becomes

0
f
t
=
0
_
1

2
L
0
f
1

L
1
f
_
,
from which the claim follows.
We will assume that the initial conditions for (8.51) depend only on q:
f(p, q, 0) = f
ic
(q). (8.52)
Another way for stating this assumption is the following: Let H = L
2
(R
2d
;

(p, q)) and dene


the projection operator P : H L
2
(R
d
;

(q)) with

(q) =
1
Z
q
e
V (q)
, Z
q
=
_
R
d
e
V (q)
dq:
P :=
1
Z
p
_
R
d
e

|p|
2
2
dp, (8.53)
with Z
p
:=
_
R
d
e
[p[
2
/2
dp. Then, assumption (11.5) can be written as
Pf
ic
= f
ic
.
We look for a solution to (8.51) in the form of a truncated power series in :
f(p, q, t) =
N

n=0

n
f
n
(p, q, t). (8.54)
We substitute this expansion into eqn. (8.51) to obtain the following system of equations.
L
0
f
0
= 0, (8.55a)
L
0
f
1
= L
1
f
0
, (8.55b)
L
0
f
2
= L
1
f
1

f
0
t
(8.55c)
L
0
f
n
= L
1
f
n1

f
n2
t
, n = 3, 4 . . . N. (8.55d)
The null space of L
0
consists of constants in p. Consequently, from equation (8.55a) we conclude
that
f
0
= f(q, t).
Now we can calculate the right hand side of equation (8.55b):
L
1
f
0
= p
q
f.
156
Equation (8.55b) becomes:
L
0
f
1
= p
q
f.
The right hand side of this equation is orthogonal to ^(L

0
) and consequently there exists a unique
solution. We obtain this solution using separation of variables:
f
1
= p
q
f +
1
(q, t).
Now we can calculate the RHS of equation (8.55c). We need to calculate L
1
f
1
:
L
1
f
1
=
_
p
q

q
V
p
__
p
q
f
1
(q, t)
_
= p
2

2
q
f p
q

q
V
q
f.
The solvability condition for (8.55c) is
_
R
_
L
1
f
1

f
0
t
_

OU
(p) dp = 0,
from which we obtain the backward Kolmogorov equation corresponding to the Smoluchowski
SDE:
f
t
=
q
V
q
f +
1

2
q
f, (8.56)
together with the initial condition (11.5).
Now we solve the equation for f
2
. We use (8.56) to write (8.55c) in the form
L
0
f
2
=
_

1
p
2
_

2
q
f + p
q

1
.
The solution of this equation is
f
2
(p, q, t) =
1
2

2
q
f(p, q, t)p
2

1
(q, t)p +
2
(q, t).
Now we calculate the right hand side of the equation for f
3
, equation (8.55d) with n = 3. First we
calculate
L
1
f
2
=
1
2
p
3

3
q
f p
2

2
q

1
+ p
q

q
V
2
q
fp
q
V
q

1
.
The solvability condition
_
R
_

1
t
+L
1
f
2
_

OU
(p) dp = 0.
This leads to the equation

1
t
=
q
V
q

1
+
1

2
q

1
,
157
together with the initial condition
1
(q, 0) = 0. From the calculations presented in the proof of
Theorem 6.5.5, and using Poincare` es inequality for the measure
1
Z
q
e
V (q)
, we deduce that
1
2
d
dt
|
1
|
2
C|
1
|
2
.
We use Gronwalls inequality now to conclude that

1
0.
Putting everything together we obtain the rst two terms in the -expansion of the FokkerPlanck
equation (8.51):
(p, q, t) = Z
1
e
H(p,q)
_
f + (p
q
f) +O(
2
)
_
,
where f is the solution of (8.56). Notice that we can rewrite the leading order term to the expansion
in the form
(p, q, t) = (2
1
)

1
2
e
p
2
/2

V
(q, t) +O(),
where
V
= Z
1
e
V (q)
f is the solution of the Smoluchowski Fokker-Planck equation

V
t
=
q
(
q
V
V
) +
1

2
q

V
.
It is possible to expand the n-th term in the expansion (8.54) in terms of Hermite functions (the
eigenfunctions of the generator of the OU process)
f
n
(p, q, t) =
n

k=0
f
nk
(q, t)
k
(p), (8.57)
where
k
(p) is the kth eigenfunction of L
0
:
L
0

k
=
k

k
.
We can obtain the following system of equations (

L =
1

q

q
V ):

Lf
n1
= 0,

k + 1

Lf
n,k+1
+
_
k
1

q
f
n,k1
= kf
n+1,k
, k = 1, 2 . . . , n 1,
_
n
1

q
f
n,n1
= nf
n+1,n
,
_
(n + 1)
1

q
f
n,n
= (n + 1)f
n+1,n+1
.
158
Using this method we can obtain the rst three terms in the expansion:
(x, y, t) =
0
(p, q)
_
f + (
_

q
f
1
) +
2
_

2
q
f
2
+ f
20
_
+
3
_

3
3!

3
q
f
3
+
_

1
L
2
q
f
_

q
f
20
_

1
__
+O(
4
),
8.4.2 The Underdamped Limit
Consider now the rescaling
,
= 1,
,
= . The Langevin equation becomes
q

=
2
V (q

) q

+
_
2
2

1
W. (8.58)
We write equation (8.58) as system of two equations
q

=
1
p

, p

=
1
V
t
(q

) p

+
_
2
1
W.
This is the equation for an O(1/) Hamiltonian system perturbed by O(1) noise. We expect that,
to leading order, the energy is conserved, since it is conserved for the Hamiltonian system. We
apply It os formula to the Hamiltonian of the system to obtain

H =
_

1
p
2
_
+
_
2
1
p
2
W
with p
2
= p
2
(H, q) = 2(H V (q)).
Thus, in order to study the 0 limit we need to analyze the following fast/slow system of
SDEs

H =
_

1
p
2
_
+
_
2
1
p
2
W (8.59a)
p

=
1
V
t
(q

) p

+
_
2
1
W. (8.59b)
The Hamiltonian is the slow variable, whereas the momentum (or position) is the fast variable.
Assuming that we can average over the Hamiltonian dynamics, we obtain the limiting SDE for the
Hamiltonian:

H =
_

1
p
2
)
_
+
_
2
1
p
2
)

W. (8.60)
The limiting SDE lives on the graph associated with the Hamiltonian system. The domain of
denition of the limiting Markov process is dened through appropriate boundary conditions (the
gluing conditions) at the interior vertices of the graph.
159
We identify all points belonging to the same connected component of the a level curve x :
H(x) = H, x = (q, p). Each point on the edges of the graph correspond to a trajectory. Interior
vertices correspond to separatrices. Let I
i
, i = 1, . . . d be the edges of the graph. Then (i, H)
denes a global coordinate system on the graph.
We will study the small asymptotics by analyzing the corresponding backward Kolmogorov
equation using singular perturbation theory. The generator of the process q

, p

is
L

=
1
(p
q

q
V
p
) p
p
+
1

2
p
=
1
L
0
+L
1
.
Let u

= E(f(p

(p, q; t), q

(p, q; t))). It satises the backward Kolmogorov equation associated


to the process q

, p

:
u

t
=
_
1

L
0
+L
1
_
u

. (8.61)
We look for a solution in the form of a power series expansion in :
u

= u
0
+ u
1
+
2
u
2
+ . . .
We substitute this ansatz into (8.61) and equate equal powers in to obtain the following sequence
of equations:
L
0
u
0
= 0, (8.62a)
L
0
u
1
= L
1
u
1
+
u
0
t
, (8.62b)
L
0
u
2
= L
1
u
1
+
u
1
t
. (8.62c)
. . . . . . . . .
Notice that the operator L
0
is the backward Liouville operator of the Hamiltonian system with
Hamiltonian
H =
1
2
p
2
+ V (q).
We assume that there are no integrals of motion other than the Hamiltonian. This means that the
null space of L
0
consists of functions of the Hamiltonian:
^(L
0
) =
_
functions ofH
_
. (8.63)
160
Let us now analyze equations (8.62). We start with (8.62a); eqn. (8.63) implies that u
0
depends on
q, p through the Hamiltonian function H:
u
0
= u(H(p, q), t) (8.64)
Now we proceed with (8.62b). For this we need to nd the solvability condition for equations of
the form
L
0
u = f (8.65)
My multiply it by an arbitrary smooth function of H(p, q), integrate over R
2
and use the skew-
symmetry of the Liouville operator L
0
to deduce:
1
_
R
2
L
0
uF(H(p, q)) dpdq =
_
R
2
uL

0
F(H(p, q)) dpdq
=
_
R
2
u(L
0
F(H(p, q))) dpdq
= 0, F C

b
(R).
This implies that the solvability condition for equation (8.83) is that
_
R
2
f(p, q)F(H(p, q)) dpdq = 0, F C

b
(R). (8.66)
We use the solvability condition in (8.62b) to obtain that
_
R
2
_
L
1
u
1

u
0
t
_
F(H(p, q)) dpdq = 0, (8.67)
To proceed, we need to understand how L
1
acts to functions of H(p, q). Let = (H(p, q)). We
have that

p
=
H
p

H
= p

H
and

p
2
=

p
_

H
_
=

H
+ p
2

2

H
2
.
The above calculations imply that, when L
1
acts on functions = (H(p, q)), it becomes
L
1
=
_
(
1
p
2
)
H
+
1
p
2

2
H
_
, (8.68)
1
We assume that both u
1
and F decay to 0 as [p[ to justify the integration by parts that follows.
161
where
p
2
= p
2
(H, q) = 2(H V (q)).
We want to change variables in the integral (8.67) and go from (p, q) to p, H. The Jacobian of the
transformation is:
(p, q)
(H, q)
=
p
H
p
q
q
H
q
q
=
p
H
=
1
p(H, q)
.
We use this, together with (8.68), to rewrite eqn. (8.67) as
_ _
_
u
t
+
_
(
1
p
2
)
H
+
1
p
2

2
H
_
u
_
F(H)p
1
(H, q) dHdq = 0.
We introduce the notation
) :=
_
dq.
The integration over q can be performed explicitly:
_
_
u
t
p
1
) +
_
(
1
p
1
) p))
H
+
1
p)
2
H
_
u
_
F(H) dH = 0.
This equation should be valid for every smooth function F(H), and this requirement leads to the
differential equation
p
1
)
u
t
=
_

1
p
1
) p)
_

H
u +p)
1

2
H
u,
or,
u
t
=
_

1
p
1
)
1
p)
_

H
u + p
1
)
1
p)
1

2
H
u.
Thus, we have obtained the limiting backward Kolmogorov equation for the energy, which is the
slow variable. From this equation we can read off the limiting SDE for the Hamiltonian:

H = b(H) + (H)

W (8.69)
where
b(H) =
1
p
1
)
1
p), (H) =
1
p
1
)
1
p).
Notice that the noise that appears in the limiting equation (8.69) is multiplicative, contrary to
the additive noise in the Langevin equation.
As it well known from classical mechanics, the action and frequency are dened as
I(E) =
_
p(q, E) dq
162
and
(E) = 2
_
dI
dE
_
1
,
respectively. Using the action and the frequency we can write the limiting FokkerPlanck equation
for the distribution function of the energy in a very compact form.
Theorem 8.4.3. The limiting FokkerPlanck equation for the energy distribution function (E, t)
is

t
=

E
__
I(E) +
1

E
__
(E)
2
__
. (8.70)
Proof. We notice that
dI
dE
=
_
p
E
dq =
_
p
1
dq
and consequently
p
1
)
1
=
(E)
2
.
Hence, the limiting FokkerPlanck equation can be written as

t
=

E
__

1
I(E)(E)
2
_

_
+
1

2
E
2
_
I
2
_
=
1

E
+

E
_
I
2

_
+
1

E
_
dI
dE

2
_
+
1

E
_
I

E
_

2
_
_
=

E
_
I
2

_
+
1

E
_
I

E
_

2
_
_
=

E
__
I(E) +
1

E
__
(E)
2
__
,
which is precisely equation (8.70).
Remarks 8.4.4. i. We emphasize that the above formal procedure does not provide us with the
boundary conditions for the limiting FokkerPlanck equation. We will discuss about this
issue in the next section.
ii. If we rescale back to the original time-scale we obtain the equation

t
=

E
__
I(E) +
1

E
__
(E)
2
__
. (8.71)
We will use this equation later on to calculate the rate of escape from a potential barrier in
the energy-diffusion-limited regime.
163
8.5 Brownian Motion in Periodic Potentials
Basic model
m x = x(t) V (x(t), f(t)) + y(t) +
_
2k
B
T(t), (8.72)
Goal: Calculate the effective drift and the effective diffusion tensor
U
eff
= lim
t
x(t))
t
(8.73)
and
D
eff
= lim
t
x(t) x(t))) (x(t) x(t))))
2t
. (8.74)
8.5.1 The Langevin equation in a periodic potential
We start by studying the underdamped dynamics of a Brownian particle x(t) R
d
moving in a
smooth, periodic potential.
x = V (x(t)) x(t) +
_
2k
B
T(t), (8.75)
where is the friction coefcient, k
B
the Boltzmann constant and T denotes the temperature. (t)
stands for the standard ddimensional white noise process, i.e.

i
(t)) = 0 and
i
(t)
j
(s)) =
ij
(t s), i, j = 1, . . . d.
The potential V (x) is periodic in x and satises |V (x)|
L
= 1 with period 1 in all spatial
directions:
V (x + e
i
) = V (x), i = 1, . . . , d,
where e
i

d
i=1
denotes the standard basis of R
d
.
Notice that we have already nondimensionalized eqn. (8.75) in such a way that the non
dimensional particle mass is 1 and the maximum of the (gradient of the) potential is xed [52].
Hence, the only parameters in the problem are the friction coefcient and the temperature. Notice,
furthermore, that the parameter in (8.75) controls the coupling between the Hamiltonian system
x = V (x) and the thermal heat bath: 1 implies that the Hamiltonian system is strongly
coupled to the heat bath, whereas 1 corresponds to weak coupling.
164
Equation (8.75) denes a Markov process in the phase space T
d
R
d
. Indeed, let us write
(8.75) as a rst order system
x(t) = y(t), (8.76a)
y(t) = V (x(t)) y(t) +
_
2k
B
T(t), (8.76b)
The process x(t), y(t) is Markovian with generator
L = y
x
V (x)
y
+ (y
y
+ D
y
) .
In writing the above we have set D = K
B
T. This process is ergodic. The unique invariant measure
is absolutely continuous with respect to the Lebesgue measure and its density is the Maxwell
Boltzmann distribution
(y, x) =
1
(2D)
n
2
Z
e

1
D
H(x,y)
, (8.77)
where Z =
_
T
d
e
V (x)/D
dx and H(x, y) is the Hamiltonian of the system
H(x, y) =
1
2
y
2
+ V (x).
The long time behavior of solutions to (8.75) is governed by an effective Brownian motion. Indeed,
the following central limit theorem holds [83, 70, ?]
Theorem 8.5.1. Let V (x) C(T
d
). Dene the rescaled process
x

(t) := x(t/
2
).
Then
x

(t) converges weakly, as 0, to a Brownian motion with covariance


D
eff
=
_
T
d
R
d
L (dx dy), (8.78)
where (dx dy) = (x, y)dxdy and the vector valued function is the solution of the Poisson
equation
L = y. (8.79)
We are interested in analyzing the dependence of D
eff
on . We will mostly focus on the one
dimensional case. We start by rescaling the Langevin equation (9.11)
x = F(x) x +
_
2
1
W, (8.80)
165
where we have set F(x) = V (x). We will assume that the potential is periodic with period 2
in every direction. Since we expect that at sufciently long length and time scales the particle per-
forms a purely diffusive motion, we perform a diffusive rescaling to the equations of motion (9.11):
t t/
2
, x
x

. Using the fact that



W(c t) =
1

W(t) in law we obtain:

2
x =
1

F
_
x

_
x +
_
2
1
W,
Introducing p = x and q = x/ we write this equation as a rst order system:
x =
1

p,
p =
1

2
F(q)
1

2
p +
1

1

W,
q =
1

2
p,
(8.81)
with the understanding that q [, ]
d
and x, p R
d
. Our goal now is to eliminate the fast
variables p, q and to obtain an equation for the slow variable x. We shall accomplish this by
studying the corresponding backward Kolmogorov equation using singular perturbation theory for
partial differential equations.
Let
u

(p, q, x, t) = Ef
_
p(t), q(t), x(t)[p(0) = p, q(0) = q, x(0) = x
_
,
where E denotes the expectation with respect to the Brownian motion W(t) in the Langevin equa-
tion and f is a smooth function.
2
The evolution of the function u

(p, q, x, t) is governed by the


backward Kolmogorov equation associated to equations (8.81) is [74]
3
u

t
=
1

p
x
u

+
1

2
_

q
V (q)
p
+ p
q
+
_
p
p
+
1

p
_
_
u

.
:=
_
1

2
L
0
+
1

L
1
_
u

, (8.82)
where:
L
0
=
q
V (q)
p
+ p
q
+
_
p
p
+
1

p
_
,
L
1
= p
x
2
In other words, we have that
u

(p, q, x, t) =
_
f(x, v, t; p, q)(x, v, t; p, q)(p, q) dpdqdxdv,
where (x, v, t; p, q) is the solution of the Fokker-Planck equation and (p, q) is the initial distribution.
3
it is more customary in the physics literature to use the forward Kolmogorov equation, i.e. the Fokker-Planck
equation. However, for the calculation presented below, it is more convenient to use the backward as opposed to the
forward Kolmogorov equation. The two formulations are equivalent. See [72, Ch. 6] for details.
166
The invariant distribution of the fast process
_
q(t), p(t)
_
in T
d
R
d
is the Maxwell-Boltzmann
distribution

(q, p) = Z
1
e
H(q,p)
, Z =
_
T
d
R
d
e
H(q,p)
dqdp,
where H(q, p) =
1
2
[p[
2
+ V (q). Indeed, we can readily check that
L

(q, p) = 0,
where L

0
denotes the Fokker-Planck operator which is the L
2
-adjoint of the generator of the pro-
cess L
0
:
L

0
f =
q
V (q)
p
f p
q
f +
_

p
(pf) +
1

p
f
_
.
The null space of the generator L
0
consists of constants in q, p. Moreover, the equation
L
0
f = g, (8.83)
has a unique (up to constants) solution if and only if
g)

:=
_
T
d
R
d
g(q, p)

(q, p) dqdp = 0. (8.84)


Equation (8.83) is equipped with periodic boundary conditions with respect to z and is such that
_
T
d
R
d
[f[
2

dqdp < . (8.85)


These two conditions are sufcient to ensure existence and uniqueness of solutions (up to con-
stants) of equation (8.83) [38, 39, 70].
We assume that the following ansatz for the solution u

holds:
u

= u
0
+ u
1
+
2
u
2
+ . . . (8.86)
with u
i
= u
i
(p, q, x, t), i = 1, 2, . . . being 2 periodic in q and satisfying condition (8.85). We
substitute (8.86) into (8.82) and equate equal powers in to obtain the following sequence of
equations:
L
0
u
0
= 0, (8.87a)
L
0
u
1
= L
1
u
0
, (8.87b)
L
0
u
2
= L
1
u
1
+
u
0
t
. (8.87c)
167
From the rst equation in (8.87) we deduce that u
0
= u
0
(x, t), since the null space of L
0
consists
of functions which are constants in p and q. Now the second equation in (8.87) becomes:
L
0
u
1
= p
x
u
0
.
Since p) = 0, the right hand side of the above equation is mean-zero with respect to the Maxwell-
Boltzmann distribution. Hence, the above equation is well-posed. We solve it using separation of
variables:
u
1
= (p, q)
x
u
0
with
L
0
= p. (8.88)
This Poisson equation is posed on T
d
R
d
. The solution is periodic in q and satises condi-
tion (8.85). Now we proceed with the third equation in (8.87). We apply the solvability condition
to obtain:
u
0
t
=
_
T
d
R
d
L
1
u
1

(p, q) dpdq
=
d

i,j=1
__
T
d
R
d
p
i

(p, q) dpdq
_

2
u
0
x
i
x
j
.
This is the Backward Kolmogorov equation which governs the dynamics on large scales. We write
it in the form
u
0
t
=
d

i,j=1
D
ij

2
u
0
x
i
x
j
(8.89)
where the effective diffusion tensor is
D
ij
=
_
T
d
R
d
p
i

(p, q) dpdq, i, j = 1, . . . d. (8.90)


The calculation of the effective diffusion tensor requires the solution of the boundary value problem
(8.88) and the calculation of the integral in (8.90). The limiting backward Kolmogorov equation
is well posed since the diffusion tensor is nonnegative. Indeed, let be a unit vector in R
d
. We
calculate (we use the notation

= and , ) for the Euclidean inner product)


, D) =
_
(p )(

dpdq =
_
_
L
0

dpdq
=
1
_

dpdq 0, (8.91)
168
where an integration by parts was used.
Thus, from the multiscale analysis we conclude that at large lenght/time scales the particle
which diffuses in a periodic potential performs and effective Brownian motion with a nonnegative
diffusion tensor which is given by formula (8.90).
We mention in passing that the analysis presented above can also be applied to the problem of
Brownian motion in a tilted periodic potential. The Langevin equation becomes
x(t) = V (x(t)) + F x(t) +
_
2
1
W(t), (8.92)
where V (x) is periodic with period 2 and F is a constant force eld. The formulas for the
effective drift and the effective diffusion tensor are
V =
_
R
d
T
d
p(q, p) dqdp, D =
_
R
d
T
d
(p V ) (p, q) dpdq, (8.93)
where
L = p V, (8.94a)
L

= 0,
_
R
d
T
d
(p, q) dpdq = 1. (8.94b)
with
L = p
q
+ (
q
V + F)
p
+
_
p
p
+
1

p
_
. (8.95)
We have used to denote the tensor product between two vectors; L

denotes the L
2
-adjoint of the
operator L, i.e. the Fokker-Planck operator. Equations (8.94) are equipped with periodic boundary
conditions in q. The solution of the Poisson equation (8.94) is also taken to be square integrable
with respect to the invariant density (q, p):
_
R
d
T
d
[(q, p)[
2
(p, q) dpdq < +.
The diffusion tensor is nonnegative denite. A calculation similar to the one used to derive (8.91)
shows the positive deniteness of the diffusion tensor:
, D) =
1
_

2
(p, q) dpdq 0, (8.96)
for every vector in R
d
. The study of diffusion in a tilted periodic potential, in the underdamped
regime and in high dimensions, based on the above formulas for V and D, will be the subject of a
separate publication.
169
8.5.2 Equivalence With the Green-Kubo Formula
Let us now show that the formula for the diffusion tensor obtained in the previous section, equa-
tion (8.90), is equivalent to the Green-Kubo formula (3.14). To simplify the notation we will prove
the equivalence of the two formulas in one dimension. The generalization to arbitrary dimensions is
immediate. Let (x(t; q, p), v(t; q, p)) with v = x and initial conditions x(0; q, p) = q, v(0; q, p) =
p be the solution of the Langevin equation
x =
x
V x +
where (t) stands for Gaussian white noise in one dimension with correlation function
(t)(s)) = 2k
B
T(t s).
We assume that the (x, v) process is stationary, i.e. that the initial conditions are distributed ac-
cording to the Maxwell-Boltzmann distribution

(q, p) = Z
1
e
H(p,q)
.
The velocity autocorrelation function is [15, eq. 2.10]
v(t; q, p)v(0; q, p)) =
_
v p(x, v, t; p, q)

(p, q) dpdqdxdv, (8.97)


and (x, v, t; p, q) is the solution of the Fokker-Planck equation

t
= L

, (x, v, 0; p, q) = (x q)(v p),


where
L

= v
x
+
x
V (x)
v
+
_
(v) +
1

2
v

_
.
We rewrite (8.97) in the form
v(t; q, p)v(0; q, p)) =
_ _ __ _
v(x, v, t; p, q) dvdx
_
p

(p, q) dpdq
=:
_ _
v(t; p, q)p

(p, q) dpdq. (8.98)


The function v(t) satises the backward Kolmogorov equation which governs the evolution of
observables [74, Ch. 6]
v
t
= Lv, v(0; p, q) = p. (8.99)
170
We can write, formally, the solution of (8.99) as
v = e
/t
p. (8.100)
We combine now equations (8.98) and (8.100) to obtain the following formula for the velocity
autocorrelation function
v(t; q, p)v(0; q, p)) =
_ _
p
_
e
/t
p
_

(p, q) dpdq. (8.101)


We substitute this into the Green-Kubo formula to obtain
D =
_

0
v(t; q, p)v(0; q, p)) dt
=
_ __

0
e
/t
dt p
_
p

dpdq
=
_
_
L
1
p
_
p

dpdq
=
_

dpdq,
where is the solution of the Poisson equation (8.88). In the above derivation we have used the
formula L
1
=
_

0
e
/t
dt, whose proof can be found in [74, Ch. 11].
8.6 The Underdamped and Overdamped Limits of the Diffu-
sion Coefcient
In this section we derive approximate formulas for the diffusion coefcient which are valid in the
overdamped 1 and underdampled 1 limits. The derivation of these formulas is based on
the asymptotic analysis of the Poisson equation (8.88).
The Underdamped Limit
In this subsection we solve the Poisson equation (8.88) in one dimension perturbatively for small
. We shall use singular perturbation theory for partial differential equations. The operator L
0
that
appears in (8.88) can be written in the form
L
0
= L
H
+ L
OU
171
where L
H
stands for the (backward) Liouville operator associated with the Hamiltonian H(p, q)
and L
OU
for the generator of the OU process, respectively:
L
H
= p
q

q
V
p
, L
OU
= p
p
+
1

2
p
.
We expect that the solution of the Poisson equation scales like
1
when 1. Thus, we look
for a solution of the form
=
1

0
+
1
+
2
+ . . . (8.102)
We substitute this ansatz in (8.88) to obtain the sequence of equations
L
H

0
= 0, (8.103a)
L
H

1
= p +L
OU

0
, (8.103b)
L
H

2
= L
OU

1
. (8.103c)
From equation (8.103a) we deduce that, since the
0
is in the null space of the Liouville operator,
the rst term in the expansion is a function of the Hamiltonian z(p, q) =
1
2
p
2
+ V (q):

0
=
0
(z(p, q)).
Now we want to obtain an equation for
0
by using the solvability condition for (8.103b). To this
end, we multiply this equation by an arbitrary function of z, g = g(z) and integrate over p and q to
obtain
_
+

(p +L
OU

0
) g(z(p, q)) dpdq = 0.
We change now from p, q coordinates to z, q, so that the above integral becomes
_
+
E
min
_

g(z) (p(z, q) +L
OU

0
(z))
1
p(z, q)
dzdq = 0,
where J = p
1
(z, q) is the Jacobian of the transformation. Operator L
0
, when applied to functions
of the Hamiltonian, becomes:
L
OU
= (
1
p
2
)

z
+
1
p
2

2
z
2
.
Hence, the integral equation for
0
(z) becomes
_
+
E
min
_

g(z)
_
p(z, q) +
_
(
1
p
2
)

z
+
1
p
2

2
z
2
_

0
(z)
_
1
p(z, q)
dzdq = 0.
172
Let E
0
denote the critical energy, i.e. the energy along the separatrix (homoclinic orbit). We set
S(z) =
_
x
2
(z)
x
1
(z)
p(z, q) dq, T(z) =
_
x
2
(z)
x
1
(z)
1
p(z, q)
dq,
where Riskens notation [82, p. 301] has been used for x
1
(z) and x
2
(z).
We need to consider the cases
_
z > E
0
, p > 0
_
,
_
z > E
0
, p < 0
_
and
_
E
min
< z < E
0
_
separately.
We consider rst the case E > E
0
, p > 0. In this case x
1
(x) = , x
2
(z) = . We can
perform the integration with respect to q to obtain
_
+
E
0
g(z)
_
2 +
_
(
1
T(z) S(z))

z
+
1
S(z)

2
z
2
_

0
(z)
_
dz = 0,
This equation is valid for every test function g(z), from which we obtain the following differential
equation for
0
:
L :=
1
1
T(z)
S(z)
tt
+
_
1
T(z)
S(z)
1
_

t
=
2
T(z)
, (8.104)
where primes denote differentiation with respect to z and where the subscript 0 has been dropped
for notational simplicity.
A similar calculation shows that in the regions E > E
0
, p < 0 and E
min
< E < E
0
the
equation for
0
is
L =
2
T(z)
, E > E
0
, p < 0 (8.105)
and
L = 0, E
min
< E < E
0
. (8.106)
Equations (8.104), (8.105), (8.106) are augmented with condition (8.85) and a continuity condition
at the critical energy [27]
2
t
3
(E
0
) =
t
1
(E
0
) +
t
2
(E
0
), (8.107)
where
1
,
2
,
3
are the solutions of equations (8.104), (8.105) and (8.106), respectively.
The average of a function h(q, p) = h(q, p(z, q)) can be written in the form [82, p. 303]
h(q, p))

:=
_

h(q, p)

(q, p) dqdp
= Z
1

_
+
E
min
_
x
2
(z)
x
1
(z)
_
h(q, p(z, q)) + h(q, p(z, q))
_
(p(q, z))
1
e
z
dzdq,
173
where the partition function is
Z

=
_
2

e
V (q)
dq.
From equation (8.106) we deduce that
3
(z) = 0. Furthermore, we have that
1
(z) =
2
(z).
These facts, together with the above formula for the averaging with respect to the Boltzmann
distribution, yield:
D = p(p, q))

= p
0
)

+O(1) (8.108)

Z
1

_
+
E
0

0
(z)e
z
dzO(1)
=
4

Z
1

_
+
E
0

0
(z)e
z
dz, (8.109)
to leading order in , and where
0
(z) is the solution of the two point boundary value prob-
lem (8.104). We remark that if we start with formula D =
1
[
p
[
2
)

for the diffusion coef-


cient, we obtain the following formula, which is equivalent to (8.109):
D =
4

Z
1

_
+
E
0
[
z

0
(z)[
2
e
z
dz.
Now we solve the equation for
0
(z) (for notational simplicity, we will drop the subscript 0 ).
Using the fact that S
t
(z) = T(z), we rewrite (8.104) as

1
(S
t
)
t
+ S
t
= 2.
This equation can be rewritten as

1
_
e
z
S
t
_
= e
z
.
Condition (8.85) implies that the derivative of the unique solution of (8.104) is

t
(z) = S
1
(z).
We use this in (8.109), together with an integration by parts, to obtain the following formula for
the diffusion coefcient:
D =
1

8
2
Z
1


1
_
+
E
0
e
z
S(z)
dz. (8.110)
We emphasize the fact that this formula is exact in the limit as 0 and is valid for all periodic
potentials and for all values of the temperature.
174
Consider now the case of the nonlinear pendulum V (q) = cos(q). The partition function is
Z

=
(2)
3/2

1/2
J
0
(),
where J
0
() is the modied Bessel function of the rst kind. Furthermore, a simple calculation
yields
S(z) = 2
5/2

z + 1E
_
_
2
z + 1
_
,
where E() is the complete elliptic integral of the second kind. The formula for the diffusion
coefcient becomes
D =
1

2
1/2
J
0
()
_
+
1
e
z

z + 1E(
_
2/(z + 1))
dz. (8.111)
We use now the asymptotic formula J
0
() (2)
1/2
e

, 1 and the fact that E(1) = 1 to


obtain the small temperature asymptotics for the diffusion coefcient:
D =
1

2
e
2
, 1, (8.112)
which is precisely formula (??), obtained by Risken.
Unlike the overdamped limit which is treated in the next section, it is not straightforward to ob-
tain the next order correction in the formula for the effective diffusivity. This is because, due to the
discontinuity of the solution of the Poisson equation (8.88) along the separatrix. In particular, the
next order correction to when 1 is of (
1/2
), rather than (1) as suggested by ansatz (8.102).
Upon combining the formula for the diffusion coefcient and the formula for the hopping rate
from Kramers theory [41, eqn. 4.48(a)] we can obtain a formula for the mean square jump length
at low friction. For the cosine potential, and for 1, this formula is

2
) =

2
8
2

2
for 1, 1. (8.113)
The Overdamped Limit
In this subsection we study the large asymptotics of the diffusion coefcient. As in the previous
case, we use singular perturbation theory, e.g. [42, Ch. 8]. The regularity of the solution of (8.88)
when 1 will enable us to obtain the rst two terms in the
1

expansion without any difculty.


175
We set =
1

. The differential operator L


0
becomes
L
0
=
1

L
OU
+L
H
.
We look for a solution of (8.88) in the form of a power series expansion in :
=
0
+
1
+
2

2
+
3

3
+ . . . (8.114)
We substitute this into (8.88) and obtain the following sequence of equations:
L
OU

0
= 0, (8.115a)
L
OU

1
= p +L
H

0
, (8.115b)
L
OU

2
= L
H

1
, (8.115c)
L
OU

3
= L
H

2
. (8.115d)
The null space of the Ornstein-Uhlenbeck operator L
0
consists of constants in p. Consequently,
from the rst equation in (8.115) we deduce that the rst term in the expansion in independent of
p,
0
= (q). The second equation becomes
L
OU

1
= p(1 +
q
).
Let

(p) =
_
2

1
2
e

p
2
2
,
be the invariant distribution of the OU process (i.e. L

OU

(p) = 0). The solvability condition for


an equation of the form L
OU
= f requires that the right hand side averages to 0 with respect
to

(p), i.e. that the right hand side of the equation is orthogonal to the null space of the adjoint
of L
OU
. This condition is clearly satised for the equation for
1
. Thus, by Fredholm alternative,
this equation has a solution which is

1
(p, q) = (1 +
q
)p +
1
(q),
where the function
1
(q) of is to be determined. We substitute this into the right hand side of the
third equation to obtain
L
OU

2
= p
2

2
q

q
V (1 +
q
) + p
q

1
(q).
176
From the solvability condition for this we obtain an equation for (q):

2
q

q
V (1 +
q
) = 0, (8.116)
together with the periodic boundary conditions. The derivative of the solution of this two-point
boundary value problem is

q
+ 1 =
2
_

e
V (q)
dq
e
V (q)
. (8.117)
The rst two terms in the large expansion of the solution of equation (8.88) are
(p, q) = (q) +
1

(1 +
q
) +O
_
1

2
_
,
where (q) is the solution of (8.116). Substituting this in the formula for the diffusion coefcient
and using (8.117) we obtain
D =
_

(p, q) dpdq =
4
2
Z

Z
+O
_
1

3
_
,
where Z =
_

e
V (q)
,

Z =
_

e
V (q)
. This is, of course, the Lifson-Jackson formula which
gives the diffusion coefcient in the overdamped limit [54]. Continuing in the same fashion, we
can also calculate the next two terms in the expansion (8.114), see Exercise 4. From this, we can
compute the next order correction to the diffusion coefcient. The nal result is
D =
4
2
Z

Z

4
2
Z
1

3
Z

Z
2
+O
_
1

5
_
, (8.118)
where Z
1
=
_

[V
t
(q)[
2
e
V (q)
dq.
In the case of the nonlinear pendulum, V (q) = cos(q), formula (8.118) gives
D =
1

J
2
0
()

3
_
J
2
()
J
3
0
()
J
2
0
()
_
+O
_
1

5
_
, (8.119)
where J
n
() is the modied Bessel function of the rst kind.
In the multidimensional case, a similar analysis leads to the large gamma asymptotics:
, D) =
1

, D
0
) +O
_
1

3
_
,
where is an arbitrary unit vector in R
d
and D
0
is the diffusion coefcient for the Smoluchowski
(overdamped) dynamics:
D
0
= Z
1
_
R
d
_
L
V

_
e
V (q)
dq (8.120)
177
where
L
V
=
q
V
q
+
1

q
and (q) is the solution of the PDE L
V
=
q
V with periodic boundary conditions.
Now we prove several properties of the effective diffusion tensor in the overdamped limit. For
this we will need the following integration by parts formula
_
T
d
_

_
dy =
_
T
d
_

y
()
y

_
dy =
_
T
d
(
y
) dy. (8.121)
The proof of this formula is left as an exercise, see Exercise 5.
Theorem 8.6.1. The effective diffusion tensor D
0
(8.120) satises the upper and lower bounds
D
Z

Z
, /) D[[
2
R
d
, (8.122)
where

Z =
_
T
d
e
V (y)/D
dy.
In particular, diffusion is always depleted when compared to molecular diffusivity. Furthermore,
the effective diffusivity is symmetric.
Proof. The lower bound follows from the general lower bound (??), equation (??) and the formula
for the Gibbs measure. To establish the upper bound, we use (8.121) and (??) to obtain
/ = DI + 2D
_
T
d
()
T
dy +
_
T
d

y
V dy
= DI 2D
_
T
d

y
dy +
_
T
d

y
V dy
= DI 2
_
T
d

y
V dy +
_
T
d

y
V dy
= DI
_
T
d

y
V dy
= DI
_
T
d
_
L
0

_
dy
= DI D
_
T
d
_

y

y

_
dy. (8.123)
Hence, for

= ,
, /) = D[[
2
D
_
T
d
[
y

[
2
dy
D[[
2
.
This proves depletion. The symmetry of / follows from (8.123).
178
The One Dimensional Case
The one dimensional case is always in gradient form: b(y) =
y
V (y). Furthermore in one
dimension we can solve the cell problem (??) in closed form and calculate the effective diffusion
coefcient explicitlyup to quadratures. We start with the following calculation concerning the
structure of the diffusion coefcient.
/ = D + 2D
_
1
0

y
dy +
_
1
0

y
V dy
= D + 2D
_
1
0

y
dy + D
_
1
0

y
dy
= D + 2D
_
1
0

y
dy D
_
1
0

y
dy
= D
_
1
0
_
1 +
y

_
dy. (8.124)
The cell problem (??) in one dimension is
D
yy

y
V
y
=
y
V. (8.125)
We multiply equation (8.125) by e
V (y)/D
to obtain

y
_

y
e
V (y)/D
_
=
y
_
e
V (y)/D
_
.
We integrate this equation from 0 to 1 and multiply by e
V (y)/D
to obtain

y
(y) = 1 + c
1
e
V (y)/D
.
Another integration yields
(y) = y + c
1
_
y
0
e
V (y)/D
dy + c
2
.
The periodic boundary conditions imply that (0) = (1), from which we conclude that
1 + c
1
_
1
0
e
V (y)/D
dy = 0.
Hence
c
1
=
1

Z
,

Z =
_
1
0
e
V (y)/D
dy.
179
We deduce that

y
= 1 +
1

Z
e
V (y)/D
.
We substitute this expression into (8.124) to obtain
/ =
D
Z
_
1
0
(1 +
y
(y)) e
V (y)/D
dy
=
D
Z

Z
_
1
0
e
V (y)/D
e
V (y)/D
dy
=
D
Z

Z
, (8.126)
with
Z =
_
1
0
e
V (y)/D
dy,

Z =
_
1
0
e
V (y)/D
dy. (8.127)
The Cauchy-Schwarz inequality shows that Z

Z 1. Notice that in the onedimensional case the
formula for the effective diffusivity is precisely the lower bound in (8.122). This shows that the
lower bound is sharp.
Example 8.6.2. Consider the potential
V (y) =
_
a
1
: y [0,
1
2
],
a
2
: y (
1
2
, 1],
(8.128)
where a
1
, a
2
are positive constants.
4
It is straightforward to calculate the integrals in (8.127) to obtain the formula
/ =
D
cosh
2
_
a
1
a
2
D
_. (8.129)
In Figure 8.2 we plot the effective diffusivity given by (8.129) as a function of the molecular
diffusivity D. We observe that / decays exponentially fast in the limit as D 0.
8.6.1 Brownian Motion in a Tilted Periodic Potential
In this appendix we use our method to obtain a formula for the effective diffusion coefcient of an
overdamped particle moving in a one dimensional tilted periodic potential. This formula was rst
4
Of course, this potential is not even continuous, let alone smooth, and the theory as developed in this chapter
does not apply. It is possible, however, to consider a regularized version of this discontinuous potential and then
homogenization theory applies.
180
Figure 8.2: Effective diffusivity versus molecular diffusivity for the potential (8.128).
derived and analyzed in [80, 79] without any appeal to multiscale analysis. The equation of motion
is
x = V
t
(x) + F +

2D, (8.130)
where V (x) is a smooth periodic function with period L, F and D > 0 constants and (t) standard
white noise in one dimension. To simplify the notation we have set = 1.
The stationary FokkerPlanck equation corresponding to(8.130) is

x
((V
t
(x) F) (x) + D
x
(x)) = 0, (8.131)
with periodic boundary conditions. Formula (10.13) for the effective drift now becomes
U
eff
=
_
L
0
(V
t
(x) + F)(x) dx. (8.132)
The solution of eqn. (8.131) is [77, Ch. 9]
(x) =
1
Z
_
x+L
x
dyZ
+
(y)Z

(x), (8.133)
with
Z

(x) := e

1
D
(V (x)Fx)
,
181
and
Z =
_
L
0
dx
_
x+L
x
dyZ
+
(y)Z

(x). (8.134)
Upon using (8.133) in (8.132) we obtain [77, Ch. 9]
U
eff
=
DL
Z
_
1 e

F L
D
_
. (8.135)
Our goal now is to calculate the effective diffusion coefcient. For this we rst need to solve the
Poisson equation (10.20) which now becomes
L(x) := D
xx
(x) + (V
t
(x) + F)
x
= V
t
(x) F + U
eff
, (8.136)
with periodic boundary conditions. Then we need to evaluate the integrals in (10.18):
D
eff
= D +
_
L
0
(V
t
(x) + F U
eff
)(x) dx + 2D
_
L
0

x
(x)(x) dx.
It will be more convenient for the subsequent calculation to rewrite the above formula for the effec-
tive diffusion coefcient in a different form. The fact that (x) solves the stationary FokkerPlanck
equation, together with elementary integrations by parts yield that, for all sufciently smooth peri-
odic functions (x),
_
L
0
(x)(L(x))(x) dx = D
_
L
0
(
x
(x))
2
(x) dx.
Now we have
D
eff
= D +
_
L
0
(V
t
(x) + F U
eff
)(x)(x) dx + 2D
_
L
0

x
(x)(x) dx
= D +
_
L
0
(L(x))(x)(x) dx + 2D
_
L
0

x
(x)(x) dx
= D + D
_
L
0
(
x
(x))
2
(x) dx + 2D
_
L
0

x
(x)(x) dx
= D
_
L
0
(1 +
x
(x))
2
(x) dx. (8.137)
Now we solve the Poisson equation (8.136) with periodic boundary conditions. We multiply the
equation by Z

(x) and divide through by D to rewrite it in the form

x
(
x
(x)Z

(x)) =
x
Z

(x) +
U
eff
D
Z

(x).
182
We integrate this equation from x L to x and use the periodicity of (x) and V (x) together with
formula (8.135) to obtain

x
(x)Z

(x)
_
1 e

F L
D
_
= Z

(x)
_
1 e

F L
D
_
+
L
Z
_
1 e

F L
D
_
_
x
xL
Z

(y) dy,
from which we immediately get

x
(x) + 1 =
1
Z
_
x
xL
Z

(y)Z
+
(x) dy.
Substituting this into (8.137) and using the formula for the invariant distribution (8.133) we nally
obtain
D
eff
=
D
Z
3
_
L
0
(I
+
(x))
2
I

(x) dx, (8.138)


with
I
+
(x) =
_
x
xL
Z

(y)Z
+
(x) dy and I

(x) =
_
x+L
x
Z
+
(y)Z

(x) dy.
Formula (8.138) for the effective diffusion coefcient (formula (22) in [79]) is the main result of
this section.
8.7 Numerical Solution of the Klein-Kramers Equation
8.8 Discussion and Bibliography
The rigorous study of the overdamped limit can be found in [68]. A similar approximation theorem
is also valid in innite dimensions (i.e. for SPDEs); see [10, 11].
More information about the underdamped limit of the Langevin equation can be found at [89,
28, 29].
We also mention in passing that the various formulae for the effective diffusion coefcient
that have been derived in the literature [34, 54, 80, 85] can be obtained from equation (??): they
correspond to cases where equations (??) and (??) can be solved analytically. An examplethe
calculation of the effective diffusion coefcient of an overdamped Brownian particle in a tilted
periodic potentialis presented in appendix. Similar calculations yield analytical expressions for
all other exactly solvable models that have been considered in the literature.
183
8.9 Exercises
1. Let

L be the generator of the two-dimensional Ornstein-Uhlenbeck operator (8.17). Calculate
the eigenvalues and eigenfunctions of

L. Show that there exists a transformation that transforms

L into the Schr odinger operator of the two-dimensional quantum harmonic oscillator.
2. Let

L be the operator dened in (8.34)
(a) Show by direct substitution that

L can be written in the form

L =
1
(c

(c
+
)

2
(d

(d
+
)

.
(b) Calculate the commutators
[(c
+
)

, (c

], [(d
+
)

, (d

], [(c

, (d

], [

L, (c

], [

L, (d

].
3. Show that the operators a

, b

dened in (8.15) and (8.16) satisfy the commutation relations


[a
+
, a

] = 1, (8.139a)
[b
+
, b

] = 1, (8.139b)
[a

, b

] = 0. (8.139c)
4. Obtain the second term in the expansion (8.118).
5. Prove formula (8.121).
184
Chapter 9
Exit Time Problems
9.1 Introduction
9.2 Brownian Motion in a Bistable Potential
There are many systems in physics, chemistry and biology that exist in at least two stable states.
Among the many applications we mention the switching and storage devices in computers. An-
other example is biological macromolecules that can exist in many different states. The problems
that we would like to solve are:
How stable are the various states relative to each other.
How long does it take for a system to switch spontaneously from one state to another?
How is the transfer made, i.e. through what path in the relevant state space? There is a lot of
important current work on this problem by E, Vanden Eijnden etc.
How does the system relax to an unstable state?
We can separate between the 1d problem, the nite dimensional problem and the innite dimen-
sional problem (SPDEs). We we will solve completely the one dimensional problem and discuss
in some detail about the nite dimensional problem. The innite dimensional situation is an ex-
tremely hard problem and we will only make some remarks. The study of bistability and metasta-
bility is a very active research area, in particular the development of numerical methods for the
calculation of various quantities such as reaction rates, transition pathways etc.
185
We will mostly consider the dynamics of a particle moving in a bistable potential, under the
inuence of thermal noise in one dimension:
x = V
t
(x) +
_
2k
B
T

. (9.1)
An example of the class of potentials that we will consider is shown in Figure. It has to local
minima, one local maximum and it increases at least quadratically at innity. This ensures that the
state space is compact, i.e. that the particle cannot escape at innity. The standard potential that
satises these assumptions is
V (x) =
1
4
x
4

1
2
x
2
+
1
4
. (9.2)
It is easily checked that this potential has three local minima, a local maximum at x = 0 and two
local minima at x = 1. The values of the potential at these three points are:
V (1) = 0, V (0) =
1
4
.
We will say that the height of the potential barrier is
1
4
. The physically (and mathematically!)
interesting case is when the thermal uctuations are weak when compared to the potential barrier
that the particle has to climb over.
More generally, we assume that the potential has two local minima at the points a and c and a
local maximum at b. Let us consider the problem of the escape of the particle from the left local
minimum a. The potential barrier is then dened as
E = V (b) V (a).
186
Our assumption that the thermal uctuations are weak can be written as
k
B
T
E
1.
In this limit, it is intuitively clear that the particle is most likely to be found at either a or c. There
it will perform small oscillations around either of the local minima. This is a result that we can
obtain by studying the small temperature limit by using perturbation theory. The result is that we
can describe locally the dynamics of the particle by appropriate OrnsteinUhlenbeck processes.
Of course, this result is valid only for nite times: at sufciently long times the particle can escape
from the one local minimum, a say, and surmount the potential barrier to end up at c. It will then
spend a long time in the neighborhood of c until it escapes again the potential barrier and end at
a. This is an example of a rare event. The relevant time scale, the exit time or the mean rst
passage time scales exponentially in := (k
B
T)
1
:
=
1
exp(E).
It is more customary to calculate the reaction rate :=
1
which gives the rate with which
particles escape from a local minimum of the potential:
= exp(E). (9.3)
It is very important to notice that the escape from a local minimum, i.e. a state of local stability,
can happen only at positive temperatures: it is a noise assisted event. Indeed, consider the case
T = 0. The equation of motion becomes
x = V
t
(x), x(0) = x
0
.
In this case the potential becomes a Lyapunov function:
dx
dt
= V
t
(x)
dx
dt
= (V
t
(x))
2
< 0.
Hence, depending on the initial condition the particle will converge either to a or c. The particle
cannot escape from either state of local stability.
On the other hand, at high temperatures the particle does not see the potential barrier: it
essentially jumps freely from one local minimum to another.
187
To get a better understanding of the dependence of the dynamics on the depth of the potential
barrier relative to temperature, we solve the equation of motion (10.3) numerically. In Figure we
present the time series of the particle position. We observe that at small temperatures the particle
spends most of its time around x = 1 with rapid transitions from 1 to 1 and back.
9.3 The Mean First Passage Time
The Arrhenius-type factor in the formula for the reaction rate, eqn. (9.3) is intuitively and it has
been observed experimentally in the late nineteenth century by Arrhenius and others. What is
extremely important both from a theoretical and an applied point of view is the calculation of the
prefactor , the rate coefcient. A systematic approach for the calculation of the rate coefcient,
as well as the justication of the Arrhenius kinetics, is that of the mean rst passage time method
(MFPT). Since this method is of independent interest and is useful in various other contexts, we
will present it in a quite general setting and apply it to the problem of the escape from a potential
barrier in later sections. We will rst treat the one dimensional problem and then extend the theory
to arbitrary nite dimensions.
We will restrict ourselves to the case of homogeneous Markov processes. It is not very easy to
extend the method to non-Markovian processes.
9.3.1 The Boundary Value Problem for the MFPT
Let X
t
be a continuous time diffusion process on R
d
whose evolution is governed by the SDE
dX
x
t
= b(X
x
t
) dt + (X
x
t
) dW
t
, X
x
0
= x. (9.4)
Let D be a bounded subset of R
d
with smooth boundary. Given x D, we want to know how long
it takes for the process X
t
to leave the domain D for the rst time

x
D
= inf t 0 : X
x
t
/ D .
Clearly, this is a random variable. The average of this random variable is called the mean rst
passage time MFPT or the rst exit time:
(x) := E
x
D
.
We can calculate the MFPT by solving an appropriate boundary value problem.
188
Theorem 9.3.1. The MFPT is the solution of the boundary value problem
L = 1, x D, (9.5a)
= 0, x D, (9.5b)
where L is the generator of the SDE 9.5.
The homogeneous Dirichlet boundary conditions correspond to an absorbing boundary: the
particles are removed when they reach the boundary. Other choices of boundary conditions are
also possible. The rigorous proof of Theorem 9.3.1 is based on It os formula.
Proof. Let (X, x, t) be the probability distribution of the particles that have not left the domain
D at time t. It solves the FP equation with absorbing boundary conditions.

t
= L

, (X, x, 0) = (X x), [
D
= 0. (9.6)
We can write the solution to this equation in the form
(X, x, t) = e
/

t
(X x),
where the absorbing boundary conditions are included in the denition of the semigroup e
/

t
. The
homogeneous Dirichlet (absorbing) boundary conditions imply that
lim
t+
(X, x, t) = 0.
That is: all particles will eventually leave the domain. The (normalized) number of particles that
are still inside D at time t is
S(x, t) =
_
D
(X, x, t) dx.
Notice that this is a decreasing function of time. We can write
S
t
= f(x, t),
189
where f(x, t) is the rst passage times distribution. The MFPT is the rst moment of the distri-
bution f(x, t):
(x) =
_
+
0
f(s, x)s ds =
_
+
0

dS
ds
s ds
=
_
+
0
S(s, x) ds =
_
+
0
_
D
(X, x, s) dXds
=
_
+
0
_
D
e
/

s
(X x) dXds
=
_
+
0
_
D
(X x)
_
e
/s
1
_
dXds =
_
+
0
_
e
/s
1
_
ds.
We apply L to the above equation to deduce:
L =
_
+
0
_
Le
/t
1
_
dt =
_
t
0
d
dt
_
Le
/t
1
_
dt
= 1.
9.3.2 Examples
In this section we consider a fewsimple examples for which we can calculate the mean rst passage
time in closed form.
Brownian motion with one absorbing and one reecting boundary.
We consider the problem of Brownian motion moving in the interval [a, b]. We assume that the left
boundary is absorbing and the right boundary is reecting. The boundary value problem for the
MFPT time becomes

d
2

dx
2
= 1, (a) = 0,
d
dx
(b) = 0. (9.7)
The solution of this equation is
(x) =
x
2
2
+ bx + a
_
a
2
b
_
.
The MFPT time for Brownian motion with one absorbing and one reecting boundary in the inter-
val [1, 1] is plotted in Figure 9.3.2.
190
Figure 9.1: The mean rst passage time for Brownian motion with one absorbing and one reecting
boundary.
Brownian motion with two reecting boundaries.
Consider again the problem of Brownian motion moving in the interval [a, b], but now with both
boundaries being absorbing. The boundary value problem for the MFPT time becomes

d
2

dx
2
= 1, (a) = 0, (b) = 0. (9.8)
The solution of this equation is
(x) =
x
2
2
+ bx + a
_
a
2
b
_
.
The MFPT time for Brownian motion with two absorbing boundaries in the interval [1, 1] is
plotted in Figure 9.3.2.
The Mean First Passage Time for a One-Dimensional Diffusion Process
Consider now the mean exit time problem from an interval [a, b] for a general one-dimensional
diffusion process with generator
L = a(x)
d
dx
+
1
2
b(x)
d
2
dx
2
,
where the drift and diffusion coefcients are smooth functions and where the diffusion coefcient
b(x) is a strictly positive function (uniform ellipticity condition). In order to calculate the mean
191
Figure 9.2: The mean rst passage time for Brownian motion with two absorbing boundaries.
rst passage time we need to solve the differential equation

_
a(x)
d
dx
+
1
2
b(x)
d
2
dx
2
_
= 1, (9.9)
together with appropriate boundary conditions, depending on whether we have one absorbing and
one reecting boundary or two absorbing boundaries. To solve this equation we rst dene the
function (x) through
t
(x) = 2a(x)/b(x) to write (9.9) in the form
_
e
(x)

t
(x)
_
t
=
2
b(x)
e
(x)
The general solution of (9.9) is obtained after two integrations:
(x) = 2
_
x
a
e
(z)
dz
_
z
a
e
(y)
b(y)
dy + c
1
_
x
a
e
(y)
dy + c
2
,
where the constants c
1
and c
2
are to be determined from the boundary conditions. When both
boundaries are absorbing we get
(x) = 2
_
x
a
e
(z)
dz
_
z
a
e
(y)
b(y)
dy +
2

Z
Z
_
x
a
e
(y)
dy. (9.10)
9.4 Escape from a Potential Barrier
In this section we use the theory developed in the previous section to study the long time/small
temperature asymptotics of solutions to the Langevin equation for a particle moving in a one
192
dimensional potential of the form (9.2):
x = V
t
(x) x +
_
2k
B
T

W. (9.11)
In particular, we justify the Arrhenius formula for the reaction rate
= () exp(E)
and we calculate the escape rate = (). In particular, we analyze the dependence of the escape
rate on the friction coefcient. We will see that the we need to distinguish between the cases of
large and small friction coefcients.
9.4.1 Calculation of the Reaction Rate in the Overdamped Regime
We consider the Langevin equation (9.11) in the limit of large friction. As we saw in Section 8.4,
in the overdamped limit 1, the solution to (9.11) can be approximated by the solution to the
Smoluchowski equation (10.3)
x = V
t
(x) +
_
2
1
W.
We want to calculate the rate of escape from the potential barrier in this case. We assume that the
particle is initially at x
0
which is near a, the left potential minimum. Consider the boundary value
problem for the MFPT of the one dimensional diffusion process (10.3) from the interval (a, b):

1
e
V

x
_
e
V

_
= 1 (9.12)
We choose reecting BC at x = a and absorbing B.C. at x = b. We can solve (9.12) with these
boundary conditions by quadratures:
(x) =
1
_
b
x
dye
V (y)
_
y
0
dze
V (z)
. (9.13)
Now we can solve the problem of the escape from a potential well: the reecting boundary is at
x = a, the left local minimum of the potential, and the absorbing boundary is at x = b, the local
maximum. We can replace the B.C. at x = a by a repelling B.C. at x = :
(x) =
1
_
b
x
dye
V (y)
_
y

dze
V (z)
.
193
When E
b
1 the integral wrt z is dominated by the value of the potential near a. Furthermore,
we can replace the upper limit of integration by :
_
z

exp(V (z)) dz
_
+

exp(V (a)) exp


_

2
0
2
(z a)
2
_
dz
= exp (V (a))

2
0
,
where we have used the Taylor series expansion around the minimum:
V (z) = V (a) +
1
2

2
0
(z a)
2
+ . . .
Similarly, the integral wrt y is dominated by the value of the potential around the saddle point. We
use the Taylor series expansion
V (y) = V (b)
1
2

2
b
(y b)
2
+ . . .
Assuming that x is close to a, the minimum of the potential, we can replace the lower limit of
integration by . We nally obtain
_
b
x
exp(V (y)) dy
_
b

exp(V (b)) exp


_

2
b
2
(y b)
2
_
dy
=
1
2
exp (V (b))

2
b
.
Putting everything together we obtain a formula for the MFPT:
(x) =

b
exp (E
b
) .
The rate of arrival at b is 1/. Only have of the particles escape. Consequently, the escape rate (or
reaction rate), is given by
1
2
:
=

0

b
2
exp (E
b
) .
9.4.2 The Intermediate Regime: = O(1)
Consider now the problem of escape from a potential well for the Langevin equation
q =
q
V (q) q +
_
2
1
W. (9.14)
194
The reaction rate depends on the ction coefcient and the temperature. In the overdamped
limit ( 1) we retrieve (??), appropriately rescaled with :
=

0

b
2
exp (E
b
) . (9.15)
We can also obtain a formula for the reaction rate for = O(1):
=
_

2
4

2
b


2

0
2
exp (E
b
) . (9.16)
Naturally, in the limit as +(9.16) reduces to (9.15)
9.4.3 Calculation of the Reaction Rate in the energy-diffusion-limited regime
In order to calculate the reaction rate in the underdamped or energy-diffusion-limited regime
1 we need to study the diffusion process for the energy, (8.69) or (8.70). The result is
= I(E
b
)

0
2
e
E
b
, (9.17)
where I(E
b
) denotes the action evaluated at b.
9.5 Discussion and Bibliography
The calculation of reaction rates and the stochastic modeling of chemical reactions has been a very
active area of research since the 30s. One of the rst methods that were developed was that of
transition state theory. Kramers developed his theory in his celebrated paper [49]. In this chapter
we have based our approach to the calculation of the mean rst passage time. Our analysis is
based mostly on [35, Ch. 5, Ch. 9], [96, Ch. 4] and the excellent review article [41]. We highly
recommend this review article for further information on reaction rate theory. See also [40] and
the review article of Melnikov (1991). A formula for the escape rate which is valid for all values
of friction coefcient was obtained by Melnikov and Meshkov in 1986, J. Chem. Phys 85(2) 1018-
1027. This formula requires the calculation of integrals and it reduced to (9.15) and (9.17) in the
overdamped and underdamped limits, respectively.
There are many applications of interest where it is important to calculate reaction rates for
non-Markovian Langevin equations of the form
x = V
t
(x)
_
t
0
b(t s) x(s) ds + (t) (9.18a)
195
(t)(0)) = k
B
TM
1
(t) (9.18b)
We will derive generalized nonMarkovian equations of the form (9.18a), together with the
uctuationdissipation theorem (11.10), in Chapter 11. The calculation of reaction rates for the
generalized Langevin equation is presented in [40].
The long time/small temperature asymptotics can be studied rigorously by means of the theory
of Freidlin-Wentzell [29]. See also [6]. A related issue is that of the small temperature asymptotics
for the eigenvalues (in particular, the rst eigenvalue) of the generator of the Markov process x(t)
which is the solution of
x = V (x) +
_
2k
B
T

W.
The theory of Freidlin and Wentzell has also been extended to innite dimensional problems. This
is a very important problem in many applications such as micromagnetics...We refer to CITE...
for more details.
A systematic study of the problem of the escape from a potential well was developed by
Matkowsky, Schuss and collaborators [86, 63, 64]. This approach is based on a systematic use
of singular perturbation theory. In particular, the calculation of the transition rate which is uni-
formly valid in the friction coefcient is presented in [64]. This formula is obtained through a
careful analysis of the PDE
p
q

q
V
p
+ (p
p
+ k
B
T
2
p
) = 1,
for the mean rst passage time . The PDE is equipped, of course, with the appropriate boundary
conditions. Singular perturbation theory is used to study the small temperature asymptotics of
solutions to the boundary value problem. The formula derived in this paper reduces to the formulas
which are valid at large and small values of the friction coefcient at the appropriate asymptotic
limits.
The study of rare transition events between long lived metastable states is a key feature in
many systems in physics, chemistry and biology. Rare transition events play an important role,
for example, in the analysis of the transition between different conformation states of biological
macromolecules such as DNA [87]. The study of rare events is one of the most active research
areas in the applied stochastic processes. Recent developments in this area involve the transition
path theory of W. E and Vanden Eijnden. Various simple applications of this theory are presented
in Metzner, Schutte et al 2006. As in the mean rst passage time approach, transition path theory
196
is also based on the solution of an appropriate boundary value problem for the so-called commitor
function.
9.6 Exercises
197
198
Chapter 10
Stochastic Resonance and Brownian Motors
10.1 Introduction
10.2 Stochastic Resonance
10.3 Brownian Motors
10.4 Introduction
Particle transport in spatially periodic, noisy systems has attracted considerable attention over the
last decades, see e.g. [82, Ch. 11], [78] and the references therein. There are various physical
systems where Brownian motion in periodic potentials plays a prominent role, such as Josephson
junctions [3], surface diffusion [52, 84] and superionic conductors [33]. While the system of a
Brownian particle in a periodic potential is kept away from equilibrium by an external, determin-
istic or random, force, detailed balance does not hold. Consequently, and in the absence of any
spatial symmetry, a net particle current will appear, without any violation of the second law of
thermodynamics. It was this fundamental observation [60] that led to a revival of interest in the
problem of particle transport in periodic potentials with broken spatial symmetry. These types of
non equilibrium systems, which are often called Brownian motors or ratchets, have found new
and exciting applications e.g as the basis of theoretical models for various intracellular transport
processes such as molecular motors [9]. Furthermore, various experimental methods for particle
separation have been suggested which are based on the theory of Brownian motors [7].
The long time behavior of a Brownian particle in a periodic potential is determined uniquely
199
by the effective drift and the effective diffusion tensor which are dened, respectively, as
U
eff
= lim
t
x(t) x(0))
t
(10.1)
and
D
eff
= lim
t
x(t) x(t))) (x(t) x(t))))
2t
. (10.2)
Here x(t) denotes the particle position, ) denotes ensemble average and stands for the tensor
product. Indeed, an argument based on the central limit theorem [5, Ch. 3], [47] implies that at
long times the particle performs an effective Brownian motion which is a Gaussian process, and
hence the rst two moments are sufcient to determine the process uniquely. The main goal of all
theoretical investigations of noisy, nonequilibrium particle transport is the calculation of (10.1)
and (10.2). One wishes, in particular, to analyze the dependence of these two quantities on the
various parameters of the problem, such as the friction coefcient, the temperature and the particle
mass.
Enormous theoretical effort has been put into the study of Brownian ratchets and, more gen-
erally, of Brownian particles in spatially periodic potentials [78]. The vast majority of all these
theoretical investigations is concerned with the calculation of the effective drift for one dimen-
sional models. This is not surprising, since the theoretical tools that are currently available are
not sufcient for the analytical treatment of the multidimensional problem. This is only possible
when the potential and/or noise are such that the problem can be reduced to a one dimensional one
[19]. For more general multidimensional problems one has to resort to numerical simulations.
There are various applications, however, where the one dimensional analysis is inadequate. As an
example we mention the technique for separation of macromolecules in microfabricated sieves that
was proposed in [14]. In the twodimensional setting considered in this paper, an appropriately
chosen driving force in the y direction produces a constant drift in the x direction, but with a zero
net velocity in the y direction. On the other hand, a force in the x direction produces no drift in the
y direction. The theoretical analysis of this problem requires new technical tools.
Furthermore, the number of theoretical studies related to the calculation of the effective diffu-
sion tensor has also been scarce [34, 55, 79, 80, 85]. In these papers, relatively simple potentials
and/or forcing terms are considered, such as tilting periodic potentials or simple periodic in time
forcing. It is widely recognized that the calculation of the effective diffusion coefcient is tech-
nically more demanding than that of the effective drift. Indeed, as we will show in this paper, it
200
requires the solution of a Poisson equation, in addition to the solution of the stationary Fokker
Planck equation which is sufcient for the calculation of the effective drift. Diffusive, rather than
directed, transport can be potentially extremely important in the design of experimental setups for
particle selection [78, Sec 5.11] [85]. It is therefore desirable to develop systematic tools for the
calculation of the effective diffusion coefcient (or tensor, in the multidimensional setting).
From a mathematical point of view, nonequilibrium systems which are subject to unbiased
noise can be modelled as nonreversible Markov processes [76] and can be expressed in terms
of solutions to stochastic differential equations (SDEs). The SDEs which govern the motion of
a Brownian particle in a periodic potential possess inherent length and time scales: those related
to the spatial period of the potential and the temporal period (or correlation time) of the external
driving force. From this point of view the calculation of the effective drift and the effective dif-
fusion coefcient amounts to studying the behavior of solutions to the underlying SDEs at length
and time scales which are much longer than the characteristic scales of the system. A systematic
methodology for studying problems of this type, which is based on scale separation, has been de-
veloped many years ago [5, ?, ?]. The techniques developed in the aforementioned references are
appropriate for the asymptotic analysis of stochastic systems (and Markov processes in particular)
which are spatially and/or temporally periodic. The purpose of this work is to apply these multi-
scale techniques to the study Brownian motors in arbitrary dimensions, with particular emphasis
to the calculation of the effective diffusion tensor.
The rest of this paper is organized as follows. In section 10.5 we introduce the model that we
will study. In section 10.6 we obtain formulae for the effective drift and the effective diffusion
tensor in the case where all external forces are Markov processes. In section 10.7 we study the
effective diffusion coefcient for a Brownian particle in a periodic potential driven simultaneously
by additive Gaussian white and colored noise. Section ?? is reserved for conclusions. In Appendix
A we derive formulae for the effective drift and the effective diffusion coefcient for the case where
the Brownian particle is driven away from equilibrium by periodic in time external uctuations.
Finally, in appendix Bwe use the method developed in this paper to calculate the effective diffusion
coefcient of an overdamped particle in a one dimensional tilted periodic potential.
201
10.5 The Model
We consider the overdamped ddimensional stochastic dynamics for a state variable x(t) R
d
[78, sec. 3]
x(t) = V (x(t), f(t)) + y(t) +
_
2k
B
T(t), (10.3)
where is the friction coefcient, k
B
the Boltzmann constant and T denotes the temperature. (t)
stands for the standard ddimensional white noise process, i.e.

i
(t)) = 0 and
i
(t)
j
(s)) =
ij
(t s), i, j = 1, . . . d.
We take f(t) and y(t) to be Markov processes with respective state spaces E
f
, E
y
and generators
L
f
, L
y
. The potential V (x, f) is periodic in x for every f, with period L in all spatial directions:
V (x + L e
i
, f) = V (x, f), i = 1, . . . , d,
where e
i

d
i=1
denotes the standard basis of R
d
. We will use the notation Q = [0, L]
d
.
The processes f(t) and y(t) can be continuous in time diffusion processes which are con-
structed as solutions of stochastic differential equations, dichotomous noise [42, Ch. 9], more
general Markov chains etc. The (easier) case where f(t) and y(t) are deterministic, periodic func-
tions of time is treated in the appendix.
For simplicity, we have assumed that the temperature in (10.3) is constant. However, this
assumption is with no loss of generality, since eqn. (10.3) with a time dependent temperature can
be mapped to an equation with constant temperature and an appropriate effective potential [78, sec.
6]. Thus, the above framework is general enough to encompass most of the models that have been
studied in the literature, such as pulsating, tilting, or temperature ratchets. We remark that the state
variable x(t) does not necessarily denote the position of a Brownian particle. We will, however,
refer to x(t) as the particle position in the sequel.
The process $\{x(t), f(t), y(t)\}$ in the extended phase space $\mathbb{R}^d \times E_f \times E_y$ is Markovian with generator
\[
\mathcal{L} = F(x,f,y)\cdot\nabla_x + D\Delta_x + \mathcal{L}_f + \mathcal{L}_y,
\]
where $D := \frac{k_B T}{\gamma}$ and
\[
F(x,f,y) = \frac{1}{\gamma}\big( -\nabla_x V(x,f) + y \big).
\]
To this process we can associate the initial value problem for the backward Kolmogorov equation [69, Ch. 8]
\[
\frac{\partial u}{\partial t} = \mathcal{L} u, \qquad u(x,y,f,t=0) = u_{\mathrm{in}}(x,y,f), \tag{10.4}
\]
which is, of course, the adjoint to the Fokker–Planck equation. Our derivation of formulae for the effective drift and the effective diffusion tensor is based on singular perturbation analysis of the initial value problem (10.4).
10.6 Multiscale Analysis
In this section we derive formulae for the effective drift and the effective diffusion tensor for $x(t)$, the solution of (10.3). Let us outline the basic philosophy behind the derivation of formulae (10.13) and (10.18). We are interested in the long time, large scale behavior of $x(t)$. For the analysis that follows it is convenient to introduce a parameter $\epsilon \ll 1$ which in effect is the ratio between the length scale defined through the period of the potential and a large macroscopic length scale at which the motion of the particle is governed by an effective Brownian motion. The limit $\epsilon \to 0$ corresponds to the limit of infinite scale separation. The behavior of the system in this limit can be analysed using singular perturbation theory.
We remark that the calculations of the effective drift and of the effective diffusion tensor are performed separately, because a different rescaling is needed in each case. This is due to the fact that advection and diffusion have different characteristic time scales.
10.6.1 Calculation of the Effective Drift
The backward Kolmogorov equation reads
\[
\frac{\partial u(x,y,f,t)}{\partial t} = \big( F(x,f,y)\cdot\nabla_x + D\Delta_x + \mathcal{L}_f + \mathcal{L}_y \big)\, u(x,y,f,t). \tag{10.5}
\]
We rescale space and time in (10.5) according to
\[
x \to x/\epsilon, \qquad t \to t/\epsilon
\]
and divide through by $\epsilon$ to obtain
\[
\frac{\partial u^\epsilon}{\partial t} = \frac{1}{\epsilon}\left( F\Big(\frac{x}{\epsilon}, f, y\Big)\cdot \epsilon\nabla_x + \epsilon^2 D\Delta_x + \mathcal{L}_f + \mathcal{L}_y \right) u^\epsilon. \tag{10.6}
\]
We solve (10.6) perturbatively by looking for a solution in the form of a two-scale expansion
\[
u^\epsilon(x,f,y,t) = u_0\Big(x, \frac{x}{\epsilon}, f, y, t\Big) + \epsilon\, u_1\Big(x, \frac{x}{\epsilon}, f, y, t\Big) + \epsilon^2 u_2\Big(x, \frac{x}{\epsilon}, f, y, t\Big) + \dots \tag{10.7}
\]
All terms in the expansion (10.7) are periodic functions of $z = x/\epsilon$. From the chain rule we have
\[
\nabla_x \to \nabla_x + \frac{1}{\epsilon}\nabla_z. \tag{10.8}
\]
Notice that we do not take the terms in the expansion (10.7) to depend explicitly on $t/\epsilon$. This is because the coefficients of the backward Kolmogorov equation (10.6) do not depend explicitly on the fast time $t/\epsilon$. In the case where the fluctuations are periodic, rather than Markovian, in time, we will need to assume that the terms in the multiscale expansion for $u^\epsilon(x,t)$ depend explicitly on $t/\epsilon$. The details are presented in the appendix.
We substitute now (10.7) into (10.6), use (10.8) and treat $x$ and $z$ as independent variables. Upon equating the coefficients of equal powers in $\epsilon$ we obtain the following sequence of equations
\[
\mathcal{L}_0 u_0 = 0, \tag{10.9}
\]
\[
\mathcal{L}_0 u_1 = -\mathcal{L}_1 u_0 + \frac{\partial u_0}{\partial t}, \tag{10.10}
\]
\[
\dots = \dots,
\]
where
\[
\mathcal{L}_0 = F(z,f,y)\cdot\nabla_z + D\Delta_z + \mathcal{L}_y + \mathcal{L}_f \tag{10.11}
\]
and
\[
\mathcal{L}_1 = F(z,f,y)\cdot\nabla_x + 2D\,\nabla_z\cdot\nabla_x.
\]
The operator $\mathcal{L}_0$ is the generator of a Markov process on $Q \times E_y \times E_f$. In order to proceed we need to assume that this process is ergodic: there exists a unique stationary solution of the Fokker–Planck equation
\[
\mathcal{L}_0^* \rho(z,y,f) = 0, \tag{10.12}
\]
with
\[
\int_{Q\times E_y \times E_f} \rho(z,y,f)\, dz\, dy\, df = 1
\]
and
\[
\mathcal{L}_0^* \rho = -\nabla_z\cdot\big( F(z,f,y)\rho \big) + D\Delta_z \rho + \mathcal{L}_y^* \rho + \mathcal{L}_f^* \rho.
\]
In the above $\mathcal{L}_f^*$ and $\mathcal{L}_y^*$ are the Fokker–Planck operators of $f$ and $y$, respectively. The stationary density $\rho(z,y,f)$ satisfies periodic boundary conditions in $z$ and appropriate boundary conditions in $f$ and $y$. We emphasize that the ergodicity of the fast process is necessary for the very existence of an effective drift and an effective diffusion coefficient, and it has been tacitly assumed in all theoretical investigations concerning Brownian motors [78].
Under the assumption that (10.12) has a unique solution, eqn. (10.9) implies, by the Fredholm alternative, that $u_0$ is independent of the fast scales:
\[
u_0 = u(x,t).
\]
Eqn. (10.10) now becomes
\[
\mathcal{L}_0 u_1 = \frac{\partial u(x,t)}{\partial t} - F(z,y,f)\cdot\nabla_x u(x,t).
\]
In order for this equation to be well posed it is necessary that the right hand side averages to $0$ with respect to the invariant distribution $\rho(z,f,y)$. This leads to the following backward Liouville equation
\[
\frac{\partial u(x,t)}{\partial t} = U_{\mathrm{eff}}\cdot\nabla_x u(x,t),
\]
with the effective drift given by
\[
U_{\mathrm{eff}} = \int_{Q\times E_y\times E_f} F(z,y,f)\,\rho(z,y,f)\, dz\, dy\, df
= \frac{1}{\gamma}\int_{Q\times E_y\times E_f} \big( -\nabla_z V(z,f) + y \big)\,\rho(z,y,f)\, dz\, dy\, df. \tag{10.13}
\]
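As a quick consistency check (an illustration added here, under the simplifying assumption $\gamma = 1$), consider the equilibrium situation in which the external fluctuations are switched off, $y \equiv 0$ and $V = V(z)$ only. The stationary density of $\mathcal{L}_0$ is then the Gibbs density and (10.13) gives a vanishing effective drift, as it must:
\[
\rho(z) = \frac{1}{Z} e^{-V(z)/D}, \qquad Z = \int_Q e^{-V(z)/D}\, dz,
\]
\[
U_{\mathrm{eff}} = -\frac{1}{Z}\int_Q \nabla_z V(z)\, e^{-V(z)/D}\, dz
= \frac{D}{Z}\int_Q \nabla_z\big( e^{-V(z)/D} \big)\, dz = 0,
\]
by the periodicity of $e^{-V(z)/D}$. A nonzero effective drift therefore requires the nonequilibrium forcing $f(t)$ or $y(t)$.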
10.6.2 Calculation of the Effective Diffusion Coefficient
We assume for the moment that the effective drift vanishes, $U_{\mathrm{eff}} = 0$. We perform a diffusive rescaling in (10.5)
\[
x \to x/\epsilon, \qquad t \to t/\epsilon^2
\]
and divide through by $\epsilon^2$ to obtain
\[
\frac{\partial u^\epsilon}{\partial t} = \frac{1}{\epsilon^2}\left( F\Big(\frac{x}{\epsilon}, f, y\Big)\cdot \epsilon\nabla_x + \epsilon^2 D\Delta_x + \mathcal{L}_f + \mathcal{L}_y \right) u^\epsilon. \tag{10.14}
\]
We go through the same analysis as in the previous subsection to obtain the following sequence of equations:
\[
\mathcal{L}_0 u_0 = 0, \tag{10.15}
\]
\[
\mathcal{L}_0 u_1 = -\mathcal{L}_1 u_0, \tag{10.16}
\]
\[
\mathcal{L}_0 u_2 = -\mathcal{L}_1 u_1 - \mathcal{L}_2 u_0, \tag{10.17}
\]
\[
\dots = \dots,
\]
where $\mathcal{L}_0$ and $\mathcal{L}_1$ were defined in the previous subsection and
\[
\mathcal{L}_2 = -\frac{\partial}{\partial t} + D\Delta_x.
\]
Equation (10.15) implies that $u_0 = u(x,t)$. Now (10.16) becomes
\[
\mathcal{L}_0 u_1 = -F(z,y,f)\cdot\nabla_x u(x,t).
\]
Since we have assumed that $U_{\mathrm{eff}} = 0$, the right hand side of the above equation is orthogonal to the null space of $\mathcal{L}_0^*$ and this equation is well posed. Its solution is
\[
u_1(x,z,f,y,t) = \chi(z,y,f)\cdot\nabla_x u(x,t),
\]
where the auxiliary field $\chi(z,y,f)$ satisfies the Poisson equation
\[
-\mathcal{L}_0 \chi(z,y,f) = F(z,y,f)
\]
with periodic boundary conditions in z and appropriate boundary conditions in y and f.
We proceed now with the analysis of equation (10.17). The solvability condition for this equation reads
\[
\int_{Q\times E_y\times E_f} \big( \mathcal{L}_1 u_1 + \mathcal{L}_2 u_0 \big)\,\rho(z,y,f)\, dz\, dy\, df = 0,
\]
from which, after some straightforward algebra, we obtain the limiting backward Kolmogorov equation for $u(x,t)$
\[
\frac{\partial u(x,t)}{\partial t} = \sum_{i,j=1}^d D^{\mathrm{eff}}_{ij}\,\frac{\partial^2 u(x,t)}{\partial x_i \partial x_j}.
\]
The effective diffusion tensor is
\[
D^{\mathrm{eff}}_{ij} = D\,\delta_{ij} + \big\langle F_i(z,y,f)\,\chi_j(z,y,f) \big\rangle_\rho
+ 2D\Big\langle \frac{\partial \chi_i(z,y,f)}{\partial z_j} \Big\rangle_\rho, \tag{10.18}
\]
where the notation $\langle \cdot \rangle_\rho$ for averaging with respect to the invariant density $\rho$ has been introduced.
The case where the effective drift does not vanish, $U_{\mathrm{eff}} \neq 0$, can be reduced to the situation analyzed in this subsection through a Galilean transformation with respect to $U_{\mathrm{eff}}$; in other words, the process $x^\epsilon(t) := \epsilon\big( x(t/\epsilon^2) - \epsilon^{-2} U_{\mathrm{eff}}\, t \big)$ converges to a mean zero Gaussian process with effective diffusivity given by (10.19). The effective diffusion tensor is now given by
\[
D^{\mathrm{eff}}_{ij} = D\,\delta_{ij} + \big\langle \big( F_i(z,y,f) - U^{\mathrm{eff}}_i \big)\,\chi_j(z,y,f) \big\rangle_\rho
+ 2D\Big\langle \frac{\partial \chi_i(z,y,f)}{\partial z_j} \Big\rangle_\rho, \tag{10.19}
\]
and the field $\chi(z,f,y)$ satisfies the Poisson equation
\[
-\mathcal{L}_0 \chi = F(z,y,f) - U_{\mathrm{eff}}. \tag{10.20}
\]
10.7 Effective Diffusion Coefficient for Correlation Ratchets
In this section we consider the following model [4, 17]
\[
\gamma\dot{x}(t) = -\nabla V(x(t)) + y(t) + \sqrt{2\gamma k_B T}\,\xi(t), \tag{10.21a}
\]
\[
\dot{y}(t) = -\frac{1}{\tau}\, y(t) + \sqrt{\frac{2\lambda}{\tau}}\,\eta(t), \tag{10.21b}
\]
where $\xi(t)$ and $\eta(t)$ are mutually independent standard $d$-dimensional white noise processes. The potential $V(x)$ is assumed to be $L$-periodic in all spatial directions. The process $y(t)$ is the $d$-dimensional Ornstein–Uhlenbeck (OU) process [35], a mean zero Gaussian process with correlation function
\[
\langle y_i(t) y_j(s) \rangle = \lambda\,\delta_{ij}\, e^{-\frac{|t-s|}{\tau}}, \qquad i,j = 1, \dots, d.
\]
Let $z(t)$ denote the restriction of $x(t)$ to $Q = [0, 2\pi]^d$. The generator of the Markov process $\{z(t), y(t)\}$ is
\[
\mathcal{L} = \frac{1}{\gamma}\big( -\nabla_z V(z) + y \big)\cdot\nabla_z + D\Delta_z + \frac{1}{\tau}\big( -y\cdot\nabla_y + \lambda\Delta_y \big),
\]
with $D := \frac{k_B T}{\gamma}$. Standard results from the ergodic theory of Markov processes, see e.g. [5, ch. 3], ensure that the process $\{z(t), y(t)\} \in Q\times\mathbb{R}^d$ with generator $\mathcal{L}$ is ergodic and that the unique invariant measure has a smooth density $\rho(y,z)$ with respect to the Lebesgue measure. This is true
even at zero temperature [?, 65]. Hence, the results of section 10.6 apply: the effective drift and effective diffusion tensor are given by formulae (10.13) and (10.18), respectively. Of course, in order to calculate these quantities we need to solve equations (10.12) and (10.20), which take the form
\[
\nabla_z\cdot\Big( \frac{1}{\gamma}\big( \nabla_z V(z) - y \big)\rho(y,z) \Big) + D\Delta_z \rho(y,z)
+ \frac{1}{\tau}\Big( \nabla_y\cdot\big( y\,\rho(y,z) \big) + \lambda\Delta_y \rho(y,z) \Big) = 0
\]
and
\[
\frac{1}{\gamma}\big( \nabla_z V(z) - y \big)\cdot\nabla_z \chi(y,z) - D\Delta_z \chi(y,z)
+ \frac{1}{\tau}\Big( y\cdot\nabla_y \chi(y,z) - \lambda\Delta_y \chi(y,z) \Big)
= \frac{1}{\gamma}\big( -\nabla_z V(z) + y \big) - U_{\mathrm{eff}}.
\]
The effective diffusion tensor is positive definite. To prove this, let $e$ be a unit vector in $\mathbb{R}^d$, define $f = F\cdot e$, $u = U_{\mathrm{eff}}\cdot e$ and let $\phi := \chi\cdot e$ denote the unique solution of the scalar problem
\[
-\mathcal{L}\phi = (F - U_{\mathrm{eff}})\cdot e =: f - u, \qquad \phi(y, z + L) = \phi(y,z), \qquad \langle \phi \rangle_\rho = 0.
\]
Let now $h(y,z)$ be a sufficiently smooth function. Elementary computations yield
\[
\mathcal{L}^*(h\rho) = -\rho\,\mathcal{L}h + 2D\,\nabla_z\cdot(\rho\nabla_z h) + \frac{2\lambda}{\tau}\,\nabla_y\cdot(\rho\nabla_y h).
\]
We use the above calculation in the formula for the effective diffusion tensor, together with an integration by parts and the fact that $\langle \phi(y,z)\rangle_\rho = 0$, to obtain
\[
e\cdot D^{\mathrm{eff}} e = D + \langle f\phi\rangle_\rho + 2D\langle e\cdot\nabla_z\phi\rangle_\rho
= D + u\langle\phi\rangle_\rho - \langle \phi\,\mathcal{L}\phi\rangle_\rho + 2D\langle e\cdot\nabla_z\phi\rangle_\rho
\]
\[
= D + D\langle |\nabla_z\phi|^2\rangle_\rho + 2D\langle e\cdot\nabla_z\phi\rangle_\rho + \frac{\lambda}{\tau}\langle |\nabla_y\phi|^2\rangle_\rho
= D\big\langle |e + \nabla_z\phi|^2 \big\rangle_\rho + \frac{\lambda}{\tau}\big\langle |\nabla_y\phi|^2 \big\rangle_\rho.
\]
From the above formula we see that the effective diffusion tensor is nonnegative definite and that it is well defined even at zero temperature:
\[
e\cdot D^{\mathrm{eff}}(T=0)\, e = \frac{\lambda}{\tau}\big\langle |\nabla_y\phi(T=0)|^2 \big\rangle_\rho.
\]
Although we cannot solve these equations in closed form, it is possible to calculate the small-$\tau$ expansion of the effective drift and the effective diffusion coefficient, at least in one dimension. Indeed, a tedious calculation using singular perturbation theory, e.g. [42, ?], yields
\[
U_{\mathrm{eff}} = \mathcal{O}(\tau^3), \tag{10.22}
\]
and
\[
D^{\mathrm{eff}} = \frac{L^2}{Z\widehat{Z}}\left( D + \lambda\tau\left( 1 + \frac{1}{D}\Big( \frac{Z_2}{\widehat{Z}} - \frac{Z_1}{Z} \Big) \right) \right) + \mathcal{O}(\tau^2). \tag{10.23}
\]
In writing eqn. (10.23) we have used the following notation
\[
Z = \int_0^L e^{-V(z)/D}\, dz, \qquad \widehat{Z} = \int_0^L e^{V(z)/D}\, dz,
\]
\[
Z_1 = \int_0^L V(z)\, e^{-V(z)/D}\, dz, \qquad Z_2 = \int_0^L V(z)\, e^{V(z)/D}\, dz.
\]
It is relatively straightforward to obtain the next order correction to (10.23); the resulting formula
is, however, too complicated to be of much use.
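The leading-order expansion is straightforward to evaluate numerically. The following short Python sketch (illustrative only; the function name and parameter values are ours, and the formula implemented is (10.23) as reconstructed above) computes the four quadratures $Z$, $\widehat{Z}$, $Z_1$, $Z_2$ and assembles the $\mathcal{O}(\tau)$ approximation for the cosine potential used in the figures below.

import numpy as np
from scipy.integrate import quad

def deff_small_tau(V, L, D, lam, tau):
    """Evaluate the small-tau expansion (10.23) by numerical quadrature."""
    Z,  _ = quad(lambda z: np.exp(-V(z) / D), 0.0, L)
    Zh, _ = quad(lambda z: np.exp(+V(z) / D), 0.0, L)
    Z1, _ = quad(lambda z: V(z) * np.exp(-V(z) / D), 0.0, L)
    Z2, _ = quad(lambda z: V(z) * np.exp(+V(z) / D), 0.0, L)
    correction = lam * tau * (1.0 + (Z2 / Zh - Z1 / Z) / D)
    return L**2 / (Z * Zh) * (D + correction)

# Example: cosine potential, parameters as in Figure 10.1 below.
print(deff_small_tau(np.cos, 2 * np.pi, D=1.0, lam=1.0, tau=0.1))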
The small-$\tau$ asymptotics for the effective drift were also studied in [4, 17] for the model considered in this section and in [16, 92] when the external fluctuations are given by a continuous time Markov chain. It was shown in [16, 92] that, for the case of dichotomous noise, the small-$\tau$ expansion for $U_{\mathrm{eff}}$ is valid only for sufficiently smooth potentials. Indeed, the first nonzero term, of order $\mathcal{O}(\tau^3)$, involves the second derivative of the potential. Nonsmooth potentials lead to an effective drift which is $\mathcal{O}(\tau^{5/2})$. On the contrary, eqn. (10.23) does not involve any derivatives of the potential and, hence, is well defined even for nonsmooth potentials. On the other hand, the $\mathcal{O}(\tau^2)$ term involves third order derivatives of the potential and can be defined only when $V(x) \in C^3(0,L)$.
We also remark that the expansion (10.23) is only valid for positive temperatures. The problem becomes substantially more complicated at zero temperature because the generator of the Markov process becomes a degenerate differential operator at $T = 0$.
Naturally, in the limit as $\tau \to 0$ the effective diffusion coefficient converges to its value for $y \equiv 0$:
\[
D^{\mathrm{eff}} = \frac{L^2 D}{Z \widehat{Z}}. \tag{10.24}
\]
This is the effective diffusion coefficient for a Brownian particle moving in a periodic potential, in the absence of external fluctuations [54, 94]. It is well known, and easy to prove, that the effective diffusion coefficient given by (10.24) is bounded from above by $D$. This is not the case for the effective diffusivity of the correlation ratchet (10.21).
Figure 10.1: Effective diffusivity for (10.21) with $V(x) = \cos(x)$ as a function of $\tau$, for $\gamma = 1$, $D = k_B T/\gamma = 1$, $\lambda = 1$. Solid line: Results from Monte Carlo simulations. Dashed line: Results from formula (10.23).
We compare now the small-$\tau$ asymptotics for the effective diffusion coefficient with Monte Carlo simulations. The results presented in figures 10.1 and 10.2 were obtained from the numerical solution of equations (10.21) using the Euler–Maruyama method, for the cosine potential $V(x) = \cos(x)$. The integration step that was used was $\Delta t = 10^{-4}$ and the total number of integration steps was $10^7$. The effective diffusion coefficient was calculated by ensemble averaging over $2000$ particle trajectories which were initially uniformly distributed on $[0, 2\pi]$.
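A minimal version of such a Monte Carlo experiment might look as follows (an illustrative Python sketch, not the code used for the figures; it takes $\gamma = 1$, uses the parameters of Figure 10.1 as reconstructed above, and runs fewer steps than the text for speed). The effective diffusivity is estimated from the ensemble mean-square displacement.

import numpy as np

rng = np.random.default_rng(0)

D, lam, tau = 1.0, 1.0, 0.1          # D = k_B T / gamma, OU strength, correlation time
dt, nsteps, npart = 1e-4, 10**5, 2000

x = rng.uniform(0.0, 2 * np.pi, npart)        # initial positions uniform on [0, 2*pi]
y = rng.normal(0.0, np.sqrt(lam), npart)      # OU process started in its stationary state
x0 = x.copy()

for _ in range(nsteps):
    # Euler-Maruyama step for (10.21) with V(x) = cos(x), gamma = 1
    x += (np.sin(x) + y) * dt + np.sqrt(2 * D * dt) * rng.normal(size=npart)
    y += -(y / tau) * dt + np.sqrt(2 * lam / tau * dt) * rng.normal(size=npart)

T = nsteps * dt
print("estimated effective diffusivity:", np.mean((x - x0)**2) / (2 * T))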
In figure 10.1 we present the effective diffusion coefficient as a function of the correlation time $\tau$ of the OU process. We also plot the results of the small-$\tau$ asymptotics. The agreement between theoretical predictions and numerical results is quite satisfactory for $\tau \ll 1$. We also observe that the effective diffusivity is an increasing function of $\tau$.
In figure 10.2 we plot the effective diffusivity as a function of the noise strength $\lambda$ of the OU process. As expected, the effective diffusivity is an increasing function of $\lambda$. The agreement between the theoretical predictions from (10.23) and the numerical experiments is excellent.
Figure 10.2: Effective diffusivity for (10.21) with $V(x) = \cos(x)$ as a function of $\lambda$, for $\tau = 0.1$, $D = k_B T/\gamma = 1$, $\gamma = 1$. Solid line: Results from Monte Carlo simulations. Dashed line: Results from formula (10.23).
10.8 Discussion and Bibliography
10.9 Exercises
1. In this appendix we derive formulae for the mean drift and the effective diffusion coefficient for a Brownian particle which moves according to
\[
\gamma\dot{x}(t) = -\nabla V(x(t), t) + y(t) + \sqrt{2\gamma k_B T(x(t), t)}\,\xi(t), \tag{10.25}
\]
for a space-time periodic potential $V(x,t)$ and temperature $T(x,t) > 0$, and a periodic-in-time force $y(t)$. We take the spatial period to be $L$ in all directions and the temporal period of $V(x,t)$, $T(x,t)$ and $y(t)$ to be $\mathcal{T}$. We use the notation $Q = [0,L]^d$. Equation (10.25) is interpreted in the Itô sense.
Chapter 11
Stochastic Processes and Statistical
Mechanics
11.1 Introduction
We will consider some simple particle + environment systems for which we can obtain rigorously
a stochastic equation that describes the dynamics of the Brownian particle.
We can describe the dynamics of the Brownian particle/fluid system through a Hamiltonian of the form
\[
H(Q_N, P_N; q, p) = H_{BP}(Q_N, P_N) + H_{HB}(q, p) + H_I(Q_N, q), \tag{11.1}
\]
where $\{q, p\} := \big\{ \{q_j\}_{j=1}^N, \{p_j\}_{j=1}^N \big\}$ are the positions and momenta of the fluid particles and $N$ is the number of fluid (heat bath) particles (we will need to take the thermodynamic limit $N \to +\infty$). The initial conditions of the Brownian particle are taken to be fixed, whereas the fluid is assumed to be initially in equilibrium (Gibbs distribution). Goal: eliminate the fluid variables $\{q, p\} := \big\{ \{q_j\}_{j=1}^N, \{p_j\}_{j=1}^N \big\}$ to obtain a closed equation for the Brownian particle. We will see that this equation is a stochastic integrodifferential equation, the Generalized Langevin Equation (GLE) (in the limit as $N \to +\infty$)
\[
\ddot{Q} = -V'(Q) - \int_0^t R(t-s)\,\dot{Q}(s)\, ds + F(t), \tag{11.2}
\]
where $R(t)$ is the memory kernel and $F(t)$ is the noise. We will also see that, in some appropriate limit, we can derive the Markovian Langevin equation (9.11).
11.2 The Kac-Zwanzig Model
Need to model the interaction between the heat bath particles and the coupling between the Brownian particle and the heat bath. The simplest model is that of a harmonic heat bath and of linear coupling:
\[
H(Q_N, P_N, q, p) = \frac{P_N^2}{2} + V(Q_N) + \sum_{n=1}^N \left( \frac{p_n^2}{2m_n} + \frac{1}{2}\, k_n \big( q_n - \lambda Q_N \big)^2 \right). \tag{11.3}
\]
The initial conditions of the Brownian particle $\{Q_N(0), P_N(0)\} := \{Q_0, P_0\}$ are taken to be deterministic.
The initial conditions of the heat bath particles are distributed according to the Gibbs distribution, conditional on the knowledge of $\{Q_0, P_0\}$:
\[
\mu_\beta(dq\, dp) = Z^{-1} e^{-\beta H(q,p)}\, dq\, dp, \tag{11.4}
\]
where $\beta$ is the inverse temperature. This is a way of introducing the concept of the temperature in the system (through the average kinetic energy of the bath particles). In order to choose the initial conditions according to $\mu_\beta(dq\,dp)$ we can take
\[
q_n(0) = \lambda Q_0 + \sqrt{\beta^{-1} k_n^{-1}}\,\xi_n, \qquad p_n(0) = \sqrt{\beta^{-1} m_n}\,\eta_n, \tag{11.5}
\]
where the $\{\xi_n, \eta_n\}$ are mutually independent sequences of i.i.d. $\mathcal{N}(0,1)$ random variables. Notice that we actually consider the Gibbs measure of an effective (renormalized) Hamiltonian. Other choices for the initial conditions are possible. For example, we can take $q_n(0) = \sqrt{\beta^{-1} k_n^{-1}}\,\xi_n$. Our choice of initial conditions ensures that the forcing term in the GLE that we will derive is mean zero (see below).
Hamilton's equations of motion are:
\[
\ddot{Q}_N + V'(Q_N) = \sum_{n=1}^N k_n \big( \lambda q_n - \lambda^2 Q_N \big), \tag{11.6a}
\]
\[
\ddot{q}_n + \omega_n^2 \big( q_n - \lambda Q_N \big) = 0, \qquad n = 1, \dots, N, \tag{11.6b}
\]
where $\omega_n^2 = k_n/m_n$. The equations for the heat bath particles are second order linear inhomogeneous equations with constant coefficients. Our plan is to solve them and then to substitute the result into the equations of motion for the Brownian particle. We can solve the equations of motion
for the heat bath variables using the variation of constants formula
\[
q_n(t) = q_n(0)\cos(\omega_n t) + \frac{p_n(0)}{m_n\omega_n}\sin(\omega_n t)
+ \lambda\omega_n\int_0^t \sin\big(\omega_n(t-s)\big)\, Q_N(s)\, ds.
\]
An integration by parts yields
\[
q_n(t) = q_n(0)\cos(\omega_n t) + \frac{p_n(0)}{m_n\omega_n}\sin(\omega_n t) + \lambda Q_N(t)
- \lambda Q_N(0)\cos(\omega_n t) - \lambda\int_0^t \cos\big(\omega_n(t-s)\big)\,\dot{Q}_N(s)\, ds.
\]
We substitute this in equation (11.6) and use the initial conditions (11.5) to obtain the Generalized Langevin Equation
\[
\ddot{Q}_N = -V'(Q_N) - \lambda^2\int_0^t R_N(t-s)\,\dot{Q}_N(s)\, ds + \lambda F_N(t), \tag{11.7}
\]
where the memory kernel is
\[
R_N(t) = \sum_{n=1}^N k_n \cos(\omega_n t) \tag{11.8}
\]
and the noise process is
\[
F_N(t) = \sum_{n=1}^N \left( k_n\big( q_n(0) - \lambda Q_0 \big)\cos(\omega_n t) + \frac{k_n p_n(0)}{m_n\omega_n}\sin(\omega_n t) \right)
= \sqrt{\beta^{-1}}\sum_{n=1}^N \sqrt{k_n}\,\big( \xi_n\cos(\omega_n t) + \eta_n\sin(\omega_n t) \big). \tag{11.9}
\]
Remarks 11.2.1.   i. The noise and the dissipation are related through the fluctuation-dissipation theorem:
\[
\langle F_N(t) F_N(s) \rangle = \beta^{-1}\sum_{n=1}^N k_n \big( \cos(\omega_n t)\cos(\omega_n s) + \sin(\omega_n t)\sin(\omega_n s) \big)
= \beta^{-1} R_N(t-s). \tag{11.10}
\]
  ii. The noise $F(t)$ is a mean zero Gaussian process.
  iii. The choice of the initial conditions (11.5) for $q, p$ is crucial for the form of the GLE and, in particular, for the fluctuation-dissipation theorem (11.10) to be valid.
  iv. The parameter $\lambda$ measures the strength of the coupling between the Brownian particle and the heat bath.
  v. By choosing the frequencies $\omega_n$ and spring constants $k_n$ of the heat bath particles appropriately we can pass to the limit as $N \to +\infty$ and obtain the GLE with different memory kernels $R(t)$ and noise processes $F(t)$.
Let $a \in (0,1)$, $2b = 1 - a$ and set $\omega_n = N^a \zeta_n$, where the $\{\zeta_n\}_{n=1}^\infty$ are i.i.d. with $\zeta_1 \sim \mathcal{U}(0,1)$. Furthermore, we choose the spring constants according to
\[
k_n = \frac{f^2(\omega_n)}{N^{2b}},
\]
where the function $f(\omega)$ decays sufficiently fast at infinity. We can rewrite the dissipation and noise terms in the form
\[
R_N(t) = \sum_{n=1}^N f^2(\omega_n)\cos(\omega_n t)\,\Delta\omega
\]
and
\[
F_N(t) = \sqrt{\beta^{-1}}\sum_{n=1}^N f(\omega_n)\big( \xi_n\cos(\omega_n t) + \eta_n\sin(\omega_n t) \big)\sqrt{\Delta\omega},
\]
where $\Delta\omega = N^a/N$.
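This random-frequency construction is easy to realize numerically. The sketch below (illustrative; the choices $N$, $a$, $\alpha$, $\beta$ are ours, and $f^2$ is the Lorentzian (11.13) introduced further down, for which the limiting kernel is $R(t) = e^{-\alpha|t|}$) samples $\omega_n$, $k_n$ and the Gaussian coefficients from (11.5) and evaluates $R_N(t)$ and $F_N(t)$ on a time grid.

import numpy as np

rng = np.random.default_rng(2)

N, a, beta, alpha = 10_000, 0.5, 1.0, 1.0
f2 = lambda w: (2 * alpha / np.pi) / (w**2 + alpha**2)   # Lorentzian (11.13)

omega = N**a * rng.uniform(0.0, 1.0, N)    # omega_n = N^a * zeta_n
k = f2(omega) / N**(1 - a)                 # k_n = f^2(omega_n) / N^(2b),  2b = 1 - a
xi, eta = rng.normal(size=N), rng.normal(size=N)

t = np.linspace(0.0, 5.0, 200)
cos_t, sin_t = np.cos(np.outer(omega, t)), np.sin(np.outer(omega, t))
R_N = (k[:, None] * cos_t).sum(axis=0)                                   # eqn (11.8)
F_N = np.sqrt(1 / beta) * (np.sqrt(k)[:, None] * (xi[:, None] * cos_t
                                                  + eta[:, None] * sin_t)).sum(axis=0)  # eqn (11.9)

# For large N, R_N(t) approximates the limiting kernel R(t) = exp(-alpha*t);
# the agreement improves (slowly) as N grows.
print(np.round(R_N[::50], 3))
print(np.round(np.exp(-alpha * t[::50]), 3))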
Using now properties of Fourier series with random coefficients/frequencies and of weak convergence of probability measures we can pass to the limit:
\[
R_N(t) \to R(t) \quad \text{in } L^1[0,T],
\]
for almost all sequences $\{\omega_n\}_{n=1}^\infty$, and
\[
F_N(t) \to F(t) \quad \text{weakly in } C([0,T],\mathbb{R}).
\]
The time $T > 0$ is finite but arbitrary. The limiting kernel and noise satisfy the fluctuation-dissipation theorem (11.10):
\[
\langle F(t)F(s) \rangle = \beta^{-1} R(t-s). \tag{11.11}
\]
$Q_N(t)$, the solution of (11.7), converges weakly to the solution of the limiting GLE
\[
\ddot{Q} = -V'(Q) - \lambda^2\int_0^t R(t-s)\,\dot{Q}(s)\, ds + \lambda F(t). \tag{11.12}
\]
The properties of the limiting dissipation and noise are determined by the function $f(\omega)$. As an example, consider the Lorentzian function
\[
f^2(\omega) = \frac{2\alpha/\pi}{\omega^2 + \alpha^2} \tag{11.13}
\]
with $\alpha > 0$. Then
\[
R(t) = e^{-\alpha|t|}.
\]
The noise process $F(t)$ is a mean zero stationary Gaussian process with continuous paths and, from (11.11), exponential correlation function:
\[
\langle F(t)F(s) \rangle = \beta^{-1} e^{-\alpha|t-s|}.
\]
Hence, $F(t)$ is the stationary Ornstein-Uhlenbeck process:
\[
\frac{dF}{dt} = -\alpha F + \sqrt{2\alpha\beta^{-1}}\,\frac{dW}{dt}, \tag{11.14}
\]
with $F(0) \sim \mathcal{N}(0, \beta^{-1})$. The GLE (11.12) becomes
\[
\ddot{Q} = -V'(Q) - \lambda^2\int_0^t e^{-\alpha|t-s|}\,\dot{Q}(s)\, ds + \lambda F(t), \tag{11.15}
\]
where $F(t)$ is the OU process (11.14). $Q(t)$, the solution of the GLE (11.12), is not a Markov process, i.e. the future is not statistically independent of the past when conditioned on the present. The stochastic process $Q(t)$ has memory. We can turn (11.12) into a Markovian SDE by enlarging the dimension of the state space, i.e. by introducing auxiliary variables. We might have to introduce infinitely many variables! For the case of the exponential memory kernel, when the noise is given by an OU process, it is sufficient to introduce one auxiliary variable. We can rewrite (11.15) as a system of SDEs:
\[
\frac{dQ}{dt} = P, \qquad \frac{dP}{dt} = -V'(Q) + \lambda Z, \qquad \frac{dZ}{dt} = -\alpha Z - \lambda P + \sqrt{2\alpha\beta^{-1}}\,\frac{dW}{dt}, \tag{11.17}
\]
where $Z(0) \sim \mathcal{N}(0, \beta^{-1})$.
The process $\{Q(t), P(t), Z(t)\} \in \mathbb{R}^3$ is Markovian.
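Because the extended system (11.17) is a standard SDE, it can be simulated directly instead of discretizing the memory integral in (11.15). A minimal Euler-Maruyama sketch (illustrative; the parameter values and the potential are ours, not from the text) is:

import numpy as np

rng = np.random.default_rng(3)

lam, alpha, beta = 1.0, 2.0, 1.0
dt, nsteps = 1e-3, 100_000
Vprime = lambda q: np.sin(q)                  # e.g. V(q) = 1 - cos(q)

Q, P = 0.0, 0.0
Z = rng.normal(0.0, np.sqrt(1.0 / beta))      # Z(0) ~ N(0, beta^{-1})

for _ in range(nsteps):
    # Euler-Maruyama step for the Markovian system (11.17)
    Q += P * dt
    P += (-Vprime(Q) + lam * Z) * dt
    Z += (-alpha * Z - lam * P) * dt + np.sqrt(2 * alpha / beta * dt) * rng.normal()

print(Q, P, Z)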
It is a degenerate Markov process: noise acts directly only on one of the 3 degrees of freedom.
We can eliminate the auxiliary process Z by taking an appropriate distinguished limit.
Set $\lambda = \epsilon^{-1}$, $\alpha = \epsilon^{-2}$. Equations (11.17) become
\[
\frac{dQ}{dt} = P, \qquad
\frac{dP}{dt} = -V'(Q) + \frac{1}{\epsilon} Z, \qquad
\frac{dZ}{dt} = -\frac{1}{\epsilon^2} Z - \frac{1}{\epsilon} P + \sqrt{\frac{2\beta^{-1}}{\epsilon^2}}\,\frac{dW}{dt}.
\]
We can use tools from singular perturbation theory for Markov processes to show that, in the limit as $\epsilon \to 0$, we have that
\[
\frac{1}{\epsilon} Z \to \sqrt{2\beta^{-1}}\,\frac{dW}{dt} - P.
\]
Thus, in this limit we obtain the Markovian Langevin equation ($R(t) = \delta(t)$)
\[
\ddot{Q} = -V'(Q) - \dot{Q} + \sqrt{2\beta^{-1}}\,\frac{dW}{dt}. \tag{11.18}
\]
11.3 Quasi-Markovian Stochastic Processes
In the previous section we studied the GLE for the case where the memory kernel decays exponentially fast. We showed that we can represent the GLE as a Markovian process by adding one additional variable, the solution of a linear SDE. A natural question which arises is whether it is always possible to turn the GLE into a Markovian system by adding a finite number of additional variables. This is not always the case. However, there are many applications where the memory kernel decays sufficiently fast so that we can approximate the GLE by a finite dimensional Markovian system.
We introduce the concept of a quasi-Markovian stochastic process.
Definition 11.3.1. We will say that a stochastic process $X_t$ is quasi-Markovian if it can be represented as a Markovian stochastic process by adding a finite number of additional variables: there exists a stochastic process $Y_t$ so that $\{X_t, Y_t\}$ is a Markov process.
In many cases the additional variables $Y_t$ can be expressed in terms of solutions to linear SDEs. This is possible, for example, when the memory kernel consists of a sum of exponential functions, a natural extension of the case considered in the previous section.
Proposition 11.3.2. Consider the generalized Langevin equation
\[
\dot{Q} = P, \qquad \dot{P} = -V'(Q) - \int_0^t R(t-s)\, P(s)\, ds + F(t) \tag{11.19}
\]
with a memory kernel of the form
\[
R(t) = \sum_{j=1}^n \lambda_j^2\, e^{-\alpha_j|t|} \tag{11.20}
\]
and $F(t)$ being a mean zero stationary Gaussian process and where $R(t)$ and $F(t)$ are related through the fluctuation-dissipation theorem,
\[
\langle F(t) F(s) \rangle = \beta^{-1} R(t-s). \tag{11.21}
\]
Then (11.19) is equivalent to the Markovian SDE
\[
\dot{Q} = P, \qquad \dot{P} = -V'(Q) + \sum_{j=1}^n \lambda_j u_j, \qquad
\dot{u}_j = -\alpha_j u_j - \lambda_j P + \sqrt{2\alpha_j\beta^{-1}}\,\dot{W}_j, \quad j = 1, \dots, n, \tag{11.22}
\]
with $u_j(0) \sim \mathcal{N}(0, \beta^{-1})$ and where the $W_j(t)$ are independent standard one dimensional Brownian motions.
Proof. We solve the equations for $u_j$:
\[
u_j(t) = -\lambda_j\int_0^t e^{-\alpha_j(t-s)} P(s)\, ds + e^{-\alpha_j t} u_j(0) + \sqrt{2\alpha_j\beta^{-1}}\int_0^t e^{-\alpha_j(t-s)}\, dW_j
=: -\int_0^t R_j(t-s) P(s)\, ds + \eta_j(t).
\]
We substitute this into the equation for $P$ to obtain
\[
\dot{P} = -V'(Q) + \sum_{j=1}^n \lambda_j u_j
= -V'(Q) + \sum_{j=1}^n \lambda_j\Big( -\int_0^t R_j(t-s) P(s)\, ds + \eta_j(t) \Big)
= -V'(Q) - \int_0^t R(t-s) P(s)\, ds + F(t),
\]
where $R(t)$ is given by (11.20) and the noise process $F(t)$ is
\[
F(t) = \sum_{j=1}^n \lambda_j \eta_j(t),
\]
with the $\eta_j(t)$ being one-dimensional stationary independent OU processes. We readily check that the fluctuation-dissipation theorem is satisfied:
\[
\langle F(t)F(s) \rangle = \sum_{i,j=1}^n \lambda_i\lambda_j \langle \eta_i(t)\eta_j(s)\rangle
= \sum_{i,j=1}^n \lambda_i\lambda_j\,\delta_{ij}\,\beta^{-1} e^{-\alpha_i|t-s|}
= \beta^{-1}\sum_{i=1}^n \lambda_i^2\, e^{-\alpha_i|t-s|} = \beta^{-1} R(t-s).
\]
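The Markovian representation (11.22) is also convenient numerically: each auxiliary OU variable is advanced by an ordinary Euler-Maruyama step. A small illustrative sketch (the potential, the two kernel modes and all parameter values are ours, not from the text) for a kernel with two exponentials is:

import numpy as np

rng = np.random.default_rng(1)

# Two-mode kernel R(t) = sum_j lambda_j^2 exp(-alpha_j |t|), inverse temperature beta.
alphas = np.array([1.0, 5.0])
lams = np.array([1.0, 0.5])
beta, dt, nsteps = 1.0, 1e-3, 200_000

Vprime = lambda q: q**3 - q                 # V(q) = q^4/4 - q^2/2 (illustrative)

Q, P = 0.0, 0.0
u = rng.normal(0.0, np.sqrt(1.0 / beta), size=2)    # u_j(0) ~ N(0, beta^{-1})

for _ in range(nsteps):
    # Euler-Maruyama step for the quasi-Markovian system (11.22)
    Q += P * dt
    P += (-Vprime(Q) + np.dot(lams, u)) * dt
    u += ((-alphas * u - lams * P) * dt
          + np.sqrt(2 * alphas / beta * dt) * rng.normal(size=2))

print(Q, P)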
These additional variables are solutions of a linear system of SDEs. This follows from results in approximation theory. Consider now the case where the memory kernel is a bounded analytic function. Its Laplace transform
\[
\widehat{R}(s) = \int_0^{+\infty} e^{-st} R(t)\, dt
\]
can be represented as a continued fraction:
\[
\widehat{R}(s) = \cfrac{\Delta_1^2}{s + \gamma_1 + \cfrac{\Delta_2^2}{\ddots}}\;, \qquad \gamma_i \geq 0. \tag{11.23}
\]
Since $R(t)$ is bounded, we have that
\[
\lim_{s\to\infty} \widehat{R}(s) = 0.
\]
Consider an approximation $R_N(t)$ such that the continued fraction representation terminates after $N$ steps. $R_N(t)$ is bounded, which implies that
\[
\lim_{s\to\infty} \widehat{R}_N(s) = 0.
\]
The Laplace transform of $R_N(t)$ is a rational function:
\[
\widehat{R}_N(s) = \frac{\sum_{j=1}^N a_j s^{N-j}}{s^N + \sum_{j=1}^N b_j s^{N-j}}, \qquad a_j, b_j \in \mathbb{R}. \tag{11.24}
\]
This is the Laplace transform of the autocorrelation function of an appropriate linear system of SDEs. Indeed, set
\[
\frac{dx_j}{dt} = -b_j x_j + x_{j+1} + a_j \frac{dW_j}{dt}, \qquad j = 1, \dots, N, \tag{11.25}
\]
with $x_{N+1}(t) = 0$. The process $x_1(t)$ is a stationary Gaussian process with autocorrelation function $R_N(t)$. For $N = 1$ and $b_1 = \alpha$, $a_1 = \sqrt{2\beta^{-1}\alpha}$ we derive the GLE (11.15) with $F(t)$ being the OU process (11.14). Consider now the case $N = 2$ with $b_i = \alpha_i$, $i = 1, 2$, and $a_1 = 0$, $a_2 = \sqrt{2\beta^{-1}\alpha_2}$.
The GLE becomes
\[
\ddot{Q} = -V'(Q) - \lambda^2\int_0^t R(t-s)\,\dot{Q}(s)\, ds + \lambda F_1(t), \qquad
\dot{F}_1 = -\alpha_1 F_1 + F_2, \qquad
\dot{F}_2 = -\alpha_2 F_2 + \sqrt{2\beta^{-1}\alpha_2}\,\dot{W}_2, \tag{11.27}
\]
with
\[
\beta^{-1} R(t-s) = \langle F_1(t) F_1(s) \rangle.
\]
We can write (11.27) as a Markovian system for the variables $\{Q, P, Z_1, Z_2\}$:
\[
\dot{Q} = P, \qquad
\dot{P} = -V'(Q) + \lambda Z_1, \qquad
\dot{Z}_1 = -\alpha_1 Z_1 + Z_2, \qquad
\dot{Z}_2 = -\alpha_2 Z_2 - \lambda P + \sqrt{2\beta^{-1}\alpha_2}\,\dot{W}_2.
\]
Notice that this diffusion process is more degenerate than (11.15): noise acts on fewer degrees of freedom. It is still, however, hypoelliptic (Hörmander's condition is satisfied): there is sufficient interaction between the degrees of freedom $\{Q, P, Z_1, Z_2\}$ so that noise (and hence regularity) is transferred from the degrees of freedom that are directly forced by noise to the ones that are not. The corresponding Markov semigroup has nice regularizing properties, and there exists a smooth density. Stochastic processes that can be written as a Markovian process by adding a finite number of additional variables are called quasi-Markovian. Under appropriate assumptions on the potential $V(Q)$ the solution of the GLE is an ergodic process. It is possible to study the ergodic properties of a quasi-Markovian process by analyzing the spectral properties of the generator of the corresponding Markov process. This leads to the analysis of the spectral properties of hypoelliptic operators.
11.3.1 Open Classical Systems
When studying the Kac-Zwanzig model we considered a one dimensional Hamiltonian system coupled to a finite dimensional Hamiltonian system with random initial conditions (the harmonic heat bath) and then passed to the thermodynamic limit $N \to \infty$. We can also consider a small Hamiltonian system coupled to its environment, which we model as an infinite dimensional Hamiltonian system with random initial conditions. We then have a coupled particle-field model. The distinguished particle (Brownian particle) is described through the Hamiltonian
\[
H_{DP} = \frac{1}{2} p^2 + V(q). \tag{11.28}
\]
We will model the environment through a classical linear field theory (i.e. the wave equation) with infinite energy:
\[
\partial_t^2 \phi(t,x) = \partial_x^2 \phi(t,x). \tag{11.29}
\]
The Hamiltonian of this system is
\[
H_{HB}(\phi, \pi) = \frac{1}{2}\int \big( |\partial_x \phi|^2 + |\pi(x)|^2 \big)\, dx, \tag{11.30}
\]
where $\pi(x)$ denotes the conjugate momentum field. The initial conditions are distributed according to the Gibbs measure (which in this case is a Gaussian measure) at inverse temperature $\beta$, which we formally write as
\[
\mu_\beta = Z^{-1} e^{-\beta H(\phi,\pi)}\, d\phi\, d\pi. \tag{11.31}
\]
Care has to be taken when defining probability measures in infinite dimensions.
Under this assumption on the initial conditions, typical configurations of the heat bath have infinite energy. In this way, the environment can pump enough energy into the system so that non-trivial fluctuations emerge. We will assume linear coupling between the particle and the field:
\[
H_I(q, \phi) = q \int \partial_x \phi(x)\,\rho(x)\, dx, \tag{11.32}
\]
where the function $\rho(x)$ models the coupling between the particle and the field. This coupling is influenced by the dipole coupling approximation from classical electrodynamics. The Hamiltonian of the particle-field model is
\[
H(q, p, \phi, \pi) = H_{DP}(p, q) + H_{HB}(\phi, \pi) + H_I(q, \phi). \tag{11.33}
\]
The corresponding Hamiltonian equations of motion are a coupled system of equations for the particle and the field. Now we can proceed as in the case of the finite dimensional heat bath: we integrate the equations of motion for the heat bath variables and plug the solution into the equations for the Brownian particle to obtain the GLE. The final result is
\[
\ddot{q} = -V'(q) - \int_0^t R(t-s)\,\dot{q}(s)\, ds + F(t), \tag{11.34}
\]
with appropriate definitions for the memory kernel and the noise, which are related through the fluctuation-dissipation theorem.
11.4 The Mori-Zwanzig Formalism
Consider now the $N+1$-dimensional Hamiltonian (particle + heat bath) with random initial conditions. The $N+1$ probability distribution function $f_{N+1}$ satisfies the Liouville equation
\[
\frac{\partial f_{N+1}}{\partial t} + \{ f_{N+1}, H \} = 0, \tag{11.35}
\]
where $H$ is the full Hamiltonian and $\{\cdot,\cdot\}$ is the Poisson bracket
\[
\{A, B\} = \sum_{j=0}^N \left( \frac{\partial A}{\partial q_j}\frac{\partial B}{\partial p_j} - \frac{\partial B}{\partial q_j}\frac{\partial A}{\partial p_j} \right).
\]
We introduce the Liouville operator
\[
\mathcal{L}_{N+1}\,\cdot = -i\{\cdot, H\}.
\]
The Liouville equation can be written as
\[
i\frac{\partial f_{N+1}}{\partial t} = \mathcal{L}_{N+1} f_{N+1}. \tag{11.36}
\]
We want to obtain a closed equation for the distribution function of the Brownian particle. We introduce a projection operator $P$ which projects onto the distribution function $f$ of the Brownian particle:
\[
P f_{N+1} = f, \qquad (I - P) f_{N+1} = h.
\]
The Liouville equation becomes
\[
i\frac{\partial f}{\partial t} = P\mathcal{L}(f + h), \tag{11.37a}
\]
\[
i\frac{\partial h}{\partial t} = (I - P)\mathcal{L}(f + h). \tag{11.37b}
\]
We integrate the second equation and substitute into the first equation. We obtain
\[
i\frac{\partial f}{\partial t} = P\mathcal{L} f - i\int_0^t P\mathcal{L}\, e^{-i(I-P)\mathcal{L}s}(I-P)\mathcal{L}\, f(t-s)\, ds + P\mathcal{L}\, e^{-i(I-P)\mathcal{L}t} h(0). \tag{11.38}
\]
In the Markovian limit (large mass ratio) we obtain the Fokker-Planck equation (??).
11.5 Derivation of the Fokker-Planck and Langevin Equations
11.6 Linear Response Theory
11.7 Discussion and Bibliography
The original papers by Kac et al and by Zwanzig are [26, 95]. See also [25]. The variant of
the Kac-Zwanzig model that we have discussed in this chapter was studied in [37]. An excellent
discussion on the derivation of the Fokker-Planck equation using projection operator techniques
can be found in [66].
Applications of linear response theory to climate modeling can be found in.
11.8 Exercises
Index
autocorrelation function, 32
Banach space, 16
Brownian motion
scaling and symmetry properties, 42
central limit theorem, 24
conditional expectation, 18
correlation coefficient, 17
covariance function, 32
Diffusion process
mean first passage time, 188
Diffusion processes
reversible, 106
Dirichlet form, 109
equation
Fokker-Planck, 88
kinetic, 116
Klein-Kramers-Chandrasekhar, 137
Langevin, 137
Fokker-Planck, 88
Fokker-Planck equation, 126
Fokker-Planck equation
classical solution of, 89
Gaussian stochastic process, 30
generator, 68, 125
Gibbs distribution, 107
Gibbs measure, 109
Green-Kubo formula, 39
inverse temperature, 100
Itô formula, 125
Joint probability density, 96
Karhunen-Loève Expansion, 45
Karhunen-Loève Expansion
for Brownian Motion, 49
kinetic equation, 116
Kolmogorov equation, 126
Langevin equation, 137
law, 13
law of large numbers
strong, 24
Markov Chain Monte Carlo, 111
MCMC, 111
Mean first passage time, 188
Multiplicative noise, 133
operator
hypoelliptic, 137
Ornstein-Uhlenbeck process
Fokker-Planck equation for, 95
partition function, 107
Poincaré's inequality
for Gaussian measures, 101
Poincaré's inequality, 109
Quasi-Markovian stochastic process, 221
random variable
Gaussian, 17
uncorrelated, 17
Reversible diffusion, 106
spectral density, 35
stationary process, 31
stationary process
second order stationary, 32
strictly stationary, 31
wide sense stationary, 32
stochastic differential equation, 43
Stochastic Process
quasi-Markovian, 221
stochastic process
definition, 29
Gaussian, 30
second-order stationary, 32
stationary, 31
equivalent, 30
stochastic processes
strictly stationary, 31
transport coefficient, 39
Wiener process, 40
Bibliography
[1] L. Arnold. Stochastic differential equations: theory and applications. Wiley-Interscience [John Wiley & Sons], New York, 1974. Translated from the German.
[2] R. Balescu. Statistical dynamics. Matter out of equilibrium. Imperial College Press, London, 1997.
[3] A. Barone and G. Paterno. Physics and Applications of the Josephson Effect. Wiley, New York, 1982.
[4] R. Bartussek, P. Reimann, and P. Hänggi. Precise numerics versus theory for correlation ratchets. Phys. Rev. Lett., 76(7):1166–1169, 1996.
[5] A. Bensoussan, J.-L. Lions, and G. Papanicolaou. Asymptotic analysis for periodic structures, volume 5 of Studies in Mathematics and its Applications. North-Holland Publishing Co., Amsterdam, 1978.
[6] N. Berglund and B. Gentz. Noise-induced phenomena in slow-fast dynamical systems. Probability and its Applications (New York). Springer-Verlag London Ltd., London, 2006. A sample-paths approach.
[7] M. Bier and R.D. Astumian. Biasing Brownian motion in different directions in a 3-state fluctuating potential and application for the separation of small particles. Phys. Rev. Lett., 76(22):4277, 1996.
[8] L. Breiman. Probability, volume 7 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1992. Corrected reprint of the 1968 original.
[9] C. Bustamante, D. Keller, and G. Oster. The physics of molecular motors. Acc. Chem. Res., 34:412–420, 2001.
[10] S. Cerrai and M. Freidlin. On the Smoluchowski-Kramers approximation for a system with an infinite number of degrees of freedom. Probab. Theory Related Fields, 135(3):363–394, 2006.
[11] S. Cerrai and M. Freidlin. Smoluchowski-Kramers approximation for a general class of SPDEs. J. Evol. Equ., 6(4):657–689, 2006.
[12] S. Chandrasekhar. Stochastic problems in physics and astronomy. Rev. Mod. Phys., 15(1):1–89, Jan 1943.
[13] A.J. Chorin and O.H. Hald. Stochastic tools in mathematics and science, volume 1 of Surveys and Tutorials in the Applied Mathematical Sciences. Springer, New York, 2006.
[14] I. Derenyi and R.D. Astumian. ac separation of particles by biased Brownian motion in a two-dimensional sieve. Phys. Rev. E, 58(6):7781–7784, 1998.
[15] W. Dietrich, I. Peschel, and W.R. Schneider. Diffusion in periodic potentials. Z. Phys., 27:177–187, 1977.
[16] C.R. Doering, L. A. Dontcheva, and M.M. Klosek. Constructive role of noise: fast fluctuation asymptotics of transport in stochastic ratchets. Chaos, 8(3):643–649, 1998.
[17] C.R. Doering, W. Horsthemke, and J. Riordan. Nonequilibrium fluctuation-induced transport. Phys. Rev. Lett., 72(19):2984–2987, 1994.
[18] N. Wax (editor). Selected Papers on Noise and Stochastic Processes. Dover, New York, 1954.
[19] R. Eichhorn and P. Reimann. Paradoxical directed diffusion due to temperature anisotropies. Europhys. Lett., 69(4):517–523, 2005.
[20] A. Einstein. Investigations on the theory of the Brownian movement. Dover Publications Inc., New York, 1956. Edited with notes by R. Fürth, Translated by A. D. Cowper.
[21] S.N. Ethier and T.G. Kurtz. Markov processes. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. John Wiley & Sons Inc., New York, 1986.
[22] L.C. Evans. Partial Differential Equations. AMS, Providence, Rhode Island, 1998.
[23] W. Feller. An introduction to probability theory and its applications. Vol. I. Third edition. John Wiley & Sons Inc., New York, 1968.
[24] W. Feller. An introduction to probability theory and its applications. Vol. II. Second edition. John Wiley & Sons Inc., New York, 1971.
[25] G. W. Ford and M. Kac. On the quantum Langevin equation. J. Statist. Phys., 46(5-6):803–810, 1987.
[26] G. W. Ford, M. Kac, and P. Mazur. Statistical mechanics of assemblies of coupled oscillators. J. Mathematical Phys., 6:504–515, 1965.
[27] M. Freidlin and M. Weber. A remark on random perturbations of the nonlinear pendulum. Ann. Appl. Probab., 9(3):611–628, 1999.
[28] M. I. Freidlin and A. D. Wentzell. Random perturbations of Hamiltonian systems. Mem. Amer. Math. Soc., 109(523):viii+82, 1994.
[29] M.I. Freidlin and A.D. Wentzell. Random Perturbations of Dynamical Systems. Springer-Verlag, New York, 1984.
[30] A. Friedman. Partial differential equations of parabolic type. Prentice-Hall Inc., Englewood Cliffs, N.J., 1964.
[31] A. Friedman. Stochastic differential equations and applications. Vol. 1. Academic Press [Harcourt Brace Jovanovich Publishers], New York, 1975. Probability and Mathematical Statistics, Vol. 28.
[32] A. Friedman. Stochastic differential equations and applications. Vol. 2. Academic Press [Harcourt Brace Jovanovich Publishers], New York, 1976. Probability and Mathematical Statistics, Vol. 28.
[33] P. Fulde, L. Pietronero, W. R. Schneider, and S. Strässler. Problem of Brownian motion in a periodic potential. Phys. Rev. Lett., 35(26):1776–1779, 1975.
[34] H. Gang, A. Daffertshofer, and H. Haken. Diffusion in periodically forced Brownian particles moving in space-periodic potentials. Phys. Rev. Lett., 76(26):4874–4877, 1996.
[35] C. W. Gardiner. Handbook of stochastic methods. Springer-Verlag, Berlin, second edition, 1985. For physics, chemistry and the natural sciences.
[36] I. I. Gikhman and A. V. Skorokhod. Introduction to the theory of random processes. Dover Publications Inc., Mineola, NY, 1996.
[37] D. Givon, R. Kupferman, and A.M. Stuart. Extracting macroscopic dynamics: model problems and algorithms. Nonlinearity, 17(6):R55–R127, 2004.
[38] M. Hairer and G. A. Pavliotis. From ballistic to diffusive behavior in periodic potentials. J. Stat. Phys., 131(1):175–202, 2008.
[39] M. Hairer and G.A. Pavliotis. Periodic homogenization for hypoelliptic diffusions. J. Statist. Phys., 117(1-2):261–279, 2004.
[40] P. Hänggi. Escape from a metastable state. J. Stat. Phys., 42(1/2):105–140, 1986.
[41] P. Hänggi, P. Talkner, and M. Borkovec. Reaction-rate theory: fifty years after Kramers. Rev. Modern Phys., 62(2):251–341, 1990.
[42] W. Horsthemke and R. Lefever. Noise-induced transitions, volume 15 of Springer Series in Synergetics. Springer-Verlag, Berlin, 1984. Theory and applications in physics, chemistry, and biology.
[43] J. Jacod and A.N. Shiryaev. Limit theorems for stochastic processes, volume 288 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 2003.
[44] F. John. Partial differential equations, volume 1 of Applied Mathematical Sciences. Springer-Verlag, New York, fourth edition, 1991.
[45] S. Karlin and H. M. Taylor. A second course in stochastic processes. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1981.
[46] S. Karlin and H.M. Taylor. A first course in stochastic processes. Academic Press [A subsidiary of Harcourt Brace Jovanovich, Publishers], New York-London, 1975.
[47] C. Kipnis and S. R. S. Varadhan. Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys., 104(1):1–19, 1986.
[48] L. B. Koralov and Y. G. Sinai. Theory of probability and random processes. Universitext. Springer, Berlin, second edition, 2007.
[49] H. A. Kramers. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica, 7:284–304, 1940.
[50] N. V. Krylov. Introduction to the theory of diffusion processes, volume 142 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1995.
[51] R. Kupferman, G. A. Pavliotis, and A. M. Stuart. Itô versus Stratonovich white-noise limits for systems with inertia and colored multiplicative noise. Phys. Rev. E (3), 70(3):036120, 9, 2004.
[52] A.M. Lacasta, J.M. Sancho, A.H. Romero, I.M. Sokolov, and K. Lindenberg. From subdiffusion to superdiffusion of particles on solid surfaces. Phys. Rev. E, 70:051104, 2004.
[53] P. D. Lax. Linear algebra and its applications. Pure and Applied Mathematics (Hoboken). Wiley-Interscience [John Wiley & Sons], Hoboken, NJ, second edition, 2007.
[54] S. Lifson and J.L. Jackson. On the self-diffusion of ions in polyelectrolytic solution. J. Chem. Phys., 36:2410, 1962.
[55] B. Lindner, M. Kostur, and L. Schimansky-Geier. Optimal diffusive transport in a tilted periodic potential. Fluctuation and Noise Letters, 1(1):R25–R39, 2001.
[56] M. Loève. Probability theory. I. Springer-Verlag, New York, fourth edition, 1977. Graduate Texts in Mathematics, Vol. 45.
[57] M. Loève. Probability theory. II. Springer-Verlag, New York, fourth edition, 1978. Graduate Texts in Mathematics, Vol. 46.
[58] M. C. Mackey. Time's arrow. Dover Publications Inc., Mineola, NY, 2003. The origins of thermodynamic behavior, Reprint of the 1992 original [Springer, New York; MR1140408].
[59] M.C. Mackey, A. Longtin, and A. Lasota. Noise-induced global asymptotic stability. J. Statist. Phys., 60(5-6):735–751, 1990.
[60] M. O. Magnasco. Forced thermal ratchets. Phys. Rev. Lett., 71(10):1477–1481, 1993.
[61] P. Mandl. Analytical treatment of one-dimensional Markov processes. Die Grundlehren der mathematischen Wissenschaften, Band 151. Academia Publishing House of the Czechoslovak Academy of Sciences, Prague, 1968.
[62] P. A. Markowich and C. Villani. On the trend to equilibrium for the Fokker-Planck equation: an interplay between physics and functional analysis. Mat. Contemp., 19:1–29, 2000.
[63] B. J. Matkowsky, Z. Schuss, and E. Ben-Jacob. A singular perturbation approach to Kramers' diffusion problem. SIAM J. Appl. Math., 42(4):835–849, 1982.
[64] B. J. Matkowsky, Z. Schuss, and C. Tier. Uniform expansion of the transition rate in Kramers' problem. J. Statist. Phys., 35(3-4):443–456, 1984.
[65] J.C. Mattingly and A. M. Stuart. Geometric ergodicity of some hypo-elliptic diffusions for particle motions. Markov Processes and Related Fields, 8(2):199–214, 2002.
[66] R.M. Mazo. Brownian motion, volume 112 of International Series of Monographs on Physics. Oxford University Press, New York, 2002.
[67] J. Meyer and J. Schröter. Comments on the Grad procedure for the Fokker-Planck equation. J. Statist. Phys., 32(1):53–69, 1983.
[68] E. Nelson. Dynamical theories of Brownian motion. Princeton University Press, Princeton, N.J., 1967.
[69] B. Øksendal. Stochastic differential equations. Universitext. Springer-Verlag, Berlin, 2003.
[70] G.C. Papanicolaou and S. R. S. Varadhan. Ornstein-Uhlenbeck process in a random potential. Comm. Pure Appl. Math., 38(6):819–834, 1985.
[71] G. A. Pavliotis and A. M. Stuart. Analysis of white noise limits for stochastic systems with two fast relaxation times. Multiscale Model. Simul., 4(1):1–35 (electronic), 2005.
[72] G. A. Pavliotis and A. M. Stuart. Parameter estimation for multiscale diffusions. J. Stat. Phys., 127(4):741–781, 2007.
[73] G. A. Pavliotis and A. Vogiannou. Diffusive transport in periodic potentials: Underdamped dynamics. Fluct. Noise Lett., 8(2):L155–L173, 2008.
[74] G.A. Pavliotis and A.M. Stuart. Multiscale methods, volume 53 of Texts in Applied Mathematics. Springer, New York, 2008. Averaging and homogenization.
[75] G. Da Prato and J. Zabczyk. Stochastic Equations in Infinite Dimensions, volume 44 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1992.
[76] H. Qian, Min Qian, and X. Tang. Thermodynamics of the general diffusion process: time reversibility and entropy production. J. Stat. Phys., 107(5/6):1129–1141, 2002.
[77] R. L. Stratonovich. Topics in the theory of random noise. Vol. II. Revised English edition. Translated from the Russian by Richard A. Silverman. Gordon and Breach Science Publishers, New York, 1967.
[78] P. Reimann. Brownian motors: noisy transport far from equilibrium. Phys. Rep., 361(2-4):57–265, 2002.
[79] P. Reimann, C. Van den Broeck, H. Linke, P. Hänggi, J.M. Rubi, and A. Perez-Madrid. Diffusion in tilted periodic potentials: enhancement, universality and scaling. Phys. Rev. E, 65(3):031104, 2002.
[80] P. Reimann, C. Van den Broeck, H. Linke, J.M. Rubi, and A. Perez-Madrid. Giant acceleration of free diffusion by use of tilted periodic potentials. Phys. Rev. Lett., 87(1):010602, 2001.
[81] Frigyes Riesz and Béla Sz.-Nagy. Functional analysis. Dover Publications Inc., New York, 1990. Translated from the second French edition by Leo F. Boron, Reprint of the 1955 original.
[82] H. Risken. The Fokker-Planck equation, volume 18 of Springer Series in Synergetics. Springer-Verlag, Berlin, 1989.
[83] H. Rodenhausen. Einstein's relation between diffusion constant and mobility for a diffusion model. J. Statist. Phys., 55(5-6):1065–1088, 1989.
[84] J.M. Sancho, A.M. Lacasta, K. Lindenberg, I.M. Sokolov, and A.H. Romero. Diffusion on a solid surface: anomalous is normal. Phys. Rev. Lett., 92(25):250601, 2004.
[85] M. Schreier, P. Reimann, P. Hänggi, and E. Pollak. Giant enhancement of diffusion and particle selection in rocked periodic potentials. Europhys. Lett., 44(4):416–422, 1998.
[86] Z. Schuss. Singular perturbation methods in stochastic differential equations of mathematical physics. SIAM Review, 22(2):119–155, 1980.
[87] Ch. Schütte and W. Huisinga. Biomolecular conformations can be identified as metastable sets of molecular dynamics. In Handbook of Numerical Analysis (Computational Chemistry), Vol X, 2003.
[88] C. Schwab and R.A. Todor. Karhunen-Loève approximation of random fields by generalized fast multipole methods. J. Comput. Phys., 217(1):100–122, 2006.
[89] R.B. Sowers. A boundary layer theory for diffusively perturbed transport around a heteroclinic cycle. Comm. Pure Appl. Math., 58(1):30–84, 2005.
[90] D.W. Stroock. Probability theory, an analytic view. Cambridge University Press, Cambridge, 1993.
[91] G. I. Taylor. Diffusion by continuous movements. London Math. Soc., 20:196, 1921.
[92] T.C. Elston and C.R. Doering. Numerical and analytical studies of nonequilibrium fluctuation-induced transport processes. J. Stat. Phys., 83:359–383, 1996.
[93] G. E. Uhlenbeck and L. S. Ornstein. On the theory of the Brownian motion. Phys. Rev., 36(5):823–841, Sep 1930.
[94] M. Vergassola and M. Avellaneda. Scalar transport in compressible flow. Phys. D, 106(1-2):148–166, 1997.
[95] R. Zwanzig. Nonlinear generalized Langevin equations. J. Stat. Phys., 9(3):215–220, 1973.
[96] R. Zwanzig. Nonequilibrium statistical mechanics. Oxford University Press, New York, 2001.