
Chapter 8

Stochastic Differential
Equations

In this chapter, we introduce another approach to diffusion processes, i.e., Markov
processes with continuous paths, based on a new theory called stochastic differential
equations. First, let us consider a diffusion process in terms of its forward equation:

    \frac{\partial f(x,t|y)}{\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}\bigl[a(x)f(x,t|y)\bigr] - \frac{\partial}{\partial x}\bigl[b(x)f(x,t|y)\bigr],    (8.1)
and consider a very small time step from t to t + \Delta t:

    f(x, t+\Delta t\,|\,y, t) = \frac{1}{\sqrt{2\pi a(y)\Delta t}}\exp\left[-\frac{(x - y - b(y)\Delta t)^2}{2a(y)\Delta t}\right],    (8.2)
which can be interpreted as a random walk plus a deterministic drift. Assuming \Delta t
is very small, we can write

    x(t+\Delta t) = x(t) + b(x)\Delta t + \sqrt{a(x)}\,(B_{t+\Delta t} - B_t),    (8.3)

where B_t is the standard Brownian motion. We denote \Delta B_t = B_{t+\Delta t} - B_t; then
E[\Delta B_t] = 0, and with a fixed \Delta t and as a function of \tau \ge 0,

    E[\Delta B_t\, \Delta B_{t+\tau}] = \Delta t - \tau  for  0 \le \tau \le \Delta t,  and  = 0  for  \tau \ge \Delta t.    (8.4)
Hence, extending (8.4) symmetrically to \tau < 0,

    \int_{-\infty}^{\infty} E[\Delta B_t\, \Delta B_{t+\tau}]\, d\tau = (\Delta t)^2.    (8.5)

8.1 White Noise


Combining what we have from the above, we introduce a derivative of the Brownian
motion:

    W(t) = \frac{dB_t}{dt} = \lim_{\Delta t \to 0}\frac{B_{t+\Delta t} - B_t}{\Delta t}.    (8.6)


Note that for each given t, W(t) is a random variable. Its expectation is E[W(t)] = 0,
and its variance,

    E\left[\frac{(B_{t+\Delta t} - B_t)^2}{(\Delta t)^2}\right] = \frac{\Delta t}{(\Delta t)^2} = \frac{1}{\Delta t},    (8.7)

diverges as \Delta t \to 0.
However,

    \int_{-\infty}^{\infty} d\tau\, E\left[\frac{B_{t+\Delta t} - B_t}{\Delta t}\cdot\frac{B_{t+\tau+\Delta t} - B_{t+\tau}}{\Delta t}\right]
        = \int_{-\infty}^{\infty} d\tau\, \frac{E[\Delta B_t\, \Delta B_{t+\tau}]}{(\Delta t)^2} = 1.    (8.8)

Therefore, W(t) has the following key properties:

    E[W(t)W(s)] = 0  if  t \ne s,
    E[W(t)W(t)] = \infty,
    \int E[W(t)W(s)]\, ds = 1.

Therefore we denote

    E[W(t)] = 0,    E[W(t)W(t+s)] = \delta(s).    (8.9)

Even though W(t) does not exist according to rigorous mathematics, it is called white
noise by physicists, since a realization of W(t), say w(t), has a correlation function
satisfying

    \lim_{T\to\infty}\frac{1}{T}\int_0^T w(\tau)\, w(\tau + t)\, d\tau = \delta(t).    (8.10)
Hence, its Fourier transform

    \hat{w}(\omega) = \int_{-\infty}^{\infty} w(t)\, e^{-i\omega t}\, dt    (8.11)

has a power spectrum

    \hat{w}(\omega)\,\hat{w}^*(\omega)    (8.12)

that is independent of \omega, since the power spectrum is the Fourier transform of the
correlation function \delta(t).
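Although W(t) is not a bona fide function, its discrete stand-in, the increments \Delta B/\Delta t
on a grid of spacing \Delta t, can be inspected numerically. The following minimal sketch
(Python with numpy; all parameter values are arbitrary choices) estimates the periodogram
of such samples and shows that it is flat on average, which is the practical content of
Eq. (8.12):

    import numpy as np

    rng = np.random.default_rng(0)
    dt, n = 1e-3, 4096
    w = rng.normal(0.0, 1.0 / np.sqrt(dt), n)     # samples of dB/dt, i.i.d. N(0, 1/dt)
    power = np.abs(np.fft.rfft(w))**2 * dt / n    # periodogram estimate of the spectrum
    for chunk in np.array_split(power[1:], 4):    # skip the DC bin, average in 4 bands
        print(chunk.mean())                       # all four numbers are close to 1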

8.1.1 Linear stochastic differential equation


We now consider a new type of differential equation, which first appeared in the work of Paul
Langevin (1872-1946) and is now widely called a stochastic differential equation:

    \frac{dX}{dt} = -\lambda X + W(t).    (8.13)

Since the white noise W(t) is a stochastic process, the solution X(t) is a stochastic
process. In fact, since W(t) is white, X(t) is Markovian.

Note that in rigorous mathematical terms, the stochastic process W(t) does not exist,
since Brownian motion B_t is not differentiable. Hence, a more rigorous mathematical
way to write the above equation is its differential form

    dX(t) = -\lambda X\, dt + dB_t.    (8.14)

We can treat W(t) = dB_t/dt as a regular function of time. Then for each realization
of B_t, we have the solution to a linear ordinary differential equation with an inhomogeneous
term. By the method of variation of parameters, we have

    X(t) = X_0\, e^{-\lambda t} + \int_0^t e^{-\lambda(t-t')}\, dB_{t'},    (8.15)

which is a linear functional of W(t). Heuristically, the integral is a sum of an infinite
number of independent random variables W(t'), t' \in [0, t]. Hence, X(t) is expected
to have a Gaussian distribution due to the central limit theorem. The mean and the
variance of the Gaussian distribution can be determined:

    E[X(t)] = X_0\, e^{-\lambda t} + \int_0^t e^{-\lambda(t-t')}\, E[dB_{t'}] = X_0\, e^{-\lambda t},

    Var[X(t)] = \int_0^t\!\!\int_0^t e^{-\lambda(t-t')}\, e^{-\lambda(t-t'')}\, E[W(t')W(t'')]\, dt'\, dt''
              = \int_0^t\!\!\int_0^t e^{-\lambda(t-t')}\, e^{-\lambda(t-t'')}\, \delta(t'-t'')\, dt'\, dt'' = \frac{1 - e^{-2\lambda t}}{2\lambda}.
X(t) is called an Ornstein-Uhlenbeck (OU) process.
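The two moments just computed are easy to confirm numerically. Below is a minimal
sketch (Python with numpy; the values of \lambda, X_0, and the step size are arbitrary
choices) that integrates Eq. (8.14) by the Euler scheme over many realizations and
compares the sample mean and variance with X_0 e^{-\lambda t} and (1 - e^{-2\lambda t})/(2\lambda):

    import numpy as np

    rng = np.random.default_rng(1)
    lam, X0, T, dt, n_paths = 1.0, 2.0, 2.0, 1e-3, 20000
    n_steps = int(T / dt)

    X = np.full(n_paths, X0)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)   # dB ~ N(0, dt)
        X += -lam * X * dt + dB                      # Euler step for dX = -lam X dt + dB

    print("mean:", X.mean(), " theory:", X0 * np.exp(-lam * T))
    print("var :", X.var(),  " theory:", (1 - np.exp(-2 * lam * T)) / (2 * lam))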

8.1.2 White noise as a limit of the Ornstein-Uhlenbeck process


We have the solution to the linear stochastic differential equation (8.14). In the limit
of infinitely long time, the stationary Ornstein-Uhlenbeck (OU) process has a Gaussian
distribution with \mu = 0, \sigma^2 = 1/(2\lambda), and autocovariance function

    g(t) = \frac{1}{2\lambda}\, e^{-\lambda |t|}.    (8.16)

Since \lambda^2 g(t) = \frac{\lambda}{2} e^{-\lambda|t|} \to \delta(t) as \lambda \to \infty, in the limit of
vanishing correlation time 1/\lambda \to 0 the suitably rescaled OU process \lambda X(t)
approaches a white noise process.

8.2 Stochastic Differential Equations and Ito Calculus


The above section, while informative, is not mathematically very rigorous. In this
section, we shall introduce the idea of stochastic calculus as first introduced by K. Ito.
First, we need to introduce the idea of mean-square convergence and limits.

8.2.1 Stochastic integral


We consider integrals of the following type:

    \int_{t_0}^{t} G(\tau)\, dB(\tau),    (8.17)

where, for easier writing below, B(\tau) is the same as the B_\tau of the previous section,
the standard Brownian motion. Naively, the integral can be understood as the partial
sum

    S_n = \sum_{i=1}^{n} G(\tau_i)\, [B(t_i) - B(t_{i-1})],    (8.18)

in which

    t_0 \le t_1 = t_0 + \Delta t_0 \le t_2 = t_1 + \Delta t_1 \le \cdots \le t_{n-1} \le t_n = t_{n-1} + \Delta t_{n-1} = t,

and the intermediate points \tau_i \in [t_{i-1}, t_i].
Deterministic G(\tau). In this case, the sum in Eq. (8.18) is a sum of many independent
Gaussian random variables. It is therefore a Gaussian random variable. Its
expected value is

    E\left[\int_{t_0}^{t} G(\tau)\, dB(\tau)\right] = \int_{t_0}^{t} G(\tau)\, E[dB(\tau)] = 0,    (8.19)

and its variance is

    E\left[\left(\int_{t_0}^{t} G(\tau)\, dB(\tau)\right)^{2}\right] = \int_{t_0}^{t} G^2(\tau)\, E\left[(dB_\tau)^2\right] = \int_{t_0}^{t} G^2(\tau)\, d\tau.    (8.20)

Therefore, the integral in Eq. (8.17), with deterministic G(\tau), is a Gaussian stochastic
process with mean zero and the variance in Eq. (8.20). It is completely characterized;
it is a Markov process.
Stochastic G(\tau). In this case, we note a very significant difference between the above
partial sum and the standard Riemann sum of traditional calculus: the limit of the
partial sum S_n depends on the choice of the intermediate points! To see this, we let
G(t) = B(t) and \tau_i = a t_i + (1-a)t_{i-1}. We then have

    S_n = \sum_{i=1}^{n} B(\tau_i)\, [B(t_i) - B(t_{i-1})],    (8.21)

and, using E[B(s)B(t)] = \min(s, t),

    E[S_n] = \sum_{i=1}^{n} (\tau_i - t_{i-1}) = \sum_{i=1}^{n} a(t_i - t_{i-1}) = a(t - t_0).    (8.22)

So the expectation of the partial sum S_n can be any value between 0 and (t - t_0),
depending on the choice of a. This is in sharp contrast to Riemann integration.
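This dependence on the intermediate points is easy to see in a simulation. The sketch
below (Python with numpy; all parameter values are arbitrary choices) builds the partial
sum S_n of Eq. (8.21) with \tau_i = a t_i + (1-a) t_{i-1} for several values of a and
reproduces E[S_n] = a(t - t_0):

    import numpy as np

    rng = np.random.default_rng(2)
    t0, t, n, n_paths = 0.0, 1.0, 400, 5000
    dt = (t - t0) / n

    for a in (0.0, 0.5, 1.0):
        # split each increment at the intermediate point tau_i = t_{i-1} + a*dt
        d1 = rng.normal(0.0, np.sqrt(a * dt), (n_paths, n))        # B(tau_i) - B(t_{i-1})
        d2 = rng.normal(0.0, np.sqrt((1 - a) * dt), (n_paths, n))  # B(t_i) - B(tau_i)
        d = d1 + d2                                                # B(t_i) - B(t_{i-1})
        B_left = np.cumsum(d, axis=1) - d                          # B(t_{i-1}), with B(t0) = 0
        S = np.sum((B_left + d1) * d, axis=1)                      # sum of B(tau_i) dB_i
        print(f"a = {a}: E[S_n] = {S.mean():.3f}  (theory {a * (t - t0):.3f})")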
K. Ito's idea is to always choose the intermediate point \tau_i to be t_{i-1}. By this
convention, one defines the Ito stochastic integral of G(t) as the mean-square limit
of a partial sum:

    \int_{t_0}^{t} G(\tau)\, dB(\tau) = \operatorname{ms-lim}_{n\to\infty} \sum_{i=1}^{n} G(t_{i-1})\, [B(t_i) - B(t_{i-1})].    (8.23)

8.2.2 Mean-square limit


Let us now see, with Ito's convention, what the mean-square limit of the partial sum
S_n in Eq. (8.21) is:

    \sum_{i=1}^{n} B(t_{i-1})\, [B(t_i) - B(t_{i-1})]
        = \frac{1}{2}\left[B^2(t) - B^2(t_0)\right] - \frac{1}{2}\sum_{i=1}^{n} [B(t_i) - B(t_{i-1})]^2
        \xrightarrow{\ \mathrm{ms}\ } \frac{1}{2}\left[B^2(t) - B^2(t_0)\right] - \frac{t - t_0}{2}.

We now show that

    \operatorname{ms-lim}_{n\to\infty} \sum_{i=1}^{n} [B(t_i) - B(t_{i-1})]^2 = t - t_0.    (8.24)

First we see that the expected value of the left-hand side is indeed the right-hand side.
Furthermore,

    E\left[\left(\sum_{i=1}^{n} [B(t_i) - B(t_{i-1})]^2 - (t - t_0)\right)^{2}\right]
        = Var\left[\sum_{i=1}^{n} [B(t_i) - B(t_{i-1})]^2\right]
        = \sum_{i=1}^{n} Var\left[(B(t_i) - B(t_{i-1}))^2\right]
        = 2\sum_{i=1}^{n} (t_i - t_{i-1})^2 \to 0  as  n \to \infty.

In the derivation we used the fact that E[B_t^4] = 3\left(E[B_t^2]\right)^2 = 3t^2, so that
Var[(\Delta B)^2] = 3(\Delta t)^2 - (\Delta t)^2 = 2(\Delta t)^2.
In general, we have

    \operatorname{ms-lim}_{\Delta t \to 0} (dB_t)^2 = dt,    (8.25)

and furthermore,

    \operatorname{ms-lim}_{\Delta t \to 0} (dB_t)^{2+n} = o(dt),   n > 0.    (8.26)
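Equation (8.24) is the statement that Brownian motion has quadratic variation t - t_0,
and it too can be checked directly. In the sketch below (arbitrary parameter values),
the sample mean of \sum_i (\Delta B_i)^2 stays at t - t_0 while its variance decays like
2(t - t_0)^2/n, as computed above:

    import numpy as np

    rng = np.random.default_rng(3)
    t0, t, n_paths = 0.0, 1.0, 2000
    for n in (10, 100, 1000):
        dB = rng.normal(0.0, np.sqrt((t - t0) / n), (n_paths, n))
        Q = (dB**2).sum(axis=1)                     # quadratic variation of each path
        print(f"n = {n:4d}: mean = {Q.mean():.4f}, var = {Q.var():.2e}"
              f"  (theory: {t - t0}, {2 * (t - t0)**2 / n:.2e})")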

8.2.3 Nonanticipating functions


Since the above integral depends on the intermediate points, one can choose an
appropriate intermediate point such that the resulting integral takes its simplest form. A
function of time G(t), explicit or implicit, is said to be nonanticipating if for all s
and t with s < t, G(s) is statistically independent of B_t - B_s. This indicates that the
function G(t) is independent of the behavior of the white noise in the future of t.
Therefore, for a nonanticipating function with Ito's choice \tau_i = t_{i-1} (i.e., a = 0),
the expectation of the partial sum in Eq. (8.22) is zero.
More importantly, in the Ito integral of Eq. (8.23) the integrand is independent of dB_t.

8.2.4 Stochastic differential equation and Markov process


We now consider the stochastic differential equation

    dX(t) = b(X)\, dt + A(X)\, dB_t,    (8.27)

whose solution is formally defined in terms of the Ito integral

    X(t) = X(t_0) + \int_{t_0}^{t} b(X(s))\, ds + \int_{t_0}^{t} A(X(s))\, dB_s.    (8.28)

More specifically, we have

    \int_{t_0}^{t} b(X(s))\, ds = \lim_{\Delta t_i \to 0} \sum_{i=1}^{n} b(X(t_{i-1}))\, \Delta t_{i-1},    (8.29)

where \Delta t_i = t_{i+1} - t_i, and

    \int_{t_0}^{t} A(X(s))\, dB_s = \lim_{\Delta t_i \to 0} \sum_{i=1}^{n} A(X(t_{i-1}))\left[B_{t_i} - B_{t_{i-1}}\right].    (8.30)

The most important aspect of Ito integration is, as shown in Eqs. (8.29) and (8.30), that
both integrals are completely determined, statistically, by X(t_0). Therefore, X(t), as a
solution to the SDE (8.27), is Markovian.
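The partial sums in Eqs. (8.29) and (8.30) are precisely the Euler-Maruyama scheme, so
they double as a practical numerical integrator for Eq. (8.27). A minimal sketch (the
function name and all parameter values are our own choices for illustration):

    import numpy as np

    def euler_maruyama(b, A, x0, T, dt, rng):
        """Integrate dX = b(X) dt + A(X) dB_t, evaluating b and A at the left
        end point t_{i-1}, i.e. the nonanticipating (Ito) convention."""
        n = int(T / dt)
        x = np.empty(n + 1)
        x[0] = x0
        for i in range(n):
            dB = rng.normal(0.0, np.sqrt(dt))      # B(t_i) - B(t_{i-1})
            x[i + 1] = x[i] + b(x[i]) * dt + A(x[i]) * dB
        return x

    rng = np.random.default_rng(4)
    # example: the OU process of Section 8.1, b(x) = -x, A(x) = 1
    path = euler_maruyama(lambda x: -x, lambda x: 1.0, 1.0, 5.0, 1e-3, rng)
    print(path[::1000])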

8.3 Differentiation and Integration


In traditional calculus, integration and differentiation are inverse operations with respect
to each other. Differentiation, however, is easy, while integration is effectively a
process of searching for the correct function whose derivative matches the integrand. For
the stochastic Ito integral, the situation is the same. So we would like to develop the
differentiation that corresponds to the Ito integration introduced above.
There are two types of integrals involved in dealing with stochastic differential equations:

    \int_{t_0}^{t} G(X(s), s)\, ds,   and   \int_{t_0}^{t} G(X(s), s)\, dB(s).    (8.31)

The first kind is really a regular sum of random variables. The second kind is new. The
importance of Ito's convention, i.e., the function G being nonanticipating, is that
it guarantees that the function of the random variable X(s), G(X(s), s), is independent of
dB(s). Therefore,

    E\left[\int_{t_0}^{t} G(X(s), s)\, dB(s)\right] = 0,  always.    (8.32)

This means that, by the Ito convention, the second integral in Eq. (8.31) has a constant
zero mean for all t. This property is known as the martingale property.

8.3.1 Chain rules


Now let us consider a smooth function of a Markov process X_t, Y(t) = \phi(X_t, t),
which is a stochastic process. What will be the SDE which governs the behavior of
Y(t)? We have

    dY(t) = \phi_x(X(t))\, dX(t) + \phi_t(X(t))\, dt + \frac{1}{2}\phi_{xx}(X(t))\, [dX(t)]^2
            + \phi_{xt}(X(t))\, dX\, dt + \frac{1}{2}\phi_{tt}(X(t))\, (dt)^2 + \cdots    (8.33)

(abbreviating \phi_x = \partial\phi/\partial x, \phi_t = \partial\phi/\partial t, etc.). Note that [dX(t)]^2 does not have a
classical meaning; it has to be understood according to the Ito calculus, with (dB_t)^2 = dt
and higher powers o(dt). For example,

    d\, e^{aB(t)} = a\, e^{aB(t)}\, dB(t) + \frac{a^2}{2}\, e^{aB(t)}\, dt.    (8.34)
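Equation (8.34) can be verified pathwise: integrating dY = aY\,dB + \frac{a^2}{2}Y\,dt
with a given set of Brownian increments must reproduce e^{aB(t)} built from the same
increments. A minimal sketch (arbitrary parameter values); dropping the \frac{a^2}{2}
term makes the two numbers disagree, which is a quick way to see that the Ito correction
is real:

    import numpy as np

    rng = np.random.default_rng(5)
    a, T, dt = 0.5, 1.0, 1e-4
    n = int(T / dt)
    dB = rng.normal(0.0, np.sqrt(dt), n)
    B_T = dB.sum()                                 # B(T) from the same increments

    Y = 1.0
    for i in range(n):
        Y += a * Y * dB[i] + 0.5 * a**2 * Y * dt   # Euler step for Eq. (8.34)
    print("SDE:", Y, "  exp(a*B(T)):", np.exp(a * B_T))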

8.4 Partial Differential Equations Associated with SDE


8.4.1 Kolmogorov forward equation
One of the most exciting results of the SDE approach is the derivation of the Kolmogorov
forward equation. According to the chain rule, we have the infinitesimal change in a
function of the diffusion process X_t:

    d\phi(X_t) = \phi_x(X_t)\, dX_t + \frac{1}{2}\phi_{xx}(X_t)\, (dX_t)^2
               = \phi_x(X_t)\, b(X_t)\, dt + \phi_x(X_t)\, A(X_t)\, dB_t + \frac{1}{2}\phi_{xx}(X_t)\, A^2(X_t)\, dt

for an arbitrary function \phi(x). Hence, taking the expectation of both sides using the pdf
f(x, t) and dividing by dt, we have

    \frac{d}{dt}\int dx\, f(x,t)\,\phi(x) = \int dx\, f(x,t)\left[\phi_x(x)\, b(x) + \frac{1}{2}\phi_{xx}(x)\, A^2(x)\right].    (8.35)

And since \phi(x) is arbitrary, we have, via integration by parts:

    \frac{\partial}{\partial t} f(x,t) = \frac{1}{2}\frac{\partial^2}{\partial x^2}\left[A^2(x)\, f(x,t)\right] - \frac{\partial}{\partial x}\left[b(x)\, f(x,t)\right].    (8.36)

This is the Kolmogorov forward equation (8.1), with A^2(x) = a(x).
Note that in the derivation of Eq. (8.35), we have used the fact that

    E\left[\phi_x(X_t)\, A(X_t)\, dB_t\right] = 0.    (8.37)

This is based on the Ito convention: \phi_x(X_t) A(X_t) is nonanticipating, since X_t
is independent of dB_t.
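A simple consistency check of Eq. (8.36): for b(x) = -x and A(x) = 1, setting
\partial f/\partial t = 0 and integrating once gives the stationary density
f(x) = e^{-x^2}/\sqrt{\pi}, which a long simulation of the SDE must reproduce. A minimal
sketch (arbitrary parameter values):

    import numpy as np

    rng = np.random.default_rng(6)
    dt, n_steps, n_paths = 1e-3, 10000, 2000
    x = np.zeros(n_paths)
    samples = []
    for i in range(n_steps):
        x += -x * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
        if i >= n_steps // 2 and i % 100 == 0:     # discard the transient, thin the rest
            samples.append(x.copy())
    samples = np.concatenate(samples)

    hist, edges = np.histogram(samples, bins=np.linspace(-3, 3, 25), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # columns: bin center, empirical density, exp(-x^2)/sqrt(pi)
    print(np.c_[centers, hist, np.exp(-centers**2) / np.sqrt(np.pi)][:6])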

8.4.2 Kolmogorov backward equation


Being a Markov process, the transition probability f(x, t|y) of X_t satisfies the
Kolmogorov-Chapman equation

    f(x, t+\tau|z) = \int dy\, f(x, t|y)\, f(y, \tau|z).    (8.38)

Let us use L_x to denote the second-order differential operator on the right-hand side
of the Kolmogorov forward equation (8.36), \partial f(x,t|z)/\partial t = L_x[f]. Then, carrying
out the derivative with respect to \tau in Eq. (8.38) and integrating by parts, noting
that f(\pm\infty, t|y) = 0 since it is a probability density function, we have

    \frac{\partial}{\partial \tau} f(x, t+\tau|z) = \int dy\, f(x, t|y)\, L_y f(y, \tau|z)
                                                  = \int dy\, \left(L_y^* f(x, t|y)\right) f(y, \tau|z),

in which L_x^* is called the adjoint operator of L_x:

    L_x^*[u] \equiv \frac{A^2(x)}{2}\frac{\partial^2}{\partial x^2} u(x) + b(x)\frac{\partial}{\partial x} u(x).    (8.39)

We see that on the left-hand side

    \frac{\partial}{\partial \tau} f(x, t+\tau|z) = \frac{\partial}{\partial t} f(x, t+\tau|z),

and on the right-hand side, in the limit of \tau \to 0, f(y, \tau|z) = \delta(y - z). Hence we have

    \frac{\partial}{\partial t} f(x, t|z) = L_z^*\, f(x, t|z).    (8.40)

This is the Kolmogorov backward equation.

8.4.3 Feynman-Kac and Cameron-Martin-Girsanov formulae


We are now interested in several other quantities associated with a diffusion process,
and the partial differential equations they satisfy. First, if a diffusion process X(t),
defined by Eq. (8.27), can be terminated at each position x with a first-order rate
q(x) \ge 0, then we call it a diffusion with killing, denoted by \tilde{X}(t). It is clear
that the total probability of the process now decreases with time, and it can be written as

    f_{\tilde{X}}(x, t|y) = E^y\left[\delta(x - X(t))\, e^{-\int_0^t q(X(s))\, ds}\right],    (8.41)

where the superscript y in E^y[\cdot] indicates that the expectation is taken over all the
X(t) with initial condition X(0) = y.
It is also intuitively obvious that f_{\tilde{X}}(x, t|y) should satisfy the differential
equation

    \frac{\partial}{\partial t} f_{\tilde{X}}(x, t|y) = L_x[f_{\tilde{X}}] - q(x)\, f_{\tilde{X}}.    (8.42)

To prove this, we note that for any u(x):

    E^y\left[\delta'(x - X(t))\, u(X(t))\right] = \frac{\partial}{\partial x}\left[u(x)\, f(x, t|y)\right],
    E^y\left[\delta''(x - X(t))\, u(X(t))\right] = \frac{\partial^2}{\partial x^2}\left[u(x)\, f(x, t|y)\right],

where f(x, t|y) is the transition probability of X(t), the diffusion process in the absence
of the killing. Using these two identities, treating f_{\tilde{X}}(x, t|y) as a functional
of X(t), and applying the Ito calculus with the chain rule, we have

    \frac{\partial}{\partial t} f_{\tilde{X}}(x, t|y)
      = E^y\left[\left(-\delta(x - X(t))\, q(X(t)) - \delta'(x - X(t))\,\frac{dX(t)}{dt}
        + \frac{1}{2}\delta''(x - X(t))\,\frac{(dX(t))^2}{dt}\right) e^{-\int_0^t q(X(s))\, ds}\right]
      = E^y\left[\left(-\delta(x - X(t))\, q(X(t)) - \delta'(x - X(t))\, b(X(t))
        + \delta''(x - X(t))\,\frac{A^2(X(t))}{2}\right) e^{-\int_0^t q(X(s))\, ds}\right]
      = -q(x)\, f_{\tilde{X}}(x, t|y) - \frac{\partial}{\partial x}\left[b(x)\, f_{\tilde{X}}(x, t|y)\right]
        + \frac{\partial^2}{\partial x^2}\left[\frac{A^2(x)}{2}\, f_{\tilde{X}}(x, t|y)\right]
      = \left(L_x - q(x)\right) f_{\tilde{X}}.

We notice some interesting properties of f_{\tilde{X}}(x, t|y): it satisfies the
Kolmogorov-Chapman equation, and also f_{\tilde{X}}(x, t|y) \to \delta(x - y) as t \to 0.
After all, the diffusion process with killing is still a Markov process. Hence, using the
same method as above for deriving the backward equation, we obtain:

    \frac{\partial}{\partial t} f_{\tilde{X}}(x, t|y) = \left(L_y^* - q(y)\right) f_{\tilde{X}}(x, t|y).    (8.43)
Integrating Eq. (8.43) over x, one immediately finds that

    \varphi(y, t) = E^y\left[e^{-\int_0^t q(X(s))\, ds}\right]    (8.44)

satisfies the same Eq. (8.43). This relation between the diffusion with killing,
\tilde{X}(t), and Eq. (8.43) is called the Feynman-Kac formula.
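The Feynman-Kac formula suggests an immediate Monte Carlo scheme: simulate Eq. (8.27)
and average the killing weight e^{-\int_0^t q(X(s))\,ds}. The sketch below (the drift,
noise amplitude, and all parameters are our own example choices) uses a constant killing
rate q, for which the answer must be e^{-qt}, as a sanity check; any state-dependent
q(x) \ge 0 works with the same code:

    import numpy as np

    rng = np.random.default_rng(7)
    b = lambda x: -x                          # example drift
    A = lambda x: 1.0                         # example noise amplitude
    q = lambda x: 0.3 * np.ones_like(x)       # constant killing rate for the check
    y, T, dt, n_paths = 0.5, 1.0, 1e-3, 20000

    x = np.full(n_paths, y)
    integral = np.zeros(n_paths)              # accumulates int_0^t q(X(s)) ds
    for _ in range(int(T / dt)):
        integral += q(x) * dt
        x += b(x) * dt + A(x) * rng.normal(0.0, np.sqrt(dt), n_paths)
    print("MC:", np.exp(-integral).mean(), "  exact:", np.exp(-0.3 * T))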
We can obtain a similar result for

    \psi(x, t|y) = E^y\left[\delta(x - X(t))\, e^{-\int_0^t g(X(s))\, dB_s}\right],    (8.45)

which involves the Ito integral \int_0^t g(X(s))\, dB_s. Using the same method, one can
show the Cameron-Martin-Girsanov formula

    \frac{\partial \psi}{\partial t} = L_x\, \psi(x, t|y) + \frac{\partial}{\partial x}\left[A(x)\, g(x)\, \psi\right] + \frac{g^2(x)}{2}\, \psi.    (8.46)

8.5 Non-Ito Integration


We consider the Brownian motion

    dX(t) = dB_t,    (8.47)

and the corresponding diffusion equation

    \frac{\partial f_X(x, t)}{\partial t} = \frac{1}{2}\frac{\partial^2 f_X(x, t)}{\partial x^2}.    (8.48)

Introducing a nonlinear transformation through a monotonic function y = g(x), we have

    \frac{\partial}{\partial x} = g'(x)\frac{\partial}{\partial y},   \frac{\partial}{\partial y} = \left(g'(x)\right)^{-1}\frac{\partial}{\partial x}.    (8.49)
For the random variable Y = g(X), we have

    f_Y(y, t) = \left[\left(g'(x)\right)^{-1} f_X(x, t)\right]_{x = g^{-1}(y)};

then, transforming the equation in (8.48), we obtain

    \frac{\partial f_Y(y, t)}{\partial t} = \frac{1}{2}\frac{\partial}{\partial y}\left[g'\,\frac{\partial}{\partial y}\left(g'\, f_Y(y, t)\right)\right].    (8.50)
Ito, Stratonovich, and Wong-Zakai integrals. According to the Ito convention, we
have the corresponding stochastic differential equation (SDE) for Y(t):

    dY(t) = g'(X)\, dX + \frac{1}{2} g''(X)\, (dX)^2 = \frac{1}{2} g''\, dt + g'\, dB_t,    (8.51)

and the corresponding Ito forward equation

    \frac{\partial f_Y(y, t)}{\partial t} = \frac{1}{2}\frac{\partial^2}{\partial y^2}\left[\left(g'\right)^2 f_Y(y, t)\right] - \frac{1}{2}\frac{\partial}{\partial y}\left[g''\, f_Y(y, t)\right].    (8.52)
On the other hand, according to the Stratonovich convention, we have the corresponding
SDE for Y_s(t):

    dY_s(t) = g'(X)\, dX = g'\, dB_t,    (8.53)

and the corresponding Stratonovich forward equation is precisely Eq. (8.50). It is also
easy to check that Eq. (8.52) is identical to Eq. (8.50). In fact, the general stochastic
integral of Wong and Zakai has the form

    dY_{wz}(t) = \frac{1-\alpha}{2}\, g''\, dt + g'\, dB_t,    (8.54)

with \alpha = 0 and \alpha = 1 corresponding to Ito's and Stratonovich's conventions for the
stochastic integral, respectively. Then the corresponding Wong-Zakai PDE is

    \frac{\partial f_Y(y, t)}{\partial t} = \frac{1}{2}\frac{\partial}{\partial y}\left[\left(g'\right)^{\alpha}\frac{\partial}{\partial y}\left(\left(g'\right)^{2-\alpha} f_Y(y, t)\right)\right] - \frac{1-\alpha}{2}\frac{\partial}{\partial y}\left[g''\, f_Y(y, t)\right].    (8.55)

Again, Eq. (8.55) is the same as Eq. (8.50). This indicates that the nonlinear variable
transformation of the linear PDE is always consistent, as long as one uses the same
convention of (Ito or non-Ito) integration in the SDE as in the corresponding PDE.
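The difference between the conventions is visible in a single numerical experiment. For
g(x) = x^2, the partial sums of 2B\,dB with left-end-point evaluation (Ito) and with
mid-point evaluation (the usual realization of the Stratonovich integral) converge to
B^2(T) - T and B^2(T), respectively. A minimal sketch (arbitrary parameter values):

    import numpy as np

    rng = np.random.default_rng(8)
    T, n = 1.0, 100000
    dB = rng.normal(0.0, np.sqrt(T / n), n)
    B = np.concatenate(([0.0], np.cumsum(dB)))     # B on the grid, B(0) = 0

    ito = np.sum(2 * B[:-1] * dB)                  # integrand at the left end point
    strat = np.sum((B[:-1] + B[1:]) * dB)          # 2*B evaluated at the mid-point value
    print("Ito  :", ito,   "  B(T)^2 - T =", B[-1]**2 - T)
    print("Strat:", strat, "  B(T)^2     =", B[-1]**2)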

8.5.1 Ito and Stratonovich integrals for two-dimensional Brownian motion
We consider the two-dimensional Brownian motion:

    dx = \epsilon\, dB_x(t),   dy = \epsilon\, dB_y(t).    (8.56)
Its Kolmogorov forward equation in Cartesian coordinates is

    \frac{\partial f_{xy}(x, y, t)}{\partial t} = \frac{\epsilon^2}{2}\left(\frac{\partial^2 f_{xy}}{\partial x^2} + \frac{\partial^2 f_{xy}}{\partial y^2}\right).    (8.57)
A simple curvilinear coordinate transformation, from Cartesian to polar,

    r^2 = x^2 + y^2,   \tan\theta = \frac{y}{x},   r \ge 0,  \theta \in [0, 2\pi),    (8.58)

yields the gradient of a scalar function u(r, \theta), the divergence of a vector field
\vec{v}(r, \theta), and the Laplacian \nabla^2 u(r, \theta) as:

    \nabla u = \left(\frac{\partial u}{\partial r},\ \frac{1}{r}\frac{\partial u}{\partial \theta}\right),    (8.59a)
    \nabla \cdot \vec{v} = \frac{1}{r}\frac{\partial (r v_r)}{\partial r} + \frac{1}{r}\frac{\partial v_\theta}{\partial \theta},    (8.59b)
    \nabla^2 u = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2}.    (8.59c)
Then the corresponding linear PDE in polar coordinates is

    \frac{\partial \tilde{f}(r, \theta, t)}{\partial t} = \frac{\epsilon^2}{2}\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial \tilde{f}}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 \tilde{f}}{\partial \theta^2}\right],    (8.60)

in which \tilde{f}(r, \theta, t) = f_{xy}(r\cos\theta, r\sin\theta, t). In terms of the
probability density function f_r(r, \theta, t) = r\tilde{f}(r, \theta, t), we have

    \frac{\partial f_r(r, \theta, t)}{\partial t} = \frac{\epsilon^2}{2}\left(\frac{\partial^2 f_r}{\partial r^2} + \frac{1}{r^2}\frac{\partial^2 f_r}{\partial \theta^2}\right) + \frac{\partial}{\partial r}\left[\frac{d}{dr}\left(-\frac{\epsilon^2}{2}\ln r\right) f_r\right].    (8.61)

We see there is a noise-induced force pushing the diffusion toward greater r, with
potential function -\frac{\epsilon^2}{2}\ln r! This force is a purely probabilistic effect
due to geometry: the probability in a two-dimensional area element, f_{xy}(x, y)\, dx\, dy,
when expressed in polar coordinates, becomes \tilde{f}(r, \theta)\, r\, dr\, d\theta =
f_r(r, \theta)\, dr\, d\theta, which accounts for the greater area, and thus probability,
between [r, r + dr] for equal d\theta.
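The geometric drift in Eq. (8.61) can be seen directly: for two-dimensional Brownian
motion started at the origin, the exact radial density at time t is the Rayleigh density
(r/\epsilon^2 t)\, e^{-r^2/(2\epsilon^2 t)}, whose mass is pushed away from r = 0. A
minimal sketch comparing it with simulation (arbitrary parameter values):

    import numpy as np

    rng = np.random.default_rng(9)
    eps, t, n_paths = 1.0, 1.0, 200000
    x = rng.normal(0.0, eps * np.sqrt(t), n_paths)     # exact 2-d Brownian position
    y = rng.normal(0.0, eps * np.sqrt(t), n_paths)
    r = np.hypot(x, y)

    hist, edges = np.histogram(r, bins=np.linspace(0, 4, 21), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    rayleigh = centers / (eps**2 * t) * np.exp(-centers**2 / (2 * eps**2 * t))
    print(np.c_[centers, hist, rayleigh][:6])          # the two columns agree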
We now do things a little differently. We consider the Cartesian-to-polar transformation,
given in Eq. (8.58), for the SDE in (8.56), following Ito's calculus. We note

    dr = \frac{x\, dx + y\, dy}{r} + \frac{y^2 (dx)^2 - 2xy\, (dx)(dy) + x^2 (dy)^2}{2 r^3},
    d\theta = \frac{-y\, dx + x\, dy}{r^2} + \frac{xy\, (dx)^2 + (y^2 - x^2)(dx)(dy) - xy\, (dy)^2}{r^4}.    (8.62)

Therefore, the simple two-dimensional Brownian motion in polar coordinates, according
to Ito's convention, satisfies

    dr = \frac{\epsilon^2}{2r}\, dt + \epsilon\left[\cos\theta\, dB_1(t) + \sin\theta\, dB_2(t)\right],
    d\theta = \frac{\epsilon}{r}\left[-\sin\theta\, dB_1(t) + \cos\theta\, dB_2(t)\right].    (8.63)
Let us denote by \Phi the matrix in (8.63),

    \Phi(r, \theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\frac{1}{r}\sin\theta & \frac{1}{r}\cos\theta \end{pmatrix}.    (8.64)
Then \Phi\Phi^T = \mathrm{diag}(1, 1/r^2) is a diagonal matrix. Note that in the Ito PDE,
the matrix \Phi\Phi^T appears as

    \frac{\partial f_r(r, \theta, t)}{\partial t} = \frac{\epsilon^2}{2}\sum_{\alpha,\beta = r,\theta}\frac{\partial^2}{\partial \alpha\, \partial \beta}\left[\left(\Phi\Phi^T\right)_{\alpha\beta} f_r\right] - \frac{\partial}{\partial r}\left[\frac{\epsilon^2}{2r}\, f_r\right]
        = \frac{\epsilon^2}{2}\frac{\partial^2 f_r}{\partial r^2} + \frac{\epsilon^2}{2r^2}\frac{\partial^2 f_r}{\partial \theta^2} - \frac{\epsilon^2}{2}\frac{\partial}{\partial r}\left(\frac{f_r}{r}\right).    (8.65)

As a consistency check, we see that Eqs. (8.65) and (8.61) are the same.
We note that in Eq. (8.63), with dB_r(t) \equiv \cos\theta\, dB_1(t) + \sin\theta\, dB_2(t)
and dB_\theta(t) \equiv -\sin\theta\, dB_1(t) + \cos\theta\, dB_2(t), we have, according to
the Ito calculus:

    E[dB_r(t)] = E[dB_\theta(t)] = 0,   E\left[\left(dB_r(t)\right)^2\right] = E\left[\left(dB_\theta(t)\right)^2\right] = dt,    (8.66a)

and

    E[dB_r(t)\, dB_\theta(t)] = 0.    (8.66b)

Therefore, Eq. (8.63) can be simplified into

    dr = \frac{\epsilon^2}{2r}\, dt + \epsilon\, dB_r(t),   d\theta = \frac{\epsilon}{r}\, dB_\theta(t).    (8.67)
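As a numerical consistency check of Eq. (8.67), the radial SDE should reproduce the
radius of a direct two-dimensional simulation. A sketch under our own discretization
choices (we start at r_0 > 0 and reflect at the origin to guard against discretization
overshoot of the 1/r drift; all parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(10)
    eps, r0, T, dt, n_paths = 1.0, 2.0, 1.0, 1e-3, 20000

    # radial SDE, Eq. (8.67): Euler steps with a reflecting guard at r = 0
    r = np.full(n_paths, r0)
    for _ in range(int(T / dt)):
        r += eps**2 / (2 * r) * dt + eps * rng.normal(0.0, np.sqrt(dt), n_paths)
        r = np.abs(r)

    # direct 2-d Brownian motion started at (r0, 0), sampled exactly at time T
    x = r0 + eps * np.sqrt(T) * rng.normal(0.0, 1.0, n_paths)
    y = eps * np.sqrt(T) * rng.normal(0.0, 1.0, n_paths)
    r2d = np.hypot(x, y)

    print("radial SDE: mean", r.mean(), " var", r.var())
    print("2-d exact : mean", r2d.mean(), " var", r2d.var())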
However, for Stratonovich integration, the relations in (8.66) are not valid. The
Stratonovich SDE is

    dr = \epsilon\left[\cos\theta\, dB_1(t) + \sin\theta\, dB_2(t)\right],
    d\theta = \frac{\epsilon}{r}\left[-\sin\theta\, dB_1(t) + \cos\theta\, dB_2(t)\right],    (8.68)

and the corresponding linear PDE is

    \frac{\partial f_r(r, \theta, t)}{\partial t} = \frac{\epsilon^2}{2}\sum_{\gamma=1,2}\ \sum_{\alpha,\beta = r,\theta}\frac{\partial}{\partial \alpha}\left[\Phi_{\alpha\gamma}\frac{\partial}{\partial \beta}\left(\Phi_{\beta\gamma}\, f_r(r, \theta, t)\right)\right]
        = \frac{\epsilon^2}{2}\left[\frac{\partial^2 f_r}{\partial r^2} + \frac{1}{r^2}\frac{\partial^2 f_r}{\partial \theta^2} - \frac{\partial}{\partial r}\left(\frac{f_r}{r}\right)\right].    (8.69)

Again, we see this equation is the same as Eqs. (8.65) and (8.61).
