
Quantitative methods, 8th Lecture: A basic introduction to stochastic processes

Agnès Tourin, March 1, 2012

1 Preliminary concepts and examples


E: random experiment (an experiment that can be repeated under the same conditions and whose result cannot be predicted with certainty).

Ω: sample space (the set of possible outcomes of E).

T: time. Generally, we will have T = [0, +∞) (continuous-time process) or T = {0, 1, 2, 3, ...} (discrete-time process).

Stochastic process: X(t, ω), where t ∈ T and ω ∈ Ω. For ω fixed, {X(t, ω), t ∈ T} is a sample path.

S_X: state space (the set of values that the s.p. X can take). If S_X is finite or countably infinite, X is said to be a discrete-state process. If S_X is uncountably infinite, X is said to be a continuous-state process.

1.1 Some elementary examples

An elementary continuous-time process: X(t, ω) = tY(ω), where t ∈ (0, +∞) and Y > 0 is a random variable. Since S_X = [0, +∞), it is a continuous-state process.

An elementary discrete-time and discrete-state process: the symmetric random walk. E: a coin is tossed infinitely many times. T = {0, 1, 2, ...}. Let n ∈ T denote the toss number. At each toss, a head or a tail is obtained. We assume that the random walk starts at the origin, i.e. X_0 = 0. When a tail is obtained at the (n+1)th toss: X_{n+1} = X_n − 1. When a head is obtained at the (n+1)th toss: X_{n+1} = X_n + 1.

1.1.1 To illustrate the random walk

[Figure: binomial-tree diagram of the symmetric random walk started at 0, moving up or down by 1 at each toss.]
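A short simulation can also serve as an illustration here. The sketch below (an illustrative addition, not part of the lecture; the seed and horizon are arbitrary choices) generates one sample path of the symmetric random walk with numpy.

```python
# A minimal sketch (illustrative, not from the lecture): one sample path of
# the symmetric random walk, using an arbitrary seed and horizon.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 10

increments = rng.choice([-1, 1], size=n_steps)      # -1 for a tail, +1 for a head
path = np.concatenate(([0], increments.cumsum()))   # X_0 = 0, X_k = Y_1 + ... + Y_k

print(path)   # one realization of (X_0, X_1, ..., X_10)
```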

2 Distribution of a stochastic process

The distribution function of a stochastic process is defined in the following manner. One considers the so-called distribution function of order k of the s.p. X. Given an arbitrary set of k times t_1, t_2, ..., t_k, the joint distribution of the random vector (X(t_1), X(t_2), ..., X(t_k)) is given by

F(x_1, ..., x_k; t_1, ..., t_k) = P[X(t_1) ≤ x_1, ..., X(t_k) ≤ x_k].

2.1 Probability mass of order k of the discrete-time and discrete-state s.p. X

In the discrete case, the distribution function of order k is given by the probability mass function

P(x_1, ..., x_k; n_1, ..., n_k) = P[X_{n_1} = x_1, ..., X_{n_k} = x_k].

2.2 Continuous-time and continuous-space s.p.

In the continuous case, when the joint cumulative distribution function is differentiable once with respect to each of the variables x_1, ..., x_k, the joint density function of order k is given, for a set of k times t_1, t_2, ..., t_k, by

f(x_1, ..., x_k; t_1, ..., t_k) = ∂^k F(x_1, ..., x_k; t_1, ..., t_k) / (∂x_1 ... ∂x_k).

2.3 First and second moments


E[X(t)] and Var[X(t)] = E[(X(t) − E[X(t)])²].

3 More basic concepts


Autocorrelation function: R_X(t_1, t_2) = E[X(t_1)X(t_2)].

Autocovariance function: C_X(t_1, t_2) = E[X(t_1)X(t_2)] − E[X(t_1)]E[X(t_2)].

Correlation coefficient: ρ_X(t_1, t_2) = C_X(t_1, t_2) / (Var[X(t_1)] Var[X(t_2)])^{1/2}.

Independence: If for all t_1 < t_2 ≤ t_3 < t_4, X(t_2) − X(t_1) and X(t_4) − X(t_3) are independent, X is said to be a process with independent increments.

Stationarity: If for all s ≥ 0, X(t_2) − X(t_1) and X(t_2 + s) − X(t_1 + s) have the same distribution, X is said to be a process with stationary increments.
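These quantities can be estimated from simulated sample paths. The sketch below (an assumed illustration, not part of the lecture) does this for the elementary process X(t) = tY of Section 1.1, where Y is taken, by assumption, to be exponential with mean 1; in that case R_X(t_1, t_2) = t_1 t_2 E[Y²], C_X(t_1, t_2) = t_1 t_2 Var[Y], and ρ_X = 1.

```python
# A minimal sketch (assumed example, not from the lecture): Monte Carlo
# estimates of R_X, C_X and rho_X for the process X(t) = t*Y with Y ~ Exp(1).
import numpy as np

rng = np.random.default_rng(1)
n_paths = 100_000
t1, t2 = 1.0, 2.0

Y = rng.exponential(scale=1.0, size=n_paths)   # one draw of Y per sample path
X_t1, X_t2 = t1 * Y, t2 * Y                    # the path evaluated at t1 and t2

R = np.mean(X_t1 * X_t2)                                   # autocorrelation
C = R - np.mean(X_t1) * np.mean(X_t2)                      # autocovariance
rho = C / np.sqrt(np.var(X_t1) * np.var(X_t2))             # correlation coeff.

# For X(t) = t*Y with Y ~ Exp(1): R_X = 4, C_X = 2, rho_X = 1.
print(R, C, rho)
```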

3.1 An example: the Bernoulli process revisited

We recall the Bernoulli trials: one rolls a die independently an infinite number of times and defines a success as rolling a six. A Bernoulli process is a sequence of Bernoulli random variables X_1, X_2, ..., X_k, ... associated with the Bernoulli trials:

X_k = 1 if the kth trial is a success, and X_k = 0 otherwise.

The Bernoulli variable X_k has a Bernoulli distribution with parameter p = 1/6:

p(x) = p^x (1 − p)^{1−x}, for x = 0, 1.

In addition, its expectation is given by

E[X_k] = 1 · (probability of success) = 1/6.

We can also compute its autocorrelation and autocovariance functions:

R_X[k_1, k_2] = E[X_{k_1} X_{k_2}] = 1/6 if k_1 = k_2, and (1/6)² otherwise.

C_X[k_1, k_2] = E[X_{k_1} X_{k_2}] − E[X_{k_1}]E[X_{k_2}] = 1/6 − (1/6)² if k_1 = k_2, and 0 otherwise.

One may also define the stochastic process Y_n representing the number of successes after n trials,

Y_n = Σ_{k=1}^{n} X_k,

where the X_k are the Bernoulli random variables. The distribution of Y_n is Binomial with parameters n and p. By inverting this formula, we also obtain the relationship X_{n+1} = Y_{n+1} − Y_n. The process X_{n+1} = Y_{n+1} − Y_n is called the one-step increment of the process Y_n. Furthermore, since X_1, X_2, ..., X_n are independent random variables, the process Y_n has independent increments. Besides, since Y_{k+n+1} − Y_{k+n} = X_{k+n+1} and Y_{n+1} − Y_n = X_{n+1} have the same distribution, the process Y_n also has stationary increments. Note that we are not proving these last two properties rigorously here, since for the sake of simplicity we are considering only one-step increments instead of the more general m-step increments. However, the general argument is very similar to the simple argument above, and these properties of Y_n really do hold!
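The following sketch (an assumed illustration, not part of the original notes; it uses the p = 1/6 die-rolling setup from above) simulates the Bernoulli process and compares the empirical expectation, autocovariance and Binomial mean of Y_n with the formulas above.

```python
# A minimal sketch (assumption: p = 1/6 die-rolling example as in the text)
# checking E[X_k], C_X[k1, k2] and the Binomial mean of Y_n by simulation.
import numpy as np

rng = np.random.default_rng(2)
p, n_trials, n_paths = 1 / 6, 20, 200_000

# X[i, k] = 1 if the (k+1)th trial of path i is a success (rolling a six).
X = (rng.integers(1, 7, size=(n_paths, n_trials)) == 6).astype(float)
Y = X.cumsum(axis=1)                   # Y_n = number of successes after n trials

k1, k2 = 3, 7                          # two distinct trial indices (0-based)
print(X[:, k1].mean())                                     # ~ 1/6
print(np.mean(X[:, k1] * X[:, k1]) - X[:, k1].mean() ** 2) # ~ 1/6 - 1/36
print(np.mean(X[:, k1] * X[:, k2])
      - X[:, k1].mean() * X[:, k2].mean())                 # ~ 0 for k1 != k2
print(Y[:, -1].mean(), n_trials * p)                       # Binomial mean n*p
```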

4 The Saint Petersburg Paradox

(Daniel Bernoulli, early 1700s)

A fair coin is tossed until the player gets a head. If this happens at toss number n, the player is paid 2^n dollars by the bank. What is the fair amount to pay to play this game?

4.1 Answer

Since the probability of obtaining a head for the first time at the kth throw is 1/2^k, the expected payoff of the game is

Σ_{k=1}^{∞} (1/2)^k · 2^k = 1 + 1 + 1 + ... = +∞.

So, the fair price to pay for playing this game is infinite!
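As a rough numerical illustration (an addition of this write-up, not part of the original notes), the sketch below simulates the game many times and shows that the running average payoff keeps drifting upward instead of settling near a finite value.

```python
# A minimal sketch (illustrative, not from the lecture): simulate the
# Saint Petersburg game and watch the running average payoff grow.
import numpy as np

rng = np.random.default_rng(3)
n_games = 10**6

# Number of tosses until the first head is geometric with p = 1/2.
first_head = rng.geometric(p=0.5, size=n_games)
payoffs = 2.0 ** first_head            # the player receives 2^n dollars

running_avg = payoffs.cumsum() / np.arange(1, n_games + 1)
for m in (10**2, 10**4, 10**6):
    print(m, running_avg[m - 1])       # the average keeps growing with m
```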

5 Another important example: the Random Walk

Let us go back to the example of the symmetric random walk. One tosses a fair coin infinitely many times. Since the coin is unbiased, the probability of a tail is equal to the probability of a head, i.e. p = P(H) = P(T) = 1 − q = 1/2. The successive outcomes of the tosses are denoted by ω = ω_1 ω_2 ω_3 ... ω_n ..., where ω_n is the outcome of toss number n. We define the one-step increment of the random walk

Y_i = 1 if ω_i = H, and Y_i = −1 if ω_i = T,

and we define the random walk by initializing it at X_0 = 0 and adding up all the one-step increments:

X_k = Σ_{i=1}^{k} Y_i for k = 1, 2, ...

Given a set of integers 0 = k_0 < k_1 < ... < k_i < k_{i+1} < ... < k_m, we can further define the random variables called increments of the random walk:

X_{k_{i+1}} − X_{k_i} = Σ_{j=k_i+1}^{k_{i+1}} Y_j.

The increments X_{k_1} − X_0, X_{k_2} − X_{k_1}, ..., X_{k_{i+1}} − X_{k_i}, ..., X_{k_m} − X_{k_{m−1}} are independent. So we can state that the random walk has independent increments. The increments of the random walk are also clearly stationary, since the coin tosses are independent and identically distributed. In addition,

E[X_{k_{i+1}} − X_{k_i}] = Σ_{j=k_i+1}^{k_{i+1}} E[Y_j] = 0.

Var[X_{k_{i+1}} − X_{k_i}] = E[(Σ_{j=k_i+1}^{k_{i+1}} Y_j)²]
= E[Σ_{j=k_i+1}^{k_{i+1}} Y_j² + Σ_{j=k_i+1}^{k_{i+1}} Σ_{k≠j} Y_j Y_k]
= Σ_{j=k_i+1}^{k_{i+1}} E[Y_j²] + Σ_{j=k_i+1}^{k_{i+1}} Σ_{k≠j} E[Y_j Y_k]
= Σ_{j=k_i+1}^{k_{i+1}} 1 + 0
= k_{i+1} − k_i,

where E[Y_j Y_k] = E[Y_j]E[Y_k] = 0 for k ≠ j by independence. The variance of the increment over the time interval [k_i, k_{i+1}] is therefore equal to k_{i+1} − k_i. We also deduce the expectation and variance of the random walk from the last two results:

E[X_k | X_0 = 0] = E[X_k − X_0 | X_0 = 0] = E[X_k − X_0] = 0,
Var[X_k] = Var[X_k − X_0] = k.
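The sketch below (an illustrative check assumed by this write-up) simulates many paths of the symmetric random walk and compares the empirical mean and variance of X_k with the values 0 and k derived above.

```python
# A minimal sketch (not from the lecture): empirical mean and variance of the
# symmetric random walk X_k, to be compared with E[X_k] = 0 and Var[X_k] = k.
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 100_000, 50

increments = rng.choice([-1, 1], size=(n_paths, n_steps))  # the Y_i
X = increments.cumsum(axis=1)                              # X_k for k = 1..n_steps

for k in (10, 25, 50):
    col = X[:, k - 1]
    print(k, col.mean(), col.var())   # mean ~ 0, variance ~ k
```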

5.1 Representation by a binomial tree


[Figure: binomial tree of the symmetric random walk. Starting from 0, the walk reaches 1 or −1 after one step, {2, 0, −2} after two steps, {3, 1, −1, −3} after three steps, and {4, 2, 0, −2, −4} after four steps.]

5.2 The martingale property of the symmetric random walk

Clearly, the symmetric random walk we defined satisfies the so-called martingale property E[X_n | X_0 = 0] = 0. More generally, given the value X_k, we have E[X_{k+n} | X_k] = X_k. The interpretation of this property is as follows: on average, the symmetric random walk stays where it currently is; it has no tendency to rise, nor to fall. We will see later that, in the celebrated Black-Scholes-Merton model, the present (i.e. discounted) value of the stock price is a martingale under the risk-neutral probability measure.
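As a rough check (an assumption of this write-up, not in the original notes), the sketch below conditions on the value of X_k by grouping simulated paths and verifies that the average of X_{k+n} within each group is close to the shared value of X_k.

```python
# A minimal sketch (illustrative): E[X_{k+n} | X_k = x] should be close to x.
import numpy as np

rng = np.random.default_rng(5)
n_paths, k, n = 200_000, 10, 15

steps = rng.choice([-1, 1], size=(n_paths, k + n))
X = steps.cumsum(axis=1)
X_k, X_kn = X[:, k - 1], X[:, k + n - 1]

for x in (-4, -2, 0, 2, 4):              # condition on a few values of X_k
    mask = X_k == x
    print(x, X_kn[mask].mean())          # each average should be close to x
```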

5.3 The Markovian Property of the random walk

The general idea is that the random walk has no memory: at each coin toss, you restart from scratch (each coin toss is independent of the previous ones). In other words, the future depends only on the present, not on the past. The future value of the random walk is determined only by its present value and the outcomes of the next coin tosses; it does not depend on the whole path that the random walk took starting at X_0. More generally, a stochastic process X is said to be Markovian if, for t_1 < t_2 and for all x,

P{X(t_2) ≤ x | X(t), t ≤ t_1} = P{X(t_2) ≤ x | X(t_1)}.

5.4 What if the random walk is not symmetric?

Consider now a random walk which is not necessarily symmetric. The probability of getting a head is p ∈ (0, 1) and the probability of getting a tail is q = 1 − p. This corresponds to a biased coin. We can compute the expectation of a one-step increment of this random walk:

E[Y_i] = p · 1 + (1 − p) · (−1) = 2p − 1.

Next, we can calculate the expectation of the random walk itself, assuming that it starts at X_0 = 0:

E[X_n | X_0 = 0] = Σ_{i=1}^{n} E[Y_i] = n(2p − 1).

Note that it is equal to 0 if and only if p = 0.5. Hence, p = 0.5 is the only value of p for which the random walk is a martingale. For other values of p, the random walk is not a martingale! You will see later that the values p = 1/2, q = 1/2 correspond to the so-called risk-neutral probability measure.
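To make the effect of the bias concrete, here is a brief sketch (an illustration assumed by this write-up; the choice p = 0.6 is arbitrary) comparing the empirical mean of X_n with n(2p − 1).

```python
# A minimal sketch (illustrative): for a biased walk, E[X_n] = n*(2p - 1).
import numpy as np

rng = np.random.default_rng(6)
p, n, n_paths = 0.6, 40, 100_000

# +1 with probability p (head), -1 with probability 1 - p (tail).
steps = np.where(rng.random(size=(n_paths, n)) < p, 1, -1)
X_n = steps.sum(axis=1)

print(X_n.mean(), n * (2 * p - 1))   # both should be close to 8.0
```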

6 A different example in continuous time and continuous space: Gaussian processes

We have seen an example of a discrete-time Markov process, namely the random walk. A Gaussian process is an example of a continuous-time, continuous-state process; some Gaussian processes are also Markovian.

6.1 Definition

A stochastic process X is said to be a Gaussian process if the vector (X(t_1), X(t_2), ..., X(t_n)) has a multi-normal distribution for any n and for all t_1, t_2, ..., t_n.

6.2 Multi-normal distribution

A random vector (X_1, ..., X_n) has a multi-normal distribution if each random variable X_k can be expressed as a linear combination of independent standard Gaussian random variables, i.e.

X_k = μ_k + Σ_{j=1}^{m} c_{kj} Z_j, where μ_k ∈ R and Z_j ~ N(0, 1).

Furthermore, E[X_k] = μ_k, Var[X_k] = Σ_{j=1}^{m} c_{kj}², and Cov[X_i, X_k] = Σ_{l=1}^{m} c_{il} c_{kl}. The joint density of the vector (X_1, X_2, ..., X_n) is completely determined by the vector of means μ = (μ_1, ..., μ_n) and the covariance matrix K:

K =
[ Var[X_1]        Cov[X_1, X_2]   ...   Cov[X_1, X_n] ]
[ Cov[X_2, X_1]   Var[X_2]        ...   Cov[X_2, X_n] ]
[ ...             ...             ...   ...           ]
[ Cov[X_n, X_1]   Cov[X_n, X_2]   ...   Var[X_n]      ]

If K is nonsingular, one may write

f_X(x) = (2π)^{−n/2} (det K)^{−1/2} exp{ −(1/2) (x − μ) K^{−1} (x − μ)^T }.
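The representation above suggests a direct way to sample a multi-normal vector: pick coefficients c_{kj} (for instance a Cholesky factor of a target covariance matrix) and combine independent standard normals. The sketch below (an assumed illustration, with an arbitrarily chosen μ and K) checks that the empirical mean and covariance match μ and K.

```python
# A minimal sketch (illustrative): build (X_1, ..., X_n) as mu + C @ Z with Z
# standard normal, where C is a Cholesky factor of a chosen covariance K.
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([1.0, -2.0, 0.5])                       # assumed mean vector
K = np.array([[2.0, 0.6, 0.3],
              [0.6, 1.0, 0.2],
              [0.3, 0.2, 0.5]])                       # assumed covariance matrix
C = np.linalg.cholesky(K)                             # so that C @ C.T = K

n_samples = 200_000
Z = rng.standard_normal(size=(3, n_samples))          # independent N(0, 1)
X = mu[:, None] + C @ Z                               # each column is one draw

print(X.mean(axis=1))                                 # ~ mu
print(np.cov(X))                                      # ~ K
```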

6.3 More facts on Gaussian processes

Since Cov[X_i, X_j] = Cov[X_j, X_i], the matrix K is symmetric. K is also nonnegative definite:

Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j Cov[X_i, X_j] ≥ 0 for all α_1, ..., α_n ∈ R.

Any affine transformation of a Gaussian process is a Gaussian process.

7 More mathematical concepts

7.1 More on stationarity

First, we present two more concepts of stationarity: strict-sense stationarity (sss) and wide-sense stationarity (wss). Since it is often hard to determine whether a given process is sss, the weaker concept of wss turns out to be very useful in practice. An sss process is always also wss.

X is stationary (or strict-sense stationary, sss for short) if F(x_1, ..., x_n; t_1, ..., t_n) = F(x_1, ..., x_n; t_1 + s, ..., t_n + s) for all s, n, t_1, ..., t_n.

X is wide-sense stationary if E[X(t)] is independent of t and R_X(t_1, t_2) depends only on the difference t_2 − t_1.

Next, we apply these concepts to a couple of easy examples. X(t) = Y for t ≥ 0, where Y is a random variable: X is sss. The process Y_n counting the successes in the Bernoulli trials described earlier is not wss because E[Y_n] = np, which is not independent of the time n. The process X(t) = tY, where Y is a r.v., is not wss since E[X(t)] = tE[Y] is not independent of t.

7.1.1 Is the Random Walk stationary?

We need to compute

R_X(k, k + j) = E[X_k X_{k+j}] = E[X_k (X_{k+j} − X_k) + X_k²] = E[X_k] E[X_{k+j} − X_k] + E[X_k²] = 0 + E[X_k²] = k.

Since R_X(k, k + j) depends on k and not only on the lag j, the random walk is not wss.
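The sketch below (an illustrative check, not part of the notes) estimates E[X_k X_{k+j}] from simulated paths and confirms that it tracks k rather than depending only on the lag j.

```python
# A minimal sketch (illustrative): R_X(k, k + j) = E[X_k X_{k+j}] equals k for
# the symmetric random walk, so it is not wide-sense stationary.
import numpy as np

rng = np.random.default_rng(8)
n_paths, n_steps, j = 200_000, 60, 5

X = rng.choice([-1, 1], size=(n_paths, n_steps)).cumsum(axis=1)

for k in (10, 20, 40):
    est = np.mean(X[:, k - 1] * X[:, k + j - 1])
    print(k, est)   # each estimate should be close to k, for the same lag j
```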


7.1.2 How about the Gaussian process?

A Gaussian process is not necessarily wss. A well-known theorem states that a Gaussian process which is wss is also sss. The proof of this result is very straightforward.

Proof: If the process is wss, then R_X(t_1, t_2) = R_X(t_2 − t_1) and m_X(t) = m. The joint density is completely determined by m and by C_X(t_1, t_2) = R_X(t_2 − t_1) − m², and hence it is invariant with respect to a time shift by s.

7.2 Four notions of convergence

X_n → X as n → +∞.

With probability one (a.s.):

P{ lim_{n→+∞} X_n(ω) = X(ω) } = 1.

In probability (i.p.):

lim_{n→+∞} P{ |X_n(ω) − X(ω)| > ε } = 0 for every ε > 0.

In mean square (m.s.): if E{|X_n|²} < +∞ for all n,

lim_{n→+∞} E{ |X_n − X|² } = 0.

In distribution:

lim_{n→+∞} F_n(x) = F(x) at each point of continuity of F.
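As a small illustration (an assumption of this write-up, not in the original notes), the sketch below looks at the scaled random walk X_n/n, which converges to 0 in probability by the law of large numbers, and estimates P{|X_n/n| > ε} for growing n.

```python
# A minimal sketch (illustrative): X_n / n -> 0 in probability, so the
# estimated probability P(|X_n / n| > eps) shrinks as n grows.
import numpy as np

rng = np.random.default_rng(9)
n_paths, eps = 100_000, 0.1

for n in (10, 100, 1000):
    X_n = rng.choice([-1, 1], size=(n_paths, n)).sum(axis=1)
    print(n, np.mean(np.abs(X_n / n) > eps))   # decreases toward 0
```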


These notions are related as follows: convergence a.s. implies convergence in probability; convergence in m.s. implies convergence in probability; convergence in probability implies convergence in distribution.

