Stochastic Differential Equations and Applications

Ebook, 1,058 pages


About this ebook

This text develops the theory of systems of stochastic differential equations, and it presents applications in probability, partial differential equations, and stochastic control problems. Originally published in two volumes, it combines a book of basic theory and selected topics with a book of applications.
The first part explores Markov processes and Brownian motion; the stochastic integral and stochastic differential equations; elliptic and parabolic partial differential equations and their relations to stochastic differential equations; the Cameron-Martin-Girsanov theorem; and asymptotic estimates for solutions. The section concludes with a look at recurrent and transient solutions.
Volume 2 begins with an overview of auxiliary results in partial differential equations, followed by chapters on nonattainability; stability and spiraling of solutions; the Dirichlet problem for degenerate elliptic equations; small random perturbations of dynamical systems; and fundamental solutions of degenerate parabolic equations. Final chapters examine stopping time problems, stochastic games, and stochastic differential games. Problems appear at the end of each chapter, and familiarity with elementary probability is the sole prerequisite.
Language: English
Release date: Aug 28, 2012
ISBN: 9780486141121

    Stochastic Differential Equations and Applications - Avner Friedman

    Copyright

    Copyright © 1975, 1976 by Avner Friedman. Copyright © renewed 2003, 2004 by Avner Friedman. All rights reserved.

    Bibliographical Note

    This Dover edition, first published in 2006, is an unabridged republication, in one volume, of the work originally published in two volumes by Academic Press, Inc., New York, in 1975 and 1976, respectively. The two Tables of Contents have been combined, and the Index originally appearing at the end of Volume 1 has been moved to page 527.

    Library of Congress Cataloging-in-Publication Data

    Friedman, Avner.

    Stochastic differential equations and applications / Avner Friedman.

    p. cm.

    Originally published (in 2 v.): New York: Academic Press, 1975-1976, in series: Probability and mathematical statistics series; v. 28.

    Two volumes bound as one.

    Includes index.

    9780486141121

    1. Stochastic differential equations. I. Title.

    QA274.23.F74 2006

    519.2—dc22

    2006047226

    Manufactured in the United States of America. Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501.

    Table of Contents

    Title Page

    Copyright Page

    Dedication

    Stochastic Differential Equations and Applications - Volume 1

    1 - Stochastic Processes

    2 - Markov Processes

    3 - Brownian Motion

    4 - The Stochastic Integral

    5 - Stochastic Differential Equations

    6 - Elliptic and Parabolic Partial Differential Equations and Their Relations to Stochastic Differential Equations

    7 - The Cameron-Martin-Girsanov Theorem

    8 - Asymptotic Estimates for Solutions

    9 - Recurrent and Transient Solutions

    Stochastic Differential Equations and Applications - Volume 2

    Preface—Volume 2

    General Notation

    10 - Auxiliary Results in Partial Differential Equations

    11 - Nonattainability

    12 - Stability and Spiraling of Solutions

    13 - The Dirichlet Problem for Degenerate Elliptic Equations

    14 - Small Random Perturbations of Dynamical Systems

    15 - Fundamental Solutions for Degenerate Parabolic Equations

    16 - Stopping Time Problems and Stochastic Games

    17 - Stochastic Differential Games

    Bibliographical Remarks

    References

    Index-Volume 1

    Index-Volume 2

    A CATALOG OF SELECTED DOVER BOOKS IN SCIENCE AND MATHEMATICS

    To the memory of my mother,

    Hanna Friedman

    Stochastic Differential Equations and Applications

    Volume 1

    Preface-Volume 1

    The object of this book is to develop the theory of systems of stochastic differential equations and then give applications in probability, partial differential equations and stochastic control problems.

    In Volume 1 we develop the basic theory of stochastic differential equations and give a few selected topics. Volume 2 will be devoted entirely to applications.

    Chapters 1-5 form the basic theory of stochastic differential equations. The material can also be found, in one form or another, in other texts. Chapter 6 gives connections between solutions of partial differential equations and stochastic differential equations. The material on partial differential equations is essentially self-contained; for some of the proofs the reader is referred to an appropriate text.

    In Chapter 7 Girsanov’s formula is established. This formula is becoming increasingly useful in the theory of stochastic control.

    In Chapters 8 and 9 we study the behavior of sample paths of the solution of a stochastic differential system, as time increases to infinity.

    The book is written in a textlike style, namely, the material is essentially self-contained and problems are given at the end of each chapter. The only prerequisite is elementary probability; more specifically, the reader is assumed to be familiar with concepts such as conditional expectation, independence, and with elementary facts such as the Borel-Cantelli lemma. This prerequisite material can be found in any probability text, such as Breiman [1; Chapters 3,4].

    I would like to thank my colleague, Mark Pinsky, for some helpful conversations.

    General Notation

    All functions are real valued, unless otherwise explicitly stated.

    In Chapter n, Section m, formulas and theorems are indexed by (m.k) and m.k, respectively. When, in Chapter l, we refer to such a formula (m.k) (or Theorem m.k), we designate it by (n.m.k) (or Theorem n.m.k) if l ≠ n, and by (m.k) (or Theorem m.k) if l = n.

    Similarly, when referring to Section m in the same chapter, we designate the section by m; when referring to Section m of another chapter, say n, we designate the section by n.m.

    Finally, when we refer to conditions (A), (A1), (B) etc., these conditions are usually stated earlier in the same chapter.

    1

    Stochastic Processes

    1. The Kolmogorov construction of a stochastic process

    We write a.s. for almost surely, a.e. for almost everywhere, and a.a. for almost all.

    We shall use the following notation:

    Rn is the real euclidean n-dimensional space;

    ℬn is the Borel σ-field of Rn, i.e., the smallest σ-field generated by the open sets of Rn;

    R∞ is the space consisting of all infinite sequences (x1, x2, . . . , xn, . . .) of real numbers;

    ℬ∞ is the smallest σ-field of subsets of R∞ containing all k-dimensional rectangles, i.e., all sets of the form

    {(x1, x2, . . . ); x1 ∈ I1, . . . , xk ∈ Ik}, k > 0,

    where I1, . . . , Ik are intervals.

    ℬ∞ is also the smallest σ-field containing all sets

    {(x1, x2, . . . ); (x1, . . . , xk) ∈ B}, B ∈ ℬk, k > 0.

    An n-dimensional distribution function Fn(x1, . . . , xn) is a real-valued function defined on Rn and satisfying:

    (i) for any intervals Ik = [ak, bk), 1 ≤ k ≤ n,

    (1.1) Σ (−1)^(ε1+···+εn) Fn(x1^(ε1), . . . , xn^(εn)) ≥ 0,

    where the sum extends over all ε1, . . . , εn ∈ {0, 1} and xk^(0) = bk, xk^(1) = ak;

    (ii) if xj,k ↑ xj as k ↑ ∞ (1 ≤ j ≤ n), then Fn(x1,k, . . . , xn,k) ↑ Fn(x1, . . . , xn);

    (iii) if xj ↓ −∞ for some j, then Fn(x1, . . . , xn) ↓ 0, and if xj ↑ ∞ for all j, 1 ≤ j ≤ n, then Fn(x1, . . . , xn) ↑ 1.

    Unless the contrary is explicitly stated, random variables are always assumed to be real valued, i.e., they do not assume the values + ∞ or – ∞.

    If X1, . . . , Xn are random variables, then X = (X1, . . . , Xn) is called an n-dimensional random variable, or a random variable with values in Rn. The function

    (1.2) Fn(x1, . . . , xn) = P(X1 < x1, . . . , Xn < xn)

    is an n-dimensional distribution function. Conversely, for any n-dimensional distribution function Fn there is a probability space and a random variable X with values in Rn such that its distribution function is Fn, i.e., (1.2) holds; see, for instance, Breiman [1].

    A sequence {Fn(x1, . . . , xn)} of distribution functions is said to be consistent if

    (1.3) Fn+1(x1, . . . , xn, xn+1) → Fn(x1, . . . , xn) as xn+1 ↑ ∞ (n ≥ 1).
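    The consistency condition can be seen concretely for i.i.d. random variables, where Fn(x1, . . . , xn) is a product of one-dimensional distribution functions. The following sketch (an illustration, not from the text, using the standard exponential law) checks numerically that letting the last argument grow recovers the lower-dimensional function, as in (1.3).

```python
import math

def F1(x):
    # One-dimensional distribution function of a standard exponential: P(X < x)
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def Fn(xs):
    # Joint distribution function of i.i.d. standard exponentials:
    # Fn(x1, ..., xn) = P(X1 < x1, ..., Xn < xn) = product of F1(xj)
    p = 1.0
    for x in xs:
        p *= F1(x)
    return p

# Consistency (1.3): Fn+1(x1, ..., xn, xn+1) -> Fn(x1, ..., xn) as xn+1 -> infinity
lhs = Fn([0.5, 2.0, 1e9])   # a very large last argument stands in for the limit
rhs = Fn([0.5, 2.0])
assert abs(lhs - rhs) < 1e-12
```

    For an i.i.d. family the condition is automatic, since F1(xn+1) → 1; consistency only becomes a genuine constraint for dependent coordinates.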

    Let (Ω, ℱ, P) be a probability space, and let {Xn} be a sequence of random variables. We call such a sequence a discrete time stochastic process, or a countable stochastic process, or a stochastic sequence. The distribution functions of (X1, . . . , Xn), n ≥ 1, form a consistent sequence. The converse is also true, namely:

    Theorem 1.1. Let {Fn} be a sequence of n-dimensional distribution functions satisfying the consistency condition (1.3). Then there exists a probability space (Ω, ℱ, P) and a countable stochastic process {Xn} such that

    P(X1 < x1, . . . , Xn < xn) = Fn(x1, . . . , xn), n ≥ 1.

    Indeed, one takes Ω = R∞, ℱ = ℬ∞,

    P{(x1, x2, . . . ); x1 < y1, . . . , xn < yn} = Fn(y1, . . . , yn),

    and more generally

    P{(x1, x2, . . . ); a1 ≤ x1 < b1, . . . , an ≤ xn < bn}

    is given by the left-hand side of (1.1), and

    Xn((x1, x2, . . . )) = xn.

    The consistency condition guarantees that P is well defined on all rectangles. The main nontrivial point here is the fact that P extends to a probability measure on (R∞, ℬ∞), and this follows from the Kolmogorov extension theorem:

    If P is defined on all finite-dimensional rectangles, and:

    (i) P ≥ 0, P(R∞) = 1;

    (ii) if S1, . . . , Sm are disjoint n-dimensional rectangles whose union S = S1 ∪ · · · ∪ Sm is also an n-dimensional rectangle, then P(S) = P(S1) + · · · + P(Sm);

    (iii) if {Sj} is a nondecreasing sequence of n-dimensional rectangles and Sj ↑ S as j ↑ ∞, then limj→∞ P(Sj) = P(S);

    then there is a unique extension of P into a probability on (R∞, ℬ∞).

    For proof, see Breiman [1].

    A stochastic process is a family of random variables {X(t), t ∈ I} on a probability space (Ω, ℱ, P), where t varies in a real interval I (I is open, closed, or half-closed). We denote it also by {X(t), t ∈ I}, {X(t)} (t ∈ I), or X(t), t ∈ I.

    A stochastic process defines a set of distribution functions

    (1.4) Ft1, . . . , tn(x1, . . . , xn) = P(X(t1) < x1, . . . , X(tn) < xn),

    where t1 < · · · < tn, tj ∈ I, n = 1, 2, . . . ; they are called the finite-dimensional joint distribution functions of the process.

    A set of distribution functions

    (1.5) Ft1, . . . , tn(x1, . . . , xn),

    defined for all t1 < · · · < tn, tj ∈ I, 1 ≤ j ≤ n, n = 1, 2, . . . , is said to be consistent if

    Ft1, . . . , tn(x1, . . . , xn) → Ft1, . . . , tj−1, tj+1, . . . , tn(x1, . . . , xj−1, xj+1, . . . , xn) as xj ↑ ∞.

    The set given by (1.4) is consistent. We shall now prove the converse.

    Theorem 1.2. Given a consistent set of distribution functions as in (1.5), there exists a stochastic process {X(t), t ∈ I} defined on a probability space (Ω, ℱ, P) such that the given distribution functions are the joint distribution functions of the process, i.e., such that (1.4) holds.

    Proof. Denote by RI the set of all real-valued functions x(·) on I. A set of the form

    (1.6) {x(·); (x(t1), x(t2), . . . ) ∈ D}, D ∈ ℬ∞,

    is called a cylinder set with countable base {tj} ⊂ I. Denote by ℱ the class of all cylinder sets with countable base. This class is clearly a σ-field, and it coincides with the σ-field generated by all the finite-dimensional rectangles in RI,

    {x(·); x(t1) ∈ I1, . . . , x(tn) ∈ In},

    where I1, . . . , In are intervals. We take Ω = RI. Given a sequence T = {tj} in I, denote by ℱT the class of all sets

    {x(·); (x(t1), x(t2), . . . ) ∈ D}, D ∈ ℬ∞.

    By Theorem 1.1 and the remark following it, there exists a unique probability measure PT on (Ω, ℱT) such that

    (1.7) PT{x(·); x(t1) < y1, . . . , x(tn) < yn} = Ft1, . . . , tn(y1, . . . , yn).

    Set P(B) = PT(B) if B ∈ ℱT. For this definition to make sense we must show that if B ∈ ℱT ∩ ℱT′, where T = {tj}, T′ = {t′j}, then

    (1.8) PT(B) = PT′(B).

    Setting S = T ∪ T′ we have T ⊂ S, T′ ⊂ S. Since PT = PS on all finite-dimensional rectangles with base in T, PT = PS on ℱT. Thus PT(B) = PS(B). Similarly, PT′(B) = PS(B), and (1.8) follows.

    Having defined the probability space (Ω, ℱ, P), we now take

    X(t, x(·)) = x(t);

    (1.4) then immediately follows.

    We refer to the stochastic process constructed in this proof as the process obtained by the Kolmogorov construction.
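    The Kolmogorov construction starts from nothing but a consistent family of finite-dimensional distributions, and in simulation that family is all one ever samples. A minimal sketch (an illustration, not from the text; the Gaussian family with independent increments used here is the one that, in Chapter 3, will define Brownian motion):

```python
import random
import statistics

def sample_fdd(times, rng):
    # One draw of (X(t1), ..., X(tn)) from the consistent Gaussian family:
    # the increments X(tj) - X(tj-1) are independent N(0, tj - tj-1),
    # with t0 = 0 and X(0) = 0.
    x, t_prev, out = 0.0, 0.0, []
    for t in times:
        x += rng.gauss(0.0, (t - t_prev) ** 0.5)  # independent increment
        out.append(x)
        t_prev = t
    return out

rng = random.Random(0)
draws = [sample_fdd([0.5, 1.0], rng) for _ in range(20_000)]

# The marginal of X(1.0) prescribed by this family is N(0, 1),
# so the sample variance should be near 1.
var = statistics.pvariance(d[1] for d in draws)
assert 0.9 < var < 1.1
```

    Consistency of the family is what makes such sampling coherent: marginalizing a draw of (X(0.5), X(1.0)) to its second coordinate gives the same law as sampling X(1.0) alone.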

    Definition. A ν-dimensional countable stochastic process (or a ν-dimensional stochastic sequence) is a sequence {Xn} of ν-dimensional random variables. A family {X(t), t ∈ I} of ν-dimensional random variables is called a ν-dimensional stochastic process.

    A ν-dimensional stochastic sequence {Xn} gives rise to functions

    (1.9) Fn(x1, . . . , xn) = P(X1 < x1, . . . , Xn < xn), xi ∈ Rν,

    where a < b means that aj < bj for 1 ≤ j ≤ ν, with a = (a1, . . . , aν), b = (b1, . . . , bν). Writing Xn = (Xn1, . . . , Xnν) and arranging the scalar components X11, . . . , X1ν, X21, . . . in a single sequence {Yi}, set

    (1.10) Gi(x1, . . . , xi) = P(Y1 < x1, . . . , Yi < xi);

    it is clear that the sequence {Gi(x1, . . . , xi)} is a consistent sequence of distribution functions.

    Definition. A function Fn(x1, . . . , xn), xi ∈ Rν, is called an n-dimensional distribution function with respect to Rν if:

    (i) condition (1.1) holds with ak, bk replaced by ak = (ak1, . . . , akν), bk = (bk1, . . . , bkν), Ik = [ak, bk), and with xj ∈ Rν;

    (ii) Fn is continuous from below in each of the variables xk, 1 ≤ k ≤ n (as in condition (ii) for the case ν = 1);

    (iii) Fn(x1, . . . , xn) ↓ 0 if some component of some xj decreases to −∞, and Fn(x1, . . . , xn) ↑ 1 if all components of all the xj increase to ∞. Set

    (1.11) Gi(x1, . . . , xi) = Fn(x̂1, . . . , x̂n) if (n − 1)ν < i ≤ nν,

    where x̂1, . . . , x̂n−1 and the first i − (n − 1)ν components of x̂n agree with the given variables, whereas the last nν − i components of x̂n are set equal to +∞. Then, obviously, the set of distribution functions G(n−1)ν+1, . . . , Gnν defined by (1.10), (1.11) is consistent.

    A sequence of distributions {Fn} with respect to Rν is said to be consistent if Fn+1(x1, . . . , xn, xn+1) → Fn(x1, . . . , xn) as every component of xn+1 increases to ∞. This is clearly the case if and only if the corresponding sequence of distributions Gi(x1, . . . , xi), defined by (1.10), (1.11), is consistent.

    If we apply Theorem 1.1 to the latter sequence, we conclude that for any consistent sequence of distribution functions with respect to Rν there exists a countable ν-dimensional stochastic process {Xn} whose distribution functions are the Fn. The probability space (Ω, ℱ, P) can be taken such that Ω = Rν,∞, the space of all sequences (x1, x2, . . . ) with xj ∈ Rν, and ℱ = ℬν,∞, the smallest σ-field containing all finite-dimensional rectangles

    {(x1, x2, . . . ); x1 ∈ I1, . . . , xk ∈ Ik}, k > 0,

    where each Ij is an interval in Rν.

    A set of distribution functions Ft1, . . . , tn(x1, . . . , xn) with respect to Rν, where t1 < · · · < tn, ti ∈ I, n = 1, 2, . . . , is said to be consistent if Ft1, . . . , tn(x1, . . . , xn) → Ft1, . . . , tj−1, tj+1, . . . , tn(x1, . . . , xj−1, xj+1, . . . , xn) as all the components of xj increase to ∞.

    Theorem 1.3. Given a consistent set of distribution functions with respect to Rν, {Ft1, . . . , tn(x1, . . . , xn); t1 < · · · < tn, ti ∈ I}, there exists a probability space (Ω, ℱ, P) and a ν-dimensional stochastic process X(t) (t ∈ I) defined on it such that

    (1.12) P(X(t1) < x1, . . . , X(tn) < xn) = Ft1, . . . , tn(x1, . . . , xn).

    The functions in (1.12) are called the finite-dimensional joint distribution functions (with respect to Rν) of the process X(t).

    The proof is similar to the proof of Theorem 1.2; it is based on the extension of Theorem 1.1 to distribution functions with respect to Rν, mentioned above. Here Ω is the space Rν,I of all functions x(·) from I into Rν, and ℱ consists of all sets (1.6) with D ∈ ℬν,∞. The construction of the process X(t) in this proof is called the Kolmogorov construction.

    2. Separable and continuous processes

    Definition. A ν-dimensional stochastic process {X(t), t ∈ I} is called separable if there exists a countable sequence T = {tj} that is a dense subset of I and a subset N of Ω with P(N) = 0 such that, if ω ∉ N,

    {X(tj, ω) ∈ F for all tj ∈ J ∩ T} implies {X(t, ω) ∈ F for all t ∈ J}

    for any open subset J of I and for any closed subset F of Rν. T is called a set of separability or, briefly, a separant.

    Two ν-dimensional stochastic processes {X(t)} and {X′(t)} (t∈I) defined on the same probability space are said to be stochastically equivalent if

    P[X(t) ≠ X′(t)] = 0 for all t ∈ I.

    We then say that {X′(t)} is a version of {X(t)}.
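    A standard illustration (not from the text) of the gap between stochastic equivalence and equality of sample paths: take U uniform on (0, 1), X(t) ≡ 0, and X′(t) = 1 if t = U, 0 otherwise. For each fixed t, P(U = t) = 0, so {X′(t)} is a version of {X(t)}, yet every sample path of X′ has a jump.

```python
import random

def X(t, u):
    # the identically-zero process
    return 0.0

def X_prime(t, u):
    # differs from X only at the single (random) time t = u
    return 1.0 if t == u else 0.0

rng = random.Random(1)
t_fixed = 0.25
disagreements = 0
for _ in range(100_000):
    u = rng.random()                      # one sample point, i.e., one value of U
    if X(t_fixed, u) != X_prime(t_fixed, u):
        disagreements += 1

assert disagreements == 0                 # P[X(t) != X'(t)] = 0 at each fixed t
assert X_prime(0.3, 0.3) == 1.0           # yet the sample paths themselves differ
```

    This is why Theorem 2.1 is useful: among all versions of a process one looks for a separable (or continuous) one, since versions need not share pathwise properties.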

    Theorem 2.1. Any ν-dimensional process is stochastically equivalent to a separable stochastic process.

    The random variables of the separable process may actually take the values ± ∞, even though the random variables of the original process are real valued.

    For proof the reader is referred to Doob [1]. Theorem 2.1 will not be used in the text of this book.

    From Theorem 2.1 it follows that when one constructs a stochastic process from a given set of distribution functions (as in the Kolmogorov construction), one may always take this process to be separable.

    Let X(t) be a ν-dimensional stochastic process for t ∈ I. Thus, for each t, X(t) is a random variable X(t, ω). The functions t → X(t, ω) (ω fixed) are called the sample paths of the process. If for a.a. ω the sample paths are continuous functions for all t ∈ I, then we say that the stochastic process is continuous. If for a.a. ω the sample paths are right (left) continuous, then we say that the stochastic process is right (left) continuous.

    It is easy to see that a right (or left) continuous stochastic process is separable and any dense sequence in I is a set of separability.

    A stochastic process {X(t), t ∈ I} is said to be continuous in probability if for any s ∈ I and ε > 0,

    P[|X(t) − X(s)| > ε] → 0 if t ∈ I, t → s.

    The following theorem is due to Kolmogorov.

    Theorem 2.2. Let X(t) (t ∈ I) be a ν-dimensional stochastic process such that

    (2.1) E|X(t) − X(s)|^β ≤ C|t − s|^(1+α) for all t, s ∈ I,

    for some positive constants C, α, β. Then there is a version of X(t) that is a continuous process.

    Proof. Take for simplicity I = [0, 1]. Let 0 < γ < α/β and let δ be a positive number such that

    (2.2) μ ≡ (1 − δ)(1 + α − βγ) − (1 + δ) > 0.

    Then, by (2.1) and the Chebyshev inequality,

    P{|X(j2^−n) − X(i2^−n)| > [(j − i)2^−n]^γ for some 0 ≤ i ≤ j ≤ 2^n, j − i < 2^nδ} ≤ C1 2^−nμ,

    where μ > 0 by (2.2). Since Σ C1 2^−nμ < ∞, the Borel-Cantelli lemma implies that for a.a. ω

    (2.3) |X(j2^−n, ω) − X(i2^−n, ω)| ≤ h((j − i)2^−n) if n ≥ m(ω), 0 ≤ i ≤ j ≤ 2^n, j − i < 2^nδ,

    where h(t) = t^γ and m(ω) is some sufficiently large positive integer.

    Let t1, t2 be any rational numbers in the interval (0, 1) such that t1 < t2 and t ≡ t2 − t1 < 2^−m(1−δ), where m = m(ω) is as in (2.3). Choose n ≥ m such that

    2^−(n+1)(1−δ) ≤ t < 2^−n(1−δ).

    We can expand t1, t2 in the form

    t1 = i2^−n − 2^−p1 − · · · − 2^−pk,

    t2 = j2^−n + 2^−q1 + · · · + 2^−ql,

    where n < p1 < · · · < pk, n < q1 < · · · < ql. Then t1 ≤ i2^−n ≤ j2^−n ≤ t2 and j − i ≤ t2^n < 2^nδ. By (2.3), applied to consecutive partial sums of the expansion of t1,

    |X(t1) − X(i2^−n)| ≤ h(2^−p1) + · · · + h(2^−pk).

    Hence

    |X(t1) − X(i2^−n)| ≤ C3h(2^−n).

    Similarly,

    |X(t2) − X(j2^−n)| ≤ C3h(2^−n).

    Finally, by (2.3),

    |X(j2^−n) − X(i2^−n)| ≤ h((j − i)2^−n) ≤ h(t).

    Hence

    (2.4) |X(t2) − X(t1)| ≤ C4h(t).

    Let T = {tj} be the sequence of all rational numbers in the interval (0, 1). From (2.4) it follows that

    (2.5) |X(t, ω) − X(s, ω)| ≤ C4h(|t − s|) if t, s ∈ T, |t − s| < 2^−m(ω)(1−δ).

    Thus X(t, ω), when restricted to t ∈ T, is uniformly continuous. Let X̃(t, ω) be its unique continuous extension to 0 ≤ t ≤ 1. We claim that X̃(t) is a version of X(t).

    Let t* ∈ [0, 1] and let tj′ ∈ T, tj′ → t* as j′ → ∞. Since X̃(t) is a continuous process,

    X̃(tj′, ω) → X̃(t*, ω) a.s. as j′ → ∞.

    From (2.1) it also follows that

    X(tj′, ω) → X(t*, ω) in probability, as j′ → ∞.

    Since X̃(tj′, ω) = X(tj′, ω) a.s., it follows that X̃(t*, ω) = X(t*, ω) a.s.

    From (2.5) we see that

    |X̃(t, ω) − X̃(s, ω)| ≤ C4h(|t − s|)

    if |t − s| < 2^−m(1−δ), m = m(ω). Thus:

    Corollary 2.3. Let X(t) (t ∈ I) be a ν-dimensional process satisfying (2.1). Then for any bounded subinterval J ⊂ I and for any ε > 0, a.a. sample paths of the continuous version of X(t) satisfy a Hölder condition with exponent (α/β) − ε on J.
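    Hypothesis (2.1) can be checked numerically for Gaussian increments (anticipating Brownian motion, Chapter 3): an increment over [s, t] is N(0, t − s), so E|X(t) − X(s)|^4 = 3|t − s|^2, i.e., (2.1) holds with β = 4, C = 3, α = 1, and Corollary 2.3 then yields Hölder continuity with any exponent below α/β = 1/4 (taking β larger pushes the exponent toward 1/2). A Monte Carlo sketch:

```python
import random

# Estimate the fourth moment of a Gaussian increment over a window of
# length dt and compare it with the value 3 * dt^2 required by (2.1)
# with beta = 4, alpha = 1, C = 3.
rng = random.Random(42)
dt = 0.01
n = 200_000
fourth_moment = sum(rng.gauss(0.0, dt ** 0.5) ** 4 for _ in range(n)) / n

target = 3 * dt * dt                    # = 3e-4
assert abs(fourth_moment - target) < 0.3 * target  # loose Monte Carlo tolerance
```

    The exact value 3|t − s|^2 comes from E Z^4 = 3 for a standard normal Z; the simulation only confirms it within sampling error.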

    3. Martingales and stopping times

    Definition. A stochastic sequence {Xn} is called a martingale if E|Xn| < ∞ for all n, and

    E(Xn+1|X1, . . . , Xn) = Xn a.s. (n = 1, 2, . . .).

    It is a submartingale if E|Xn| < ∞ for all n, and

    E(Xn+1|X1, ..., Xn) ≥ Xn a.s. (n = 1, 2, . . .).

    Theorem 3.1. If {Xn} is a submartingale, then, for any λ > 0 and n ≥ 1,

    (3.1) P{ max(1≤k≤n) Xk ≥ λ } ≤ E|Xn| / λ.

    This inequality is called the martingale inequality.

    Proof. Let

    A1 = {X1 ≥ λ}, Ak = {X1 < λ, . . . , Xk−1 < λ, Xk ≥ λ} (2 ≤ k ≤ n).

    The sets Ak are disjoint and their union is the event {max(1≤k≤n) Xk ≥ λ}. Therefore

    (3.2) λP{ max(1≤k≤n) Xk ≥ λ } = λ Σ(k=1 to n) P(Ak) ≤ Σ(k=1 to n) E(Xk χAk),

    where χB is the indicator function of a set B. Since Ak belongs to the σ-field of X1, . . . , Xk, the submartingale property gives

    E(Xk χAk) ≤ E(Xn χAk), so that Σ(k=1 to n) E(Xk χAk) ≤ E|Xn|.

    Comparing this with (3.2), the assertion (3.1) follows.

    Corollary 3.2. If {Xn} is a martingale and E|Xn|^α < ∞ for some α ≥ 1 and all n ≥ 1, then

    P{ max(1≤k≤n) |Xk| ≥ λ } ≤ E|Xn|^α / λ^α for any λ > 0.

    This follows from Theorem 3.1 and the fact that {|Xn|^α} is a submartingale (see Problem 8).
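    A quick numerical sanity check of the maximal inequality (an illustration, not from the text) with α = 2, applied to the simple random walk martingale of Problem 6:

```python
import random

# Simple random walk X_k = xi_1 + ... + xi_k with P(xi = +1) = P(xi = -1) = 1/2
# is a martingale; Corollary 3.2 with alpha = 2 gives
#     P( max_{k<=n} |X_k| >= lam ) <= E|X_n|^2 / lam^2.
rng = random.Random(7)
n, lam, trials = 50, 10.0, 20_000

hits = 0
second_moment = 0.0
for _ in range(trials):
    x, max_abs = 0.0, 0.0
    for _ in range(n):
        x += rng.choice((-1.0, 1.0))
        max_abs = max(max_abs, abs(x))
    hits += max_abs >= lam
    second_moment += x * x

lhs = hits / trials                       # empirical P(max |X_k| >= lam)
rhs = (second_moment / trials) / lam ** 2 # empirical E|X_n|^2 / lam^2 (about n/lam^2)
assert lhs <= rhs
```

    With α = 2 this is Kolmogorov's inequality; the bound n/λ² is far from tight for this walk, which is consistent with the inequality holding with room to spare.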

    Definition. A stochastic process {X(t), t ∈ I} is called a martingale if E|X(t)| < ∞ for all t ∈ I, and

    E{X(t) | X(τ), τ ≤ s, τ ∈ I} = X(s) for all t ∈ I, s ∈ I, s ≤ t.

    It is called a submartingale if E|X(t)| < ∞ and

    E{X(t) | X(τ), τ ≤ s, τ ∈ I} ≥ X(s) for all t ∈ I, s ∈ I, s ≤ t.

    Corollary 3.3. (i) If {X(t)} (t ∈ I) is a separable submartingale, then, for any t ∈ I and λ > 0,

    P{ sup(s∈I, s≤t) X(s) ≥ λ } ≤ E|X(t)| / λ.

    (ii) If {X(t)} (t ∈ I) is a separable martingale and if E|X(t)|^α < ∞ for some α ≥ 1 and all t ∈ I, then

    P{ sup(s∈I, s≤t) |X(s)| ≥ λ } ≤ E|X(t)|^α / λ^α.

    This follows by applying Theorem 3.1 and Corollary 3.2 with Xj = X(sj) if 1 ≤ j ≤ n − 1, Xj = X(t) if j = n, where s1 < s2 < ··· < sn−1 < t, taking the set {s1, ···, sn−1} to increase to the set of all tj with tj < t, where {tj} is a set of separability (cf. Problem 1(b)).

    We denote by ℱ(X(λ), λ ∈ I) the σ-field generated by the random variables X(λ), λ ∈ I.

    Definition. Let X(t), α ≤ t ≤ β, be a ν-dimensional stochastic process. (If β = ∞, we take α ≤ t < ∞.) A (finite-valued) random variable τ is called a stopping time with respect to the process X(t) if α ≤ τ ≤ β and if, for any α ≤ t ≤ β, {τ ≤ t} ∈ ℱ(X(λ), α ≤ λ ≤ t). If β = ∞ and τ may also assume the value +∞, then we call τ an extended stopping time.

    Notice that the sets {τ > t}, {s < τ ≤ t} (where s < t) also belong to ℱ(X(λ), α ≤ λ ≤ t).

    Lemma 3.4. If τ is a stopping time with respect to a ν-dimensional process X(t), t ≥ 0, then there exists a sequence of stopping times τn such that

    (i) τn has a discrete range,

    (ii) τn ≥ τ everywhere,

    (iii) τn → τ everywhere as n ↑ ∞.

    Proof. Define τn = 0 if τ = 0 and

    τn = (j + 1)/n if j/n < τ ≤ (j + 1)/n (j = 0, 1, 2, . . . ).

    Then clearly (i)-(iii) hold. Finally, τn is a stopping time since

    {τn ≤ (j + 1)/n} = {τ ≤ (j + 1)/n} ∈ ℱ(X(λ), λ ≤ (j + 1)/n).
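    The discretization in the proof can be sketched as follows (an illustration, not from the text; the grid spacing 1/n matches the proof of Theorem 3.5, a dyadic grid 2^−n being the other common choice):

```python
import math

def tau_n(tau, n):
    # Discretize a stopping time from above: tau_n = 0 if tau = 0, and
    # tau_n = (j + 1)/n on the event { j/n < tau <= (j + 1)/n }, i.e.,
    # tau_n is the smallest grid point k/n with k/n >= tau.
    if tau == 0:
        return 0.0
    k = math.ceil(tau * n)
    return k / n

tau = 0.7303
approximations = [tau_n(tau, n) for n in (1, 10, 100, 10_000)]

assert all(a >= tau for a in approximations)   # tau_n >= tau everywhere
assert abs(approximations[-1] - tau) <= 1e-4   # tau_n -> tau as n increases
```

    Approximating from above is essential: {τn ≤ k/n} = {τ ≤ k/n} is then determined by the process up to time k/n, so each τn is again a stopping time, while approximation from below would not be.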

    If X(t) is right continuous, then

    X(t Λ τn) → X(t Λ τ) a.s. as n → ∞.

    Since X(t Λ τn) is obviously a random variable, the same is true of X(t Λ τ). We shall prove:

    Theorem 3.5. If τ is a stopping time with respect to a right continuous martingale X(t), t ≥ 0, then the process Y(t) = X(t Λ τ), t ≥ 0, is also a right continuous martingale.

    Before giving the proof, we need some preliminaries.

    Definition. A sequence of random variables Xn is said to be uniformly integrable if E|Xn| < ∞ for any n ≥ 1, and

    (3.3) sup(n≥1) ∫(|Xn|>λ) |Xn| dP → 0 as λ → ∞.

    Lemma 3.6. If {Xn} is uniformly integrable, then sup(n) E|Xn| < ∞. If further Xn → X a.s. or in probability, then E|X − Xn| → 0 and EXn → EX.

    Proof. For any λ > 0,

    E|Xn| ≤ λ + ∫(|Xn|>λ) |Xn| dP.

    From this and (3.3) it follows that E|Xn| ≤ C, where C is a constant independent of n. To prove the last assertion, notice, by Fatou's lemma, that if Xn → X a.s., then E|X| ≤ lim inf E|Xn| ≤ C.

    Next, by Egoroff's theorem, Xn → X almost uniformly, i.e., for any ε > 0 there is an event A with P(A) < ε such that Xn → X uniformly on Ω\A. Hence

    lim sup(n→∞) E|X − Xn| ≤ lim sup(n→∞) ∫A |X − Xn| dP ≤ ∫A |X| dP + lim sup(n→∞) ∫A |Xn| dP.

    Now, by (3.3),

    ∫A |Xn| dP ≤ λP(A) + ∫(|Xn|>λ) |Xn| dP ≤ λε + η(λ),

    where η(λ) → 0 if λ → ∞, and η(λ) does not depend on n. By Fatou's lemma, the same inequality holds for X. Taking ε ↓ 0 and then λ → ∞, we get E|X − Xn| → 0; consequently EXn → EX.

    If Xn → X in probability, then any subsequence {Xn′} of {Xn} has a sub-subsequence {Xn″} which converges a.s. to X. By what we have proved, E|Xn″ − X| → 0 if n″ → ∞. But this implies that E|Xn − X| → 0 as n → ∞.

    Proof of Theorem 3.5. Let 0 ≤ t1 < t2 < ··· < tm−1 < tm = s < t,

    (3.4) A = {X(t1) ∈ B1, . . . , X(tm) ∈ Bm}, Bi ∈ ℬν,

    and let τn be the stopping times constructed in the proof of Lemma 3.4. Assume first that there is an integer j such that s = (j + 1)/n. On the set {τ ≤ (j + 1)/n} we have τn ≤ (j + 1)/n, so that

    X(s Λ τn) = X(τn) = X(t Λ τn).

    Consequently

    (3.5) ∫(A∩{τ≤(j+1)/n}) X(s Λ τn) dP = ∫(A∩{τ≤(j+1)/n}) X(t Λ τn) dP.

    Next,

    (3.6)

    where (M + 1)/n < t ≤ (M + 2)/n.

    Notice that A ∩ {τ > (M + 1)/n} belongs to ℱ(X(λ), λ ≤ (M + 1)/n), so we can use the martingale property of X(t) to replace X(t) by X((M + 1)/n) in the last term on the right-hand side of (3.6). Proceeding in this manner step by step, we arrive at

    ∫(A∩{τ>(j+1)/n}) X(t Λ τn) dP = ∫(A∩{τ>(j+1)/n}) X(s Λ τn) dP,

    since s = (j + 1)/n, A ∩ {τ > (j + 1)/n} ∈ ℱ(X(λ), λ ≤ (j + 1)/n), and s Λ τn = s on this set.

    Recalling (3.5), we conclude that

    (3.7) ∫A X(s Λ τn) dP = ∫A X(t Λ τn) dP.

    So far we have assumed that the number s is in the range of τn. If this is not the case, i.e., if j/n < s < (j + 1)/n for some j, then we modify the definition of τn by taking τn = s on {j/n < τ ≤ s} and τn = (j + 1)/n on {s < τ ≤ (j + 1)/n}.

    The assertion (3.7) then follows as before, and the τn satisfy the properties asserted in Lemma 3.4.

    We shall now complete the proof of the theorem in case τ is bounded, i.e., τ ≤ N < ∞ (N constant). The range {si,n} of τn then lies in [0, N + 1).

    Notice now that |X(t)| is a submartingale. If we apply the proof of (3.7) to |X(t)|, instead of X(t), we obtain

    ∫A|X(s Λ τn)|dP ≤ ∫A|X(t Λ τn)| dP

    whenever t > s. Taking, in particular,

    τ = τn, A = {|X(s Λ τn)| >λ}, t = N + 1,

    we get

    But since |X(t)| is a submartingale,

    Summing over i,

    (3.8)

    Now, since X(t) is right continuous and τ is bounded, the random variable

    is bounded in probability. This implies that the right-hand side of (3.8) converges to 0 if λ → ∞, uniformly with respect to n. Thus the sequence {X(s Λ τn)} is uniformly integrable. Similarly, one shows that the sequence {X(t Λ τn)} is uniformly integrable. By Lemma 3.6 it follows that E|X(t Λ τ)| < ∞.

    If we now take n ↑ ∞ in (3.7) and apply Lemma 3.6, we get

    (3.9)

    Since this is true for any set A of the form (3.4), {X(t Λ τ)} is a martingale.

    We have thus completed the proof in case τ is bounded. If τ is unbounded, then τ Λ N is a bounded stopping time. Hence E|X(t Λ τ)| < ∞ if N > t. Further, (3.9) holds with τ replaced by τ Λ N, and taking N > t the relation (3.9) follows. This completes the proof.
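    Theorem 3.5 can be observed numerically (a simulation sketch, not from the text): for the simple random walk martingale and τ = first time the walk hits {−a, b}, the stopped process X(n Λ τ) is again a martingale, so its expectation stays at E X(0) = 0 for every fixed n.

```python
import random

# Simple random walk stopped at tau = first hitting time of {-a, +b}.
# Theorem 3.5 (in discrete time) says X_{n ^ tau} is a martingale, hence
# E X_{n ^ tau} = E X_0 = 0 for each fixed n.
rng = random.Random(3)
a, b, n_fixed, trials = 3, 5, 40, 40_000

total = 0.0
for _ in range(trials):
    x = 0
    for _ in range(n_fixed):
        if x in (-a, b):         # tau has occurred; the path is frozen
            break
        x += rng.choice((-1, 1))
    total += x

mean_stopped = total / trials
assert abs(mean_stopped) < 0.1   # consistent with E X_{n ^ tau} = 0
```

    Note that τ itself is unbounded here, but n Λ τ is bounded by n, which is exactly the truncation used at the end of the proof above.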

    Remark. Let {X(t), 0 ≤ t ≤ T} be a stochastic process and let ℱt (0 ≤ t ≤ T) be an increasing family of σ-fields such that ℱ(X(λ), 0 ≤ λ ≤ t) ⊂ ℱt. If E|X(t)| < ∞ and

    E{X(t) | ℱs} = X(s) for all 0 ≤ s ≤ t ≤ T,

    then we say that X(t) is a martingale with respect to ℱt (0 ≤ t ≤ T). This clearly implies that {X(t), 0 ≤ t ≤ T} is a martingale. The proof of Theorem 3.5 shows that if X(t) is a right continuous martingale with respect to ℱt (0 ≤ t ≤ T), then so is the process X(t Λ τ); here τ is any random variable such that τ ≥ 0 and {τ ≤ t} ∈ ℱt for all 0 ≤ t ≤ T.

    PROBLEMS

    1. Let X(t), t ≥ 0, be a stochastic process and let {tj} be a set of separability. Prove that for a.a. ω:

    (a)

    (t > 0).

    (b)

    (Notice that, as a consequence, sup(a<t<b) X(t) and inf(a<t<b) X(t) are finite- or infinite-valued random variables.)

    (c) sup(a≤t≤b) |X(t)|, lim sup(t→s) X(t), and lim inf(t→s) X(t) are finite- or infinite-valued random variables.

    (d) For any positive numbers ε, δ,

    if and only if

    2. Let X(t), X′(t) (t ≥ 0) be ν-dimensional stochastic processes defined on probability spaces (Ω, ℱ, P), (Ω′, ℱ′, P′) respectively, having the same finite-dimensional distribution functions. Then for any sequence {tn}, tn ≥ 0, and any D ∈ ℬν,∞,

    P{(X(t1), X(t2), . . . ) ∈ D} = P′{(X′(t1), X′(t2), . . . ) ∈ D}.

    3. Suppose two ν-dimensional stochastic processes X(t) and X′(t) (t ≥ 0), defined on probability spaces (Ω, ℱ, P) and (Ω′, ℱ′, P′), respectively, are separable and have the same finite-dimensional distribution functions. If X(t) is continuous, then X′(t) is continuous.

    [Hint: Let ν = 1. It suffices to prove continuity for 0 ≤ t ≤ t*, t* < ∞. Let T = {tj} be a set of separability for both processes for 0 ≤ t ≤ t*. The continuity of X(t) implies that

    By Problem 2, with

    Now use Problem 1(d).]

    4. Let X(t) and X′(t) (t ≥ 0) be ν-dimensional stochastic processes defined on probability spaces (Ω, ℱ, P), (Ω′, ℱ′, P′) respectively, and having the same finite-dimensional distribution functions. Then:

    P{|X(t)| → ∞ as t → ∞} = P′{|X′(t)| → ∞ as t → ∞};

    P{|X(t)|/t^α → M as t → ∞} = P′{|X′(t)|/t^α → M as t → ∞},

    for any α > 0, M ≥ 0.

    [Hint: {|X(t)| → ∞ as t .]

    5. If two stochastic processes are stochastically equivalent, then they have the same finite-dimensional distribution functions.

    6. Let Xn, n ≥ 1 be independent random variables with E|Xn| < ∞, EXn = 0. Then the sequence of partial sums X1 + ··· + Xn is a martingale.

    7. Let X be a random variable on (Ω, ℱ, P) with E|X| < ∞, and let φ(x) be a convex function defined on the real line such that E|φ(X)| < ∞. Prove:

    φ(EX) ≤ Eφ(X) (Jensen's inequality);

    φ(E(X|𝒢)) ≤ E(φ(X)|𝒢) a.s., where 𝒢 is any σ-field contained in ℱ.

    8. If {Xn} is a martingale and φ(x) is a convex function on the real line, and if E|φ(Xn)| < ∞ for all n, then {φ(Xn)} is a submartingale.

    9. If {Xn} is a submartingale and φ(x) is a convex function and monotone nondecreasing on the range of the Xn, and if E|φ(Xn)| < ∞ for all n, then {φ(Xn)} is a submartingale.

    10. If τ is a bounded stopping time with respect to a right continuous martingale X(t), then EX(τ) = EX(0).

    11. Prove that a continuous process is continuous in probability. Give an example showing that the converse is not true in general.

    12. Let X(t), X′(t) (t ∈ I) be two stochastically equivalent and separable processes. Show that if one of them is continuous, then the other is also continuous, and

    P{X(t) = X′(t) for all t ∈ I} = 1.

    13. A stochastic process {X(t), t ∈ I} is said to be measurable if the function (t, ω) → X(t, ω) (from the product measure space into R1) is measurable. Show that a continuous stochastic process is measurable.

    2

    Markov Processes

    1. Construction of Markov processes

    If x(t) is a stochastic process, we denote by ℱ(x(λ), λ ∈ J) the smallest σ-field generated by the random variables x(t), t ∈ J.

    Definition. Let p(s, x, t, A) be a nonnegative function defined for 0 ≤ s < t < ∞, x ∈ Rν, A ∈ ℬν and satisfying:

    (i) p(s, x, t, A) is Borel measurable in x, for fixed s, t, A;

    (ii) p(s, x, t, A) is a probability measure in A, for fixed s, x, t;

    (iii) p satisfies the Chapman-Kolmogorov equation

    (1.1) p(s, x, t, A) = ∫(Rν) p(s, x, λ, dy) p(λ, y, t, A) if s < λ < t.

    Then we call p a Markov transition function, a transition probability function, or a transition probability.
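    The Chapman-Kolmogorov equation (1.1) can be verified numerically for the Gaussian (heat-kernel) transition function p(s, x, t, A) = P(N(x, t − s) ∈ A), the transition function of Brownian motion (Chapter 3). With A = (−∞, z], (1.1) says that composing the two Gaussian steps reproduces the one-step distribution function (a sketch, not from the text):

```python
import math

def Phi(u):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def density(y, mean, var):
    # Gaussian density with the given mean and variance
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

s, r, t = 0.0, 0.4, 1.0          # intermediate time r plays the role of lambda
x, z = 0.2, 1.0                  # starting point x, A = (-inf, z]

lhs = Phi((z - x) / math.sqrt(t - s))          # p(s, x, t, A)

# Right-hand side of (1.1): integrate p(s, x, r, dy) p(r, y, t, A) over y
# by the trapezoidal rule on a wide grid.
n, lo, hi = 4000, x - 10.0, x + 10.0
h = (hi - lo) / n
total = 0.0
for i in range(n + 1):
    y = lo + i * h
    w = 0.5 if i in (0, n) else 1.0
    total += w * density(y, x, r - s) * Phi((z - y) / math.sqrt(t - r))
rhs = total * h

assert abs(lhs - rhs) < 1e-6
```

    Analytically this is just the fact that the sum of independent N(0, r − s) and N(0, t − r) variables is N(0, t − s); condition (1.1) demands exactly this semigroup behavior of every transition function.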

    Theorem 1.1. Let p be a transition probability function. Then, for any s ≥ 0 and for any probability distribution π(dx) on (Rν, ℬν), there exists a ν-dimensional stochastic process {x(t), s ≤ t < ∞} such that:

    (1.2) P(x(s) ∈ A) = π(A) for any A ∈ ℬν;

    (1.3) P(x(t) ∈ A | x(λ), s ≤ λ ≤ s̄) = p(s̄, x(s̄), t, A) a.s., for any s ≤ s̄ < t, A ∈ ℬν.

    Proof. Let M[[0, ∞); Rν] be the space of all functions ω from [0, ∞) into Rν. Denote by ℱs,t the smallest σ-field with respect to which all the functions ω → ω(u), s ≤ u ≤ t, are measurable; that is, ℱs,t is the σ-field generated by all the sets {ω; ω(u) ∈ A} where s ≤ u ≤ t, A ∈ ℬν.

    Denote by ℱs the smallest σ-field with respect to which all functions ω(u) are measurable, s ≤ u < ∞. We take Ω = M[[0, ∞); Rν], ℱ = ℱs.

    We next define a family of finite-dimensional distribution functions with respect to Rν: if s = t0 < t1 < · · · < tn, xi ∈ Rν (0 ≤ i ≤ n), then

    (1.4) Ft0, t1, . . . , tn(x0, x1, . . . , xn) = ∫(y0<x0) π(dy0) ∫(y1<x1) p(t0, y0, t1, dy1) · · · ∫(yn<xn) p(tn−1, yn−1, tn, dyn).

    It is easy to verify that the Fn are in fact n-dimensional distribution functions with respect to Rν. Using the Chapman-Kolmogorov equation it follows that letting all the components of any one xi increase to ∞ produces the corresponding lower-dimensional function; thus the family of distribution functions in (1.4) is consistent.

    By the Kolmogorov construction, there exists a probability Ps on (Ω, ℱs) and a ν-dimensional stochastic process x(t), s ≤ t < ∞, whose finite-dimensional distribution functions are given by (1.4). Notice that x(t, ω) = ω(t).

    Now, if s = t0 < t1 < · · · < tn,

    (1.5) Ps{x(t0) ∈ B0, x(t1) ∈ B1, . . . , x(tn) ∈ Bn} = ∫(B0) π(dy0) ∫(B1) p(t0, y0, t1, dy1) · · · ∫(Bn) p(tn−1, yn−1, tn, dyn),

    provided each Bi is a rectangle. It is easily seen that both sides of (1.5) coincide also when the Bi are finite disjoint unions of rectangles; since both sides are measures in each Bi, it follows that (1.5) holds for all Bi ∈ ℬν.

    Since ℱ(x(λ), s ≤ λ ≤ s̄) is generated by the sets (1.7) below, it remains to prove that

    (1.6) ∫D p(s̄, x(s̄), t, A) dPs = Ps(D ∩ {x(t) ∈ A}) for all D ∈ ℱ(x(λ), s ≤ λ ≤ s̄).

    If we prove (1.6) for any set D of the form

    (1.7) D = {x(t0) ∈ B0, x(t1) ∈ B1, . . . , x(tn) ∈ Bn}, Bi ∈ ℬν,

    where s = t0 < t1 < · · · < tn ≤ s̄, then, by the countable additivity of both sides of (1.6), the assertion (1.6) will follow for any D ∈ ℱ(x(λ), s ≤ λ ≤ s̄).

    Set tn+1 = t, Bn+1 = A. When D is given by (1.7), the left-hand side of (1.6) becomes an iterated integral of the type appearing in (1.5). Therefore it suffices to show that

    (1.8)

    where f is a Borel measurable function. It clearly suffices to prove (1.8) when f is an indicator function χF. Taking B̃n = Bn ∩ F, (1.8) becomes

    (1.9) Ps{x(t0) ∈ B0, . . . , x(tn) ∈ B̃n, x(tn+1) ∈ Bn+1} = ∫(B0) π(dy0) · · · ∫(B̃n) p(tn−1, yn−1, tn, dyn) p(tn, yn, tn+1, Bn+1).

    Since the right-hand side is equal to

    ∫(B0) π(dy0) · · · ∫(B̃n) p(tn−1, yn−1, tn, dyn) ∫(Bn+1) p(tn, yn, tn+1, dyn+1),

    (1.9) is a consequence of (1.5). This completes the proof of (1.3). For any x ∈ Rν let πx be defined by

    πx(A) = 1 if x ∈ A, πx(A) = 0 if x ∉ A.

    Denote by Px,s the probability Ps when π = πx, and denote the corresponding expectation by Ex,s. Then we have the following situation:

    (a) ℱs,t ⊂ ℱs′,t′ if s′ ≤ s ≤ t ≤ t′.

    (b) There is a function x(t, ω) from [0, ∞) × Ω into Rν such that x(t, ω) is ℱs,t measurable in ω for each t ≥ s.

    (c) For each x ∈ Rν, s ≥ 0 there is a probability Px,s on (Ω, ℱs) such that

    (1.10) Px,s(x(s) = x) = 1;

    (1.11) Px,s(x(t + h) ∈ A | ℱs,t) = p(t, x(t), t + h, A) a.s. for any t ≥ s, h > 0, A ∈ ℬν.

    Definition. A collection {Ω, ℱs,t, x(t), Px,s} satisfying (a), (b), (c), where p is a transition probability function, is called a (ν-dimensional) Markov process corresponding to p.

    The property (1.11) is called the Markov property.

    In the model constructed above, ℱs,t is the smallest σ-field generated by x(u), s ≤ u ≤ t. Notice also that x(t, ω) = ω(t).

    The model constructed in Theorem 1.1 will be called the Kolmogorov model.

    Taking the expectation Ex,s of both sides of (1.11), we get (when t = s and t + h is denoted by t)

    (1.12) Ex,s f(x(t)) = ∫(Rν) f(y) p(s, x, t, dy)

    if f = χA. By approximation, this relation holds also for any bounded Borel measurable function f.

    We can therefore write (1.11) in the form

    (1.13) Ex,s{f(x(t + h)) | ℱs,t} = Ex(t),t f(x(t + h)) a.s.

    if f = χA. By approximation, this relation holds also for any bounded Borel measurable function f.

    We have the following generalization of (1.5) (in case π = πx):

    (1.14) Px,s{(x(t1), . . . , x(tn)) ∈ D} = ∫· · ·∫(D) p(s, x, t1, dy1) p(t1, y1, t2, dy2) · · · p(tn−1, yn−1, tn, dyn),

    where D belongs to ℬnν (the σ-field of Borel sets in Rν × Rν × · · · × Rν (n times)). Indeed, if we denote the left-hand and right-hand sides of (1.14) by P(D) and Q(D) respectively, then P(D) and Q(D) are measures in D which, by (1.5), coincide on rectangles D = B1 × · · · × Bn; hence they coincide for all D ∈ ℬnν.

    Theorem 1.2. If {Ω, ℱs,t, x(t), Px,s} is a ν-dimensional Markov process, then,

    for any 0 < h1 < · · · < hn and any D ∈ ℬnν,

    (1.15) Px,s{(x(t + h1), . . . , x(t + hn)) ∈ D | ℱs,t} = Px(t),t{(x(t + h1), . . . , x(t + hn)) ∈ D} a.s.

    Proof. Let 0 < h1 < h2 and let f1, f2 be bounded Borel measurable functions in Rν. Using (1.13), we have

    Ex,s{f1(x(t + h1)) f2(x(t + h2)) | ℱs,t} = Ex(t),t{f1(x(t + h1)) f2(x(t + h2))} a.s.

    Similarly one can prove by induction on n that, if 0 < h1 < · · · < hn,

    (1.16) Ex,s{f1(x(t + h1)) · · · fn(x(t + hn)) | ℱs,t} = Ex(t),t{f1(x(t + h1)) · · · fn(x(t + hn))} a.s.

    For any set D ∈ ℬnν its indicator function f(x1, · · · , xn) = χD(x1, · · · , xn) can be approximated by finite linear combinations of bounded measurable functions of the form f1(x1) · · · fn(xn) (each xi varies in Rν). Employing (1.16) we
