
620-332 Integral Transforms & Asymptotics
Semester 2, 2008
Paul A. Pearce
Department of Mathematics and Statistics
University of Melbourne
Subject Web Page:
http://www.ms.unimelb.edu.au/s620332
Office: Room 191, Mathematical Sciences Building
Email: P.Pearce@ms.unimelb.edu.au
HomePage: http://www.ms.unimelb.edu.au/pap
© 2006–2008 Materials prepared by Colin Thompson, Richard Brak and Paul A. Pearce
Course Information
Lectures: This course consists of 36 one hour lectures (three per week) and 11 practice
class hours (one per week). The lectures are in the Russell Love Theatre, Richard Berry
Building on Tuesdays and Thursdays at 9.00 a.m. and in Theatre 1, Old Geology on Fridays
at 1.00 p.m. The practice class is in the Russell Love Theatre, Richard Berry Building on
Wednesdays at 2.15 p.m.
Prerequisites: The prerequisites for this subject are Complex Analysis and Mathematical
Methods:
One of 620-221 or 620-252
One of 620-232 or 620-234
Assessment: A 45-minute written test held mid-semester (either 0% or 20%); a 3-hour
written examination in the examination period (80% or 100%). The relative weighting of
the examination and the mid-semester test will be chosen so as to maximise the final mark.
Course Materials
Recommended Textbooks:
Murray R. Spiegel, Theory and Problems of Complex Variables, Schaum Outline Series.
Murray R. Spiegel, Theory and Problems of Laplace Transforms, Schaum Outline Series.
Carl M. Bender and Steven A. Orszag, Advanced Mathematical Methods for Scientists and
Engineers: Asymptotic Methods and Perturbation Theory (Springer).
A. David Wunsch, Complex Variables with Applications, Second Edition (Addison-Wesley).
E. B. Saff and A. D. Snider, Fundamentals of Complex Analysis for Mathematics, Science
and Engineering (Prentice Hall).
Problem Sheets: There are six problem sheets for this course. These relate to the skills to
be acquired from the course. It is important to do most of the problems on these problem
sheets.
Subject Web Page: A web page will be maintained for this subject
(http://www.ms.unimelb.edu.au/s620332). Lecture notes and supplementary materi-
als will be available from this page. These files are password protected so you will need the
password announced in lectures.
Subject Description
This subject introduces methods of evaluating real integrals using complex analysis; and
develops methods for evaluating and inverting Fourier, Laplace and Mellin transforms, with
selected applications including summing series and computing asymptotic series.
Students should learn what an asymptotic expansion is and how it provides approxi-
mations; how to use Watson's lemma and the methods of Laplace, stationary phase and
steepest descents to evaluate asymptotic expressions; and how to find asymptotic solutions
to ordinary differential equations.
This subject demonstrates a range of important and useful techniques and their power
in solving problems in applied mathematics.
Complex analysis covers advanced applications of contour integration. Integral transforms
covers Fourier, Laplace and Mellin transforms; inversion by contour integration; convolution;
and applications.
Asymptotic expansions covers convergence and divergence; integrals with a large
parameter, Watson's Lemma, Laplace's method, steepest descent, stationary phase; and
the WKB method for ordinary differential equations.
620-332: Lecture Outline
Week 1. Complex Analysis
1. Analytic functions, Cauchy-Riemann equations
2. Cauchy-Goursat theorem, Cauchy's integral theorem, Morera's theorem
3. Taylor and Laurent series, singularities, poles
Week 2. Contour Integrals and Residue Calculus I
4. Residues, residue theorem
5. Trigonometric integrals, improper integrals
6. Fourier type integrals
Week 3. Contour Integrals and Residue Calculus II
7. Limiting contours, Jordan's lemma
8. Indented contours, Cauchy principal value integrals
9. Integrals with branch cuts
Week 4. Fourier Transforms I
10. Wave equation, Fourier transforms and series, sine and cosine transforms
11. Fourier's integral theorem, inverse Fourier transforms
12. Gaussians, Dirac delta
Week 5. Fourier Transforms II
13. Properties of Fourier transforms
14. Parseval's theorem, Fourier convolution theorem
15. Applications of Fourier transforms
Week 6. Laplace Transforms
16. Laplace transforms, inverse Laplace transforms
17. Properties of Laplace transforms, convolution, applications
18. MID-SEMESTER TEST
Week 7. Mellin Transforms
19. Mellin transforms, inverse Mellin transforms
20. Gamma and zeta functions
21. Applications of Mellin transforms
Week 8. Asymptotic Expansions
22. Landau symbols, divergent series
23. Asymptotic series
24. Watson's lemma
Week 9. Asymptotic Expansion of Integrals I
25. Laplace's method
26. Stirling's formula
27. Extensions of Laplace's method
Week 10. Asymptotic Expansion of Integrals II
28. Method of stationary phase
29. Method of steepest descents
30. Airy's integral
Week 11. Differential Equations and Asymptotics I
31. Method of dominant balance
32. Self adjoint form
33. Asymptotic expansion of solutions to ODEs
Week 12. Differential Equations and Asymptotics II
34. WKB method
35. Matching and validity of WKB solutions
36. Applications
Week 1: Complex Analysis
1. Analytic functions, Cauchy-Riemann equations
2. Cauchy-Goursat theorem, Cauchy's integral theorem, Morera's theorem
3. Taylor and Laurent series, singularities, poles
Brook Taylor (1685–1731), Colin Maclaurin (1698–1746)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
Analytic Functions
Definition: A complex function f is a rule that assigns to each $z = x + iy$ in the domain $X \subset \mathbb{C}$ a unique image $w = u + iv = f(z)$ in the codomain $Y \subset \mathbb{C}$
$$f : X \to Y;\qquad w = f(z);\qquad X, Y \subset \mathbb{C}$$
A complex function is determined by the functions of two real variables $u = u(x,y)$, $v = v(x,y)$.
Definition: Suppose f(z) is defined in an open disk about $z = z_0$. Then f(z) is differentiable at $z_0$ if the limit
$$\frac{df}{dz}(z_0) = f'(z_0) = \lim_{\Delta z \to 0} \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z}$$
exists. The function f(z) is analytic (regular or holomorphic) in an open region R if it has a derivative at every point of R. The function f(z) is entire if it is analytic in $\mathbb{C}$.
Example: The complex functions
$$e^z,\quad \sin z,\quad \cos z$$
are entire functions since the derivatives exist everywhere in the complex plane $\mathbb{C}$. Note that
$$w = f(z) = e^z = e^{x+iy} = e^x\cos y + i\,e^x\sin y$$
so that
$$u = u(x,y) = e^x\cos y,\qquad v = v(x,y) = e^x\sin y$$
Cauchy-Riemann Theorem
Suppose that $f(z) = u + iv$ is analytic with
$$f(z) = f\big(z(x,y)\big),\qquad z = z(x,y) = x + iy$$
Then by the chain rule
$$\frac{\partial f}{\partial x} = \frac{df}{dz}\Big(\frac{\partial z}{\partial x}\Big)_y = \frac{df}{dz},\qquad
\frac{\partial f}{\partial y} = \frac{df}{dz}\Big(\frac{\partial z}{\partial y}\Big)_x = \frac{df}{dz}\,i$$
It follows that
$$\frac{df}{dz} = \frac{\partial f}{\partial x} = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x}
= -i\frac{\partial f}{\partial y} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}
\quad\Longrightarrow\quad
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$$
Theorem 1 (Cauchy-Riemann Theorem)
Suppose $f(z) = u(x,y) + iv(x,y)$ is defined in an open region R containing $z_0$. If $u(x,y)$ and $v(x,y)$ and their first partial derivatives are continuous at $z_0$ (that is, $u(x,y)$ and $v(x,y)$ are $C^1$ at $(x_0,y_0)$) and satisfy the Cauchy-Riemann equations
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$$
at $z_0$, then f(z) is differentiable at $z_0$. Consequently, if $u(x,y)$ and $v(x,y)$ are $C^1$ and satisfy the Cauchy-Riemann equations at all points of R then f(z) is analytic in R. Moreover
$$f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}$$
Contour Integrals
Definition: Let $f(z) = u(x,y) + iv(x,y)$ be continuous in an open region containing the smooth curve $\Gamma$: $z(t) = x(t) + iy(t)$ where $t \in [a,b]$ and $x(t)$, $y(t)$ are $C^1$. Then we define the contour integral of f(z) along $\Gamma$ by
$$\int_\Gamma f(z)\,dz := \int_a^b f(z(t))\,\frac{dz}{dt}\,dt = \int_a^b (u+iv)\big(x'(t) + iy'(t)\big)\,dt$$
$$= \int_a^b \big[u(x(t),y(t))\,x'(t) - v(x(t),y(t))\,y'(t)\big]\,dt
+ i\int_a^b \big[u(x(t),y(t))\,y'(t) + v(x(t),y(t))\,x'(t)\big]\,dt$$
The real Riemann integrals exist because the integrands are continuous. In terms of line integrals in vector analysis
$$\int_\Gamma f(z)\,dz = \int_\Gamma (u+iv)(dx + i\,dy) = \int_\Gamma u\,dx - v\,dy + i\int_\Gamma v\,dx + u\,dy$$
A contour integral over the piecewise-smooth curve $\Gamma = \bigcup_{j=1}^n \Gamma_j$ is defined by
$$\int_\Gamma f(z)\,dz := \sum_{j=1}^n \int_{\Gamma_j} f(z)\,dz$$
A contour integral over a simple closed contour $\Gamma$ in the positive (counter-clockwise) sense is written
$$\oint_\Gamma f(z)\,dz = \text{closed contour integral}$$
Green's Theorem
Theorem 2 (Green's Theorem) If R is a closed region of the xy plane bounded by a simple (non-crossing) closed curve $\Gamma$ and if M and N are $C^1$ in R then
$$\oint_{\Gamma = \partial R} M\,dx + N\,dy = \iint_R \Big(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\Big)\,dx\,dy$$
[Figure: a simple closed curve $\Gamma$ bounding a region R in the xy plane.]
Proof of Green's Theorem
Proof: Assume first that R is convex (so that any straight lines parallel to the coordinate axes cut $\partial R$ in at most two points) with lower (AEB), upper (AFB), left (EAF) and right (EBF) curves $y = Y_1(x)$, $y = Y_2(x)$, $x = X_1(y)$ and $x = X_2(y)$.
[Figure: a convex region R with the bounding curves $Y_1, Y_2, X_1, X_2$ labelled.]
$$\oint_\Gamma M\,dx = \int_a^b M(x, Y_1(x))\,dx + \int_b^a M(x, Y_2(x))\,dx
= -\int_a^b \big[M(x,Y_2) - M(x,Y_1)\big]\,dx$$
$$= -\int_a^b \Big[\int_{y=Y_1(x)}^{y=Y_2(x)} \frac{\partial M(x,y)}{\partial y}\,dy\Big]\,dx
= -\iint_R \frac{\partial M}{\partial y}\,dx\,dy$$
$$\oint_\Gamma N\,dy = \int_e^f N(X_2, y)\,dy + \int_f^e N(X_1, y)\,dy
= \int_e^f \Big[\int_{x=X_1(y)}^{x=X_2(y)} \frac{\partial N(x,y)}{\partial x}\,dx\Big]\,dy
= \iint_R \frac{\partial N}{\partial x}\,dx\,dy$$
Adding gives the required result. If R is not convex (or is multi-connected), we can subdivide R into two or more convex regions by cuts and use additivity.
Cauchy's Theorem
Theorem 3 (Cauchy's Theorem) If f(z) is analytic in a simply-connected open domain D and $f'(z)$ is continuous in D then for any simple closed curve $\Gamma$ in D
$$\oint_\Gamma f(z)\,dz = 0$$
Proof: Let R be the union of $\Gamma$ and its interior so that $\Gamma = \partial R$. The result then follows from Green's theorem (which requires $\Gamma$ simple and $f'(z)$ continuous)
$$\oint_\Gamma f(z)\,dz = \oint_\Gamma (u+iv)(dx+i\,dy) = \oint_\Gamma u\,dx - v\,dy + i\oint_\Gamma v\,dx + u\,dy$$
$$= \iint_R \Big(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\Big)\,dx\,dy
+ i\iint_R \Big(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\Big)\,dx\,dy = 0$$
since by the Cauchy-Riemann equations
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$$
Theorem 4 (Cauchy-Goursat Theorem)
If f(z) is analytic in an open domain D then for any closed contour $\Gamma$ in D
$$\oint_\Gamma f(z)\,dz = 0$$
Proof: Difficult, see text. This removes the requirements that $\Gamma$ is simple and $f'(z)$ is continuous in R. Note $\Gamma = \partial R$ must be positively oriented so that R is always on the left. Also the region R need not be simply-connected provided all of the boundary of R is included in $\Gamma = \partial R$.
Deformation of Contours
Corollary 5 (Deformation of Contours) (i) If $\Gamma_1$, $\Gamma_2$ are two paths from $z_1$ to $z_2$ and f(z) is analytic in a simply-connected open region D containing $\Gamma_1$, $\Gamma_2$ and the region in between, then
$$\int_{\Gamma_1} f(z)\,dz = \int_{\Gamma_2} f(z)\,dz$$
is independent of the path in D. (ii) This result also obtains for closed paths by introducing cuts. These results are called deforming a contour through a region of analyticity.
Proof: (i) Suppose D is simply-connected so that $\Gamma = \Gamma_1 - \Gamma_2$ is a closed contour in D. Then path independence follows from Cauchy's theorem
$$0 = \oint_\Gamma f(z)\,dz = \oint_{\Gamma_1 - \Gamma_2} f(z)\,dz = \int_{\Gamma_1} f(z)\,dz - \int_{\Gamma_2} f(z)\,dz$$
(ii) Suppose $\Gamma_1$ encloses $\Gamma_2$, then we can introduce a cut to make the region R in between simply-connected. Then including all of the boundary of R and keeping R on the left gives
$$0 = \oint_{\partial R} f(z)\,dz = \Big(\oint_{\Gamma_1} + \oint_{-\Gamma_2}\Big) f(z)\,dz
= \oint_{\Gamma_1} f(z)\,dz - \oint_{\Gamma_2} f(z)\,dz$$
[Figure: the region R between $\Gamma_1$ and $\Gamma_2$ with a cut joining them.]
Cauchy Integral Formula
Theorem 6 (Cauchy Integral Formula) Let $\Gamma$ be a positively oriented simple closed contour. If f(z) is analytic in a simply-connected open domain D containing $\Gamma$ then
$$f(a) = \frac{1}{2\pi i}\oint_\Gamma \frac{f(z)}{z-a}\,dz,\qquad a \text{ inside } \Gamma$$
Proof: The integrand is analytic in $D\setminus\{a\}$. We introduce a small circle $\gamma$ of radius $\varepsilon > 0$ centered on $z = a$ and a cut $\Gamma_1$ so that the region R between $\gamma$ and $\Gamma$ is simply-connected. We parametrize $\gamma$ by $z - a = \varepsilon e^{it}$ with $z'(t) = i\varepsilon e^{it}$. By deforming the contour we find
$$\oint_\Gamma \frac{f(z)}{z-a}\,dz = \oint_\gamma \frac{f(z)}{z-a}\,dz$$
Since the LHS is independent of $\varepsilon$ we can take $\varepsilon \to 0$
$$\oint_\Gamma \frac{f(z)}{z-a}\,dz = \lim_{\varepsilon \to 0}\oint_\gamma \frac{f(z)}{z-a}\,dz
= i\lim_{\varepsilon \to 0}\int_0^{2\pi} f(a + \varepsilon e^{it})\,dt
= i\int_0^{2\pi} \lim_{\varepsilon \to 0} f(a + \varepsilon e^{it})\,dt = i\int_0^{2\pi} f(a)\,dt = 2\pi i\,f(a)$$
where we have used the continuity of f(z).
General Cauchy Integral Formula
Theorem 7 (General Cauchy Integral Formula) Let $\Gamma$ be a positively oriented simple closed contour. If f(z) is analytic in a simply-connected open domain D containing $\Gamma$ then
$$f^{(n)}(a) = \frac{n!}{2\pi i}\oint_\Gamma \frac{f(z)}{(z-a)^{n+1}}\,dz,\qquad a \text{ inside } \Gamma,\ n = 0, 1, 2, \ldots$$
Proof: See text. This result is equivalent to
$$\frac{d^n}{da^n} f(a) = \frac{d^n}{da^n}\Big[\frac{1}{2\pi i}\oint_\Gamma \frac{f(z)}{z-a}\,dz\Big]
= \frac{1}{2\pi i}\oint_\Gamma \frac{\partial^n}{\partial a^n}\frac{f(z)}{z-a}\,dz$$
Corollary 8 (Analyticity of Derivatives)
If f(z) is analytic in a simply-connected open domain D, then $f'(z), f''(z), \ldots, f^{(n)}(z), \ldots$ are all analytic in D.
Proof: Follows from the general Cauchy integral formula since the derivatives are given by
$$f^{(n)}(z) = \frac{n!}{2\pi i}\oint_\Gamma \frac{f(w)}{(w-z)^{n+1}}\,dw$$
for a suitably chosen closed contour $\Gamma$.
Morera's Theorem
Theorem 9 (Morera's Theorem) If a function f(z) is continuous in a simply-connected open domain D and if for every closed contour $\Gamma$ in D
$$\oint_\Gamma f(z)\,dz = 0$$
then f(z) is analytic in D.
This is essentially the converse of Cauchy's theorem.
Proof: We have $\oint_\Gamma f(w)\,dw = 0$ for all $\Gamma$ in D. Using deformation of contours we deduce that
$$F(z) = \int_{z_0}^z f(w)\,dw$$
evaluated along any curve joining two arbitrary points $z_0$ and z in D, is a well defined analytic function, which implies from the previous Corollary that $f(z) = F'(z)$ is also analytic in D.
Taylor's Theorem
Theorem 10 (Taylor's Theorem)
If f(z) is analytic in the disk $|z-a| < R$ then the Taylor series converges to f(z) for all z in this disk. The convergence is uniform in any closed subdisk $|z-a| \le R'' < R$. Specifically,
$$f(z) = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(z-a)^k + R_n(z)$$
where the remainder satisfies
$$\sup_{|z-a|\le R''} |R_n(z)| \to 0 \quad\text{as } n \to \infty$$
[Figure: the disk $|z-a| < R$ with the circle C of radius $R'$ and the subdisk of radius $R''$.]
Proof of Taylor's Theorem
Proof: Let C be the circle $|z-a| = R'$ with $R'' < R' < R$. Then by Cauchy's integral formula with $n = 0, 1, 2, \ldots$ for z inside C
$$f(z) = \frac{1}{2\pi i}\oint_C \frac{f(w)}{w-z}\,dw,\qquad
\frac{f^{(n)}(a)}{n!} = \frac{1}{2\pi i}\oint_C \frac{f(w)}{(w-a)^{n+1}}\,dw$$
If we substitute
$$\frac{1}{w-z} = \frac{1}{(w-a) - (z-a)} = \frac{1}{w-a}\,\frac{1}{1 - \frac{z-a}{w-a}}
= \frac{1}{w-a}\Bigg[1 + \frac{z-a}{w-a} + \frac{(z-a)^2}{(w-a)^2} + \cdots + \frac{(z-a)^n}{(w-a)^n}
+ \frac{\frac{(z-a)^{n+1}}{(w-a)^{n+1}}}{1 - \frac{z-a}{w-a}}\Bigg]$$
into the first formula, integrate term-by-term and use the second formula we find
$$f(z) = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(z-a)^k + R_n(z)$$
where the remainder $R_n(z)$ is uniformly bounded on $|z-a| \le R''$ by
$$|R_n(z)| = \Bigg|\frac{1}{2\pi i}\oint_C \frac{f(w)}{w-z}\,\frac{(z-a)^{n+1}}{(w-a)^{n+1}}\,dw\Bigg|
\le \frac{2\pi R'}{2\pi(R'-R'')}\Big(\frac{R''}{R'}\Big)^{n+1}\max_{w\in C}|f(w)| \to 0
\quad\text{as } n \to \infty$$
Power Series
Definition: A series of the form
$$\sum_{n=0}^\infty a_n (z-a)^n$$
with coefficients $a_n \in \mathbb{C}$ is called a power series around $z = a$.
Theorem 11 (Radius of Convergence)
A power series has a radius of convergence R such that either:
(i) The series converges only at a and $R = 0$.
(ii) The series converges absolutely on $|z-a| < R$ and diverges if $|z-a| > R > 0$.
(iii) The series converges for all z and $R = \infty$.
Proof: See text.
Theorem 12 (Analytic Power Series)
A power series converges to an analytic function inside its circle of convergence $|z-a| = R$.
Proof: See text.
A power series may converge at some, all or no points on the circle of convergence $|z-a| = R$. The largest disk on which a power series converges is called its disk of convergence. This is either an open or closed disk of the form $|z-a| < R$ or $|z-a| \le R$.
Laurent Series
Definition: A series of the form
$$\sum_{n=-\infty}^\infty a_n (z-a)^n = \sum_{n=0}^\infty a_n (z-a)^n + \sum_{n=1}^\infty a_{-n} (z-a)^{-n}$$
convergent in some open annulus $r < |z-a| < R$ is called a Laurent series around $z = a$.
Theorem 13 (Laurent Theorem)
(i) Suppose f(z) is analytic in an open annulus $r < |z-a| < R$ and let C be any circle with center at a lying in this annulus. Then the Laurent series with coefficients
$$a_n = \frac{1}{2\pi i}\oint_C \frac{f(w)}{(w-a)^{n+1}}\,dw,\qquad n = 0, \pm 1, \pm 2, \ldots,\quad a \text{ inside } C$$
converges uniformly to f(z) in any closed subannulus $r < \rho_1 \le |z-a| \le \rho_2 < R$.
(ii) Conversely, if $r < R$ and
$$\sum_{n=0}^\infty a_n (z-a)^n \text{ converges for } |z-a| < R,\qquad
\sum_{n=1}^\infty a_{-n} (z-a)^{-n} \text{ converges for } |z-a| > r$$
then the Laurent series defines a unique analytic function f(z) in $r < |z-a| < R$.
Proof: Similar to Taylor's theorem, see text.
If f(z) is analytic in $|z-a| < R$ then by Cauchy's theorem $a_n = 0$ for $n \le -1$ and the series reduces to the Taylor series. In practice, the coefficients are typically obtained by manipulating Taylor expansions about $z - a = 0$ and $z - a = \infty$.
Isolated Singularities
Definition: Let f(z) have an isolated singularity at $z = a$ and Laurent expansion
$$f(z) = \sum_{n=-\infty}^\infty a_n (z-a)^n,\qquad 0 < |z-a| < R$$
(i) If $a_n = 0$ for all $n < 0$ then we say $z = a$ is a removable singularity.
(ii) If $a_{-m} \ne 0$ for some $m > 0$ but $a_n = 0$ for all $n < -m$, we say $z = a$ is a pole of order m.
(iii) If $a_{-n} \ne 0$ for an infinite number of $n > 0$, we say $z = a$ is an essential singularity.
Examples: From Laurent expansions we find
(i) $f(z) = \sin z/z$ has a removable singularity at $z = 0$.
(ii) $f(z) = e^z/z^3$ has a pole of order 3 at $z = 0$.
(iii) $f(z) = e^{1/z}$ has an essential singularity at $z = 0$:
$$e^{1/z} = 1 + \frac{1}{z} + \frac{1}{2!\,z^2} + \frac{1}{3!\,z^3} + \cdots$$
Theorem 14 (Isolated Singularities)
If f(z) has an isolated singularity at $z = a$ then
(i) $z = a$ is a removable singularity $\Leftrightarrow$ $|f(z)|$ is bounded near $z = a$ $\Leftrightarrow$ f(z) has a limit as $z \to a$ $\Leftrightarrow$ f(z) can be redefined at $z = a$ so that f(z) is analytic at $z = a$.
(ii) $z = a$ is a pole $\Leftrightarrow$ $|f(z)| \to \infty$ as $z \to a$ $\Leftrightarrow$ $f(z) = g(z)(z-a)^{-m}$ with $m > 0$ and $g(a) \ne 0$.
(iii) $z = a$ is an essential singularity $\Leftrightarrow$ $|f(z)|$ is neither bounded nor goes to infinity as $z \to a$ $\Leftrightarrow$ f(z) assumes every complex value, with possibly one exception, in every neighbourhood of $z = a$.
Proof: See text; (i) is easy, (ii) is similar to the previous theorem, (iii) is very hard.
Week 2: Residue Calculus I
4. Residues, residue theorem
5. Trigonometric integrals, improper integrals
6. Fourier type integrals
Augustin Louis Cauchy (1789–1857), Edouard Jean-Baptiste Goursat (1858–1936)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
Residues
Definition: If f(z) has an isolated singularity at the point $z = a$, so it is analytic in a punctured neighbourhood of a, then the coefficient $a_{-1}$ of $(z-a)^{-1}$ in the Laurent series for f(z) around a is called the residue of f(z) at a and denoted $\mathrm{Res}(f; a)$
$$f(z) = \sum_{n=-\infty}^\infty a_n (z-a)^n,\qquad \mathrm{Res}(a) = \mathrm{Res}(f; a) = a_{-1}$$
Lemma 15 (Order m Pole)
If f(z) has a pole of order m at $z = a$ then
$$\mathrm{Res}(f; a) = \lim_{z\to a}\frac{1}{(m-1)!}\frac{d^{m-1}}{dz^{m-1}}\big[(z-a)^m f(z)\big]$$
If f(z) has a simple pole ($m = 1$) at $z = a$ then
$$\mathrm{Res}(f; a) = \lim_{z\to a}(z-a) f(z)$$
Proof: Starting with the Laurent expansion
$$f(z) = \frac{a_{-m}}{(z-a)^m} + \cdots + \frac{a_{-2}}{(z-a)^2} + \frac{a_{-1}}{z-a} + a_0 + a_1(z-a) + \cdots$$
$$\Longrightarrow\quad \frac{d^{m-1}}{dz^{m-1}}\big[(z-a)^m f(z)\big]
= (m-1)!\,a_{-1} + m!\,a_0(z-a) + \cdots \to (m-1)!\,a_{-1} \quad\text{as } z \to a$$
Example of Residues
Example: Find the residues of $f(z) = \dfrac{z^2}{(1-z)^2(2-z)}$.
Solution: There is a simple pole at $z = 2$ and a double pole at $z = 1$
$$\mathrm{Res}(2) = \lim_{z\to 2}(z-2)\frac{z^2}{(1-z)^2(2-z)} = \lim_{z\to 2}\frac{-z^2}{(1-z)^2} = -4$$
$$\mathrm{Res}(1) = \lim_{z\to 1}\frac{d}{dz}\Big[(z-1)^2\frac{z^2}{(1-z)^2(2-z)}\Big]
= \lim_{z\to 1}\frac{d}{dz}\Big[\frac{z^2}{2-z}\Big]
= \lim_{z\to 1}\Big[\frac{2z}{2-z} + \frac{z^2}{(2-z)^2}\Big] = 2 + 1 = 3$$
Exercise: If $f(z) = P(z)/Q(z)$ is rational and the polynomial Q(z) has a simple zero at $z = a$, use l'Hôpital's rule to show that
$$\mathrm{Res}(f; a) = \frac{P(a)}{Q'(a)}$$
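The two residues above can be checked symbolically. A minimal sketch, assuming Python with SymPy is available (not part of the course toolkit):

```python
# Symbolic check of the residues of z^2 / ((1-z)^2 (2-z)).
import sympy as sp

z = sp.symbols('z')
f = z**2 / ((1 - z)**2 * (2 - z))

print(sp.residue(f, z, 2))   # simple pole at z = 2: expected -4
print(sp.residue(f, z, 1))   # double pole at z = 1: expected  3
```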
Residue Theorem
Theorem 16 (Residue Theorem)
If C is a simple closed contour and f(z) is analytic inside and on C except at the isolated points $z_1, z_2, \ldots, z_n$ inside C then
$$\oint_C f(z)\,dz = 2\pi i\sum_{k=1}^n \mathrm{Res}(f; z_k)$$
Proof: Let $C_k$ be small circles about each isolated singularity $z = z_k$ so that on $C_k$ we have the Laurent expansion
$$f(z) = \sum_{m=-\infty}^\infty a_m^{(k)} (z - z_k)^m$$
Then by deformation of contours
$$\oint_C f(z)\,dz = \sum_{k=1}^n \oint_{C_k} f(z)\,dz
= \sum_{k=1}^n \oint_{C_k} \sum_{m=-\infty}^\infty a_m^{(k)} (z - z_k)^m\,dz$$
$$= \sum_{k=1}^n \sum_{m=-\infty}^\infty a_m^{(k)} \oint_{C_k} (z - z_k)^m\,dz
= 2\pi i\sum_{k=1}^n a_{-1}^{(k)} = 2\pi i\sum_{k=1}^n \mathrm{Res}(f; z_k)$$
Term-by-term integration is justified by the uniform convergence of the Laurent expansions.
[Figure: the contour C enclosing the singularities $z_1, z_2, z_3$ surrounded by small circles $C_1, C_2, C_3$.]
Residues and Trigonometric Integrals
Residue theory can be used to evaluate many types of integrals. For example,
$$I = \int_0^{2\pi} U(\cos t, \sin t)\,dt$$
where $U(x,y)$ is a continuous real rational function of x, y on $[-1,1]\times[-1,1]$.
Example: Evaluate $\displaystyle I = \int_0^{\pi} \frac{dt}{2 - \cos t} = \frac{1}{2}\int_0^{2\pi} \frac{dt}{2 - \cos t}$
Solution: Let $z = e^{it}$ so that $dz = ie^{it}\,dt = iz\,dt$ and $dt = dz/(iz)$. Then $\cos t = \frac{1}{2}(z + z^{-1})$ and
$$2I = \oint_{|z|=1} \frac{1}{2 - \frac{1}{2}(z + z^{-1})}\,\frac{dz}{iz}
= -\frac{2}{i}\oint_{|z|=1} \frac{dz}{z^2 - 4z + 1}$$
The integrand has simple poles at $z_\pm = 2 \pm \sqrt{3}$ but only $z_-$ lies inside the unit circle, with residue
$$\mathrm{Res}(z_-) = \lim_{z\to z_-}\frac{z - z_-}{(z - z_-)(z - z_+)} = \lim_{z\to z_-}\frac{1}{z - z_+} = -\frac{1}{2\sqrt{3}}$$
Hence
$$2I = -\frac{2}{i}\,2\pi i\,\Big(-\frac{1}{2\sqrt{3}}\Big) = \frac{2\pi}{\sqrt{3}}
\quad\Longrightarrow\quad I = \frac{\pi}{\sqrt{3}}$$
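As an independent numerical check of this example, the following sketch (assuming Python with NumPy and SciPy) evaluates the integral by quadrature and compares it with $\pi/\sqrt{3}$:

```python
# Numerical check that the trigonometric integral equals pi/sqrt(3).
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda t: 1.0 / (2.0 - np.cos(t)), 0.0, np.pi)
print(value, np.pi / np.sqrt(3))   # both approximately 1.8138
```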
Another Trigonometric Integral
Exercise: For real $p \ne \pm 1$, evaluate the real definite trigonometric integral
$$I = \int_0^{2\pi} \frac{dt}{1 - 2p\cos t + p^2}$$
Solution: Let $z = e^{it}$ so that $dz = ie^{it}\,dt = iz\,dt$ and $dt = dz/(iz)$. Then $\cos t = \frac{1}{2}(z + z^{-1})$ and
$$I = \oint_{|z|=1} \frac{dz}{iz\big[1 + p^2 - p(z + \frac{1}{z})\big]}
= \frac{i}{p}\oint_{|z|=1} \frac{dz}{(z - p)\big(z - \frac{1}{p}\big)}$$
There are now two situations to consider:
(i) $|p| > 1$: In this case there is a simple pole inside $|z| = 1$ at $z = 1/p$. So by the residue theorem
$$I = \frac{i}{p}\,2\pi i\,\mathrm{Res}\Big(\frac{1}{p}\Big)
= -\frac{2\pi}{p}\,\frac{1}{\frac{1}{p} - p} = \frac{2\pi}{p^2 - 1}$$
(ii) $|p| < 1$: In this case there is a simple pole at $z = p$ so
$$I = \frac{i}{p}\,2\pi i\,\mathrm{Res}(p) = -\frac{2\pi}{p}\,\frac{1}{p - \frac{1}{p}} = \frac{2\pi}{1 - p^2}$$
The above two results can be combined into the single formula
$$\int_0^{2\pi} \frac{dt}{1 - 2p\cos t + p^2} = \frac{2\pi}{|1 - p^2|}\qquad\text{when } p \ne \pm 1$$
Improper Integrals
Definition: If f(x) is continuous over $[a, \infty)$, its improper integral is defined by
$$\int_a^\infty f(x)\,dx := \lim_{R\to\infty}\int_a^R f(x)\,dx$$
provided this limit exists. If f(x) is continuous on $(-\infty, \infty)$ we define the double improper integral
$$\int_{-\infty}^\infty f(x)\,dx := \lim_{R\to\infty}\int_0^R f(x)\,dx + \lim_{R'\to\infty}\int_{-R'}^0 f(x)\,dx$$
provided both limits exist. The (Cauchy) principal value of the integral is defined by
$$\mathrm{PV}\int_{-\infty}^\infty f(x)\,dx := \lim_{R\to\infty}\int_{-R}^R f(x)\,dx$$
provided this limit exists.
If the double improper integral exists it must equal its principal value, but the principal value integral can exist when the double integral does not exist.
Improper Integrals and Residues
Lemma 17 Let $C_R$ be the semi-circular contour in the upper-half plane from $z = R$ to $z = -R$. If
$$|f(z)| \le \frac{K}{|z|^2},\qquad |z| \text{ large}$$
then
$$\lim_{R\to\infty}\Big|\int_{C_R} f(z)\,dz\Big| = 0$$
Proof: We bound the integral
$$\Big|\int_{C_R} f(z)\,dz\Big| \le \int_{C_R} |f(z)|\,|dz|
\le \frac{K}{R^2}\,\mathrm{Length}(C_R) = \frac{K}{R^2}\,\pi R = \frac{\pi K}{R} \to 0 \quad\text{as } R \to \infty$$
[Figure: the semi-circle $C_R$ in the upper half plane.]
Principal Value Integrals
Theorem 18 (Principal Value Integrals)
Let $f(z) = P(z)/Q(z)$ be rational and analytic on the real axis, so it is analytic in the upper half plane except at isolated poles. If in addition
$$\deg Q \ge 2 + \deg P$$
so that f(z) satisfies the previous lemma, then residue theory can be used to evaluate the principal value integral
$$\mathrm{PV}\int_{-\infty}^\infty f(x)\,dx = \lim_{R\to\infty}\int_{-R}^R f(x)\,dx
= \lim_{R\to\infty}\oint_{\Gamma_R} f(z)\,dz = 2\pi i\sum_{k=1}^n \mathrm{Res}(f; z_k)$$
by closing the contour in the upper half plane and summing over the residues of poles in the upper half plane.
[Figure: the closed contour $\Gamma_R$ consisting of the segment $[-R,R]$ and the semi-circle $C_R$.]
A Principal Value Integral
Example: Use the previous theorem to compute the principal value integral
$$I = \mathrm{PV}\int_{-\infty}^\infty \frac{x^2\,dx}{(x^2+1)^2}$$
Solution: The integrand $f(z) = P(z)/Q(z)$ is rational with $\deg P = 2$, $\deg Q = 4$ and double poles at $z = \pm i$. The previous theorem therefore applies. The residue at $z = i$ is
$$\mathrm{Res}(f; i) = \lim_{z\to i}\frac{d}{dz}\big[(z-i)^2 f(z)\big]
= \lim_{z\to i}\frac{d}{dz}\Big[\frac{z^2}{(z+i)^2}\Big]
= \lim_{z\to i}\Big[\frac{2z}{(z+i)^2} - \frac{2z^2}{(z+i)^3}\Big]
= \frac{2i}{4i^2} + \frac{2}{8i^3} = \frac{1}{4i}$$
Therefore
$$\oint_{\Gamma_R} f(z)\,dz = 2\pi i\,\frac{1}{4i} = \frac{\pi}{2}\qquad\text{for all } R > 1$$
and so
$$\oint_{\Gamma_R} f(z)\,dz = \int_{-R}^R f(x)\,dx + \int_{C_R} f(z)\,dz = \frac{\pi}{2}$$
It follows that
$$\lim_{R\to\infty}\int_{-R}^R f(x)\,dx = \frac{\pi}{2} - \lim_{R\to\infty}\int_{C_R} f(z)\,dz = \frac{\pi}{2}$$
This last result follows by the previous lemma. Specifically, for $|z|$ large, we have using the triangle inequality $|z^2 + 1| \ge |z|^2 - 1 \ge \frac{1}{2}|z|^2$ and so
$$|f(z)| = \Big|\frac{z^2}{(z^2+1)^2}\Big| = \frac{|z|^2}{|z^2+1|^2}
\le \frac{|z|^2}{(\frac{1}{2}|z|^2)^2} \le \frac{4}{|z|^2}$$
Note that in these cases the double improper integral exists so that
$$\mathrm{PV}\int_{-\infty}^\infty \frac{x^2\,dx}{(x^2+1)^2}
= \int_{-\infty}^\infty \frac{x^2\,dx}{(x^2+1)^2}
= 2\int_0^\infty \frac{x^2\,dx}{(x^2+1)^2}$$
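A quick numerical confirmation of the value $\pi/2$, as a sketch assuming Python with NumPy and SciPy:

```python
# Numerical check that the principal value integral equals pi/2.
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: x**2 / (x**2 + 1)**2, -np.inf, np.inf)
print(value, np.pi / 2)   # both approximately 1.5708
```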
Fourier Integrals
Fourier integrals or transforms are integrals of the form
$$I = \int_{-\infty}^\infty e^{ikx} f(x)\,dx,\qquad k \in \mathbb{R}$$
Theorem 19 (Fourier Integrals) Let $C_R$ be the semi-circle of radius R in the upper half plane for $k > 0$ and lower half plane for $k < 0$. Then
$$\mathrm{PV}\int_{-\infty}^\infty e^{ikx} f(x)\,dx = \lim_{R\to\infty}\int_{-R}^R e^{ikx} f(x)\,dx
= \mathrm{sgn}(k)\,2\pi i\sum_{j=1}^n \mathrm{Res}\big(e^{ikz} f(z); z_j\big)$$
holds provided f(z) is such that
$$\lim_{R\to\infty}\int_{C_R} e^{ikz} f(z)\,dz = 0\qquad(\ast)$$
Proof: (i) $k > 0$: On $C_R$ we have $z = Re^{it}$, $0 \le t \le \pi$, so that
$$e^{ikz} = \exp(ikRe^{it}) = \exp[ikR(\cos t + i\sin t)] = \exp(-kR\sin t)\exp(ikR\cos t)$$
Hence this is exponentially small for large R, $k > 0$ and $0 < t < \pi$ by the inequality
$$|e^{ikz}| = \exp(-kR\sin t) = \big(e^{-k\sin t}\big)^R\qquad\text{where } e^{-k\sin t} < 1$$
Hence $(\ast)$ holds provided f(z) does not grow exponentially.
(ii) $k < 0$: The inequality holds for $k < 0$ provided $\pi < t < 2\pi$. The contour is closed in the lower half plane and the sum is now over residues of poles in the lower half plane. In the lower half plane the contour integral is clockwise, which accounts for the sign in $\mathrm{sgn}(k)$.
Fourier Example
Exercise: Evaluate the Fourier integral
$$I = \int_{-\infty}^\infty e^{ikx}(a^2+x^2)^{-1}\,dx,\qquad a > 0,\ k \in \mathbb{R}$$
Solution: The function $f(z) = (a^2+z^2)^{-1}$ has simple poles at $z = \pm ia$.
(i) $k \ge 0$: Close the contour in the upper half plane. Provided $R > a$, we have
$$\Big|\int_{C_R} e^{ikz} f(z)\,dz\Big| \le \int_{C_R} |(a^2+z^2)^{-1}|\,|dz|
\le (R^2 - a^2)^{-1}\int_{C_R} |dz| = \pi R(R^2 - a^2)^{-1} \to 0 \quad\text{as } R \to \infty$$
It follows that
$$\int_{-\infty}^\infty e^{ikx}(a^2+x^2)^{-1}\,dx = 2\pi i\,\mathrm{Res}(e^{ikz} f(z); ia)
= 2\pi i\,\frac{e^{ikz}}{z+ia}\Big|_{z=ia} = \frac{\pi}{a}\,e^{-ka}$$
(ii) $k \le 0$: Closing the contour in the lower half plane gives
$$\int_{-\infty}^\infty e^{ikx}(a^2+x^2)^{-1}\,dx = -2\pi i\,\mathrm{Res}(e^{ikz} f(z); -ia)
= -2\pi i\,\frac{e^{ikz}}{z-ia}\Big|_{z=-ia} = \frac{\pi}{a}\,e^{ka}$$
(iii) The two results combine into the single result
$$\int_{-\infty}^\infty e^{ikx}(a^2+x^2)^{-1}\,dx = \frac{\pi}{a}\,e^{-|k|a}$$
Notice that taking real and imaginary parts gives
$$\int_{-\infty}^\infty \frac{\cos(kx)}{a^2+x^2}\,dx = \frac{\pi}{a}\,e^{-|k|a},\qquad
\int_{-\infty}^\infty \frac{\sin(kx)}{a^2+x^2}\,dx = 0$$
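The cosine form of this result is easy to check numerically. A sketch, assuming Python with NumPy and SciPy (the oscillatory weight option of `quad` is used for accuracy):

```python
# Numerical check of int cos(kx)/(a^2+x^2) dx = (pi/a) exp(-|k| a).
import numpy as np
from scipy.integrate import quad

a, k = 2.0, 1.5
half, _ = quad(lambda x: 1.0 / (a**2 + x**2), 0, np.inf, weight='cos', wvar=k)
value = 2 * half                       # integrand is even in x
print(value, (np.pi / a) * np.exp(-abs(k) * a))
```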
Week 3: Residue Calculus II
7. Limiting contours, Jordan's lemma
8. Indented contours, Cauchy principal value integrals
9. Integrals with branch cuts
Marie Ennemonde Camille Jordan (1838–1922), Augustin Louis Cauchy (1789–1857)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
Uniformity on an Arc
Definition: (Uniformity on an Arc) If along a circular arc of radius R, $|f(z)| \le M_R$ where $M_R$ does not depend on (the polar angle) $\theta$, and $M_R \to 0$ as $R \to \infty$ (or $R \to 0$), we say that f(z) tends uniformly to zero on $C_R = \{z : |z| = R\}$ as $R \to \infty$ (or $R \to 0$).
Example: Consider the function
$$f(z) = \frac{z}{z^2+1}\qquad\text{on } C_R = \{z : |z| = R\}$$
We deduce that
$$|f(z)| \le \begin{cases} \dfrac{R}{R^2 - 1}, & R > 1 \\[2mm] \dfrac{R}{1 - R^2}, & R < 1 \end{cases}$$
It follows that f(z) tends uniformly to zero when either $R \to \infty$ or $R \to 0$.
In general any rational function whose denominator is of higher degree than the numerator tends uniformly to zero as $R \to \infty$.
The polar angle can be restricted to a closed interval $\theta_0 \le \theta \le \theta_0 + \alpha$. We then require
$$M_R = \max_{z = Re^{i\theta},\ \theta_0 \le \theta \le \theta_0 + \alpha} |f(z)| \to 0 \quad\text{as } R \to \infty$$
Limiting Contours I
Theorem 20 (Limiting Contours I) If on $C_R$, $z f(z)$ tends uniformly to zero as $R \to \infty$ then
$$\lim_{R\to\infty}\int_{C_R} f(z)\,dz = 0$$
on the circular arc of radius R subtending an angle $\alpha$ at the origin
$$C_R = \{z : |z| = R,\ \theta_0 \le \theta \le \theta_0 + \alpha\}$$
Proof: We have
$$|z f(z)| = R|f(z)| \le M_R$$
where $M_R$ is independent of $\theta$ and $M_R \to 0$ as $R \to \infty$. It follows that
$$0 \le \Big|\int_{C_R} f(z)\,dz\Big| \le \int_{C_R} |f(z)|\,|dz|
\le \frac{M_R}{R}\int_{C_R} |dz| = \alpha M_R \to 0 \quad\text{as } R \to \infty$$
since on $C_R$, $z = Re^{i\theta}$, $dz = Rie^{i\theta}\,d\theta$, and hence
$$\int_{C_R} |dz| = \int_{\theta_0}^{\theta_0+\alpha} R\,d\theta = \alpha R$$
Limiting Contours II: Jordan's Lemma
Theorem 21 (Jordan's Lemma) If f(z) tends uniformly to zero on $C_R = \{z : z = Re^{i\theta},\ \theta_0 \le \theta \le \theta_1\}$ as $R \to \infty$ then for $k > 0$:
$$\lim_{R\to\infty}\int_{C_R} e^{ikz} f(z)\,dz = 0\qquad C_R \text{ in 1st, 2nd quadrants}$$
$$\lim_{R\to\infty}\int_{C_R} e^{-ikz} f(z)\,dz = 0\qquad C_R \text{ in 3rd, 4th quadrants}$$
$$\lim_{R\to\infty}\int_{C_R} e^{kz} f(z)\,dz = 0\qquad C_R \text{ in 2nd, 3rd quadrants}$$
$$\lim_{R\to\infty}\int_{C_R} e^{-kz} f(z)\,dz = 0\qquad C_R \text{ in 1st, 4th quadrants}$$
Proof: We prove the first case. The other cases are similar. Note that on $C_R$
$$|dz| = R\,d\theta\qquad\text{and}\qquad |f(z)| \le M_R$$
It follows that
$$0 \le \Big|\int_{C_R} e^{ikz} f(z)\,dz\Big| \le \int_{C_R} |e^{ikz}|\,|f(z)|\,|dz|
\le R M_R\int_{\theta_0}^{\theta_1} e^{-kR\sin\theta}\,d\theta$$
$$\le R M_R\int_0^{\pi} e^{-kR\sin\theta}\,d\theta = 2R M_R\int_0^{\pi/2} e^{-kR\sin\theta}\,d\theta
\le 2R M_R\int_0^{\pi/2} e^{-2kR\theta/\pi}\,d\theta = \frac{\pi M_R}{k}\big(1 - e^{-kR}\big) \to 0 \quad\text{as } R \to \infty$$
Here we used $\sin(\frac{\pi}{2} - \theta) = \sin(\frac{\pi}{2} + \theta)$ and the inequality $\sin\theta \ge 2\theta/\pi$ for $0 \le \theta \le \pi/2$.
Limiting Contours III
Theorem 22 (Limiting Contours III) If on a circular arc $C_r$ of radius r and centre a, $|(z-a) f(z)|$ tends uniformly to zero as $r \to 0$ then
$$\lim_{r\to 0}\int_{C_r} f(z)\,dz = 0$$
Proof: On $C_r = \{z : z - a = re^{i\theta},\ \theta_0 \le \theta \le \theta_0 + \alpha\}$ we have
$$|(z-a) f(z)| = r|f(z)| \le M_r$$
where $M_r \to 0$ as $r \to 0$. It follows that
$$\Big|\int_{C_r} f(z)\,dz\Big| \le \frac{M_r}{r}\int_{C_r} |dz| = \alpha M_r$$
which gives the required result.
Limiting Contours IV
Theorem 23 (Limiting Contours IV) If f(z) has a simple pole at $z = a$ with residue $\mathrm{Res}(a)$ and if $C_r$ is a circular arc of radius r and centre a subtending an angle $\alpha$ at $z = a$ then
$$\lim_{r\to 0}\int_{C_r} f(z)\,dz = i\alpha\,\mathrm{Res}(a)$$
Proof: The Laurent series for f(z) can be expressed as
$$f(z) = \frac{\mathrm{Res}(a)}{z-a} + \varphi(z)$$
where $\varphi(z)$ is analytic at $z = a$. We then have
$$\int_{C_r} f(z)\,dz = \int_{C_r} \frac{\mathrm{Res}(a)}{z-a}\,dz + \int_{C_r} \varphi(z)\,dz$$
The second integral vanishes as $r \to 0$ by the previous theorem (since $\varphi(z)$ is bounded at $z = a$). Furthermore, on $C_r$, we can write
$$z = a + re^{i\theta}\qquad\text{where } \theta_0 \le \theta \le \theta_0 + \alpha$$
It then follows that
$$\int_{C_r} \frac{\mathrm{Res}(a)}{z-a}\,dz = \mathrm{Res}(a)\int_{\theta_0}^{\theta_0+\alpha} \frac{ire^{i\theta}}{re^{i\theta}}\,d\theta = i\alpha\,\mathrm{Res}(a)$$
Indented Contours I
Example: Consider the contour integrals
$$I = \int_{-\infty}^\infty \frac{\sin x}{x}\,dx = \int_{-\infty}^\infty \mathrm{Im}\Big(\frac{e^{ix}}{x}\Big)\,dx,\qquad
I' = \oint_C \frac{e^{iz}}{z}\,dz$$
where C is the standard upper-half-plane contour indented by a small semi-circle $C_r$ around the singularity at $z = 0$. Since $f(z) = e^{iz}/z$ is analytic inside C, Cauchy's theorem gives
$$0 = \oint_C \frac{e^{iz}}{z}\,dz = \int_{-R}^{-r} \frac{e^{ix}}{x}\,dx + \int_{C_r} \frac{e^{iz}}{z}\,dz
+ \int_r^R \frac{e^{ix}}{x}\,dx + \int_{C_R} \frac{e^{iz}}{z}\,dz$$
By Jordan's Lemma
$$\lim_{R\to\infty}\int_{C_R} \frac{e^{iz}}{z}\,dz = 0$$
Similarly, from the previous theorem
$$\lim_{r\to 0}\int_{C_r} \frac{e^{iz}}{z}\,dz = -i\pi\,\mathrm{Res}(0) = -i\pi$$
[Figure: the indented contour consisting of $[-R,-r]$, the small semi-circle $C_r$, $[r,R]$ and the large semi-circle $C_R$.]
where the negative sign comes from the fact that $C_r$ is traversed clockwise. It follows that
$$\lim_{r\to 0,\ R\to\infty}\Big[\int_{-R}^{-r} \frac{e^{ix}}{x}\,dx + \int_r^R \frac{e^{ix}}{x}\,dx\Big]
\equiv \mathrm{PV}\int_{-\infty}^\infty \frac{e^{ix}}{x}\,dx = i\pi$$
where the limit defines the Cauchy principal value. Equating real and imaginary parts gives
$$\mathrm{PV}\int_{-\infty}^\infty \frac{\cos x}{x}\,dx = 0,\qquad
\mathrm{PV}\int_{-\infty}^\infty \frac{\sin x}{x}\,dx \equiv \int_{-\infty}^\infty \frac{\sin x}{x}\,dx = \pi$$
The integral I is a conditionally (not absolutely) convergent improper integral.
Indented Contours II
Example: Evaluate the contour integral
$$I = \oint_C \frac{e^{iz}\,dz}{\pi^2 - 4z^2}$$
where C is the standard upper-half-plane contour indented by small semi-circles around the singularities at $z = \pm\pi/2$.
[Figure: the upper-half-plane contour indented at $x = -\pi/2$ and $x = \pi/2$.]
Solution: Following essentially the same steps as in the previous example
$$0 = \mathrm{PV}\int_{-\infty}^\infty \frac{e^{ix}\,dx}{\pi^2 - 4x^2}
- i\pi\Big[\mathrm{Res}\Big(\frac{\pi}{2}\Big) + \mathrm{Res}\Big(-\frac{\pi}{2}\Big)\Big]$$
Equating real parts gives
$$\mathrm{PV}\int_{-\infty}^\infty \frac{\cos x}{\pi^2 - 4x^2}\,dx = \frac{1}{2}$$
The Cauchy principal value must be retained since the improper integral fails to exist due to divergences at $x = \pm\pi/2$.
Branch Points I
Example: Evaluate the integral
$$\int_0^\infty x^{m-1}\cos x\,dx,\qquad 0 < m < 1$$
Solution: To evaluate this integral we consider the integral of $z^{m-1} e^{iz}$ around a contour C enclosing the first quadrant and indented at the branch point $z = 0$. The branch cut is along the negative real axis. We use the principal branch of $z^{m-1}$ so that $z^{m-1} = x^{m-1}$ when $z = x$ is real and positive. By Cauchy's theorem
$$0 = \oint_C z^{m-1} e^{iz}\,dz = \int_r^R x^{m-1} e^{ix}\,dx + \int_{C_R} z^{m-1} e^{iz}\,dz
+ \int_R^r (iy)^{m-1} e^{-y}\,i\,dy + \int_{C_r} z^{m-1} e^{iz}\,dz$$
The integral along $C_R$ vanishes as $R \to \infty$ by Jordan's lemma. The integral along $C_r$ vanishes as $r \to 0$ by Limiting Contours III. It follows that
$$\int_0^\infty x^{m-1} e^{ix}\,dx = i^m\,\Gamma(m),\qquad
\Gamma(m) = \int_0^\infty y^{m-1} e^{-y}\,dy$$
where $\Gamma(m)$ is the gamma function. On the principal branch
$$i^m = (e^{i\pi/2})^m = \cos\Big(\frac{m\pi}{2}\Big) + i\sin\Big(\frac{m\pi}{2}\Big)$$
[Figure: quarter-circle contour in the first quadrant, indented at the origin, with the branch cut along the negative real axis.]
Hence equating real and imaginary parts gives
$$\int_0^\infty x^{m-1}\cos x\,dx = \cos\Big(\frac{m\pi}{2}\Big)\,\Gamma(m),\qquad
\int_0^\infty x^{m-1}\sin x\,dx = \sin\Big(\frac{m\pi}{2}\Big)\,\Gamma(m)$$
Branch Points II
Example: Consider
$$\int_0^\infty \frac{x^{m-1}}{1+x}\,dx,\qquad 0 < m < 1$$
[Figure: keyhole contour C made up of the segments A, B just above and D, C just below the positive real axis, the large circle $C_R$ and the small circle $C_r$, with the branch cut along the positive real axis.]
Solution: Integrate $f(z) = \dfrac{z^{m-1}}{1+z}$ around the keyhole contour as shown. By residues
$$\oint_C \frac{z^{m-1}}{1+z}\,dz = 2\pi i\,\mathrm{Res}(-1) = 2\pi i\,(e^{i\pi})^{m-1} = -2\pi i\,e^{i\pi m}$$
We use the branch with branch cut along the positive real axis so that
$$z^{m-1} \to \begin{cases} x^{m-1}, & \text{above the real axis} \\ (xe^{2\pi i})^{m-1}, & \text{below the real axis} \end{cases}$$
It follows that
$$\oint_C z^{m-1}(1+z)^{-1}\,dz = \int_r^R x^{m-1}(1+x)^{-1}\,dx + \int_{C_R} z^{m-1}(1+z)^{-1}\,dz$$
$$+ \int_R^r x^{m-1} e^{2\pi i m}(1+x)^{-1}\,dx + \int_{C_r} z^{m-1}(1+z)^{-1}\,dz = -2\pi i\,e^{i\pi m}$$
Now on $C_R$ for $R > 1$ and $m < 1$
$$|z f(z)| \le \frac{R^m}{R - 1} \to 0 \quad\text{as } R \to \infty$$
The integral around $C_R$ therefore vanishes in the limit $R \to \infty$ by Limiting Contours I. Similarly on $C_r$ for $r < 1$ and $m > 0$
$$|z f(z)| \le \frac{r^m}{1 - r} \to 0 \quad\text{as } r \to 0$$
so as $r \to 0$ the integral around $C_r$ vanishes by Limiting Contours III. It then follows that
$$(1 - e^{2\pi i m})\int_0^\infty \frac{x^{m-1}}{1+x}\,dx = -2\pi i\,e^{i\pi m}$$
from which one obtains
$$\int_0^\infty \frac{x^{m-1}}{1+x}\,dx = \frac{\pi}{\sin m\pi},\qquad 0 < m < 1$$
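As a sanity check, the closed form $\pi/\sin m\pi$ can be compared against direct quadrature for a few values of m. A sketch, assuming Python with NumPy and SciPy (the integral is split at $x = 1$ to tame the endpoint singularity):

```python
# Numerical check of int_0^inf x^(m-1)/(1+x) dx = pi/sin(m*pi), 0 < m < 1.
import numpy as np
from scipy.integrate import quad

for m in (0.25, 0.5, 0.75):
    f = lambda x: x**(m - 1) / (1 + x)
    value = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
    print(m, value, np.pi / np.sin(m * np.pi))
```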
Branch Points III
Theorem 24 (Branch Points III) Let f(z) be meromorphic, so it does not involve logarithms, and let C be the previous (keyhole) contour. Then
$$\int_0^\infty f(x)\,dx = -\sum_k \mathrm{Res}\big(f(z)\log z;\, z_k\big)$$
Proof: Evaluate
$$\oint_C f(z)\log z\,dz$$
Note that on the chosen branch
$$\log z \to \begin{cases} \log x, & \text{above the real axis} \\ \log x + 2\pi i, & \text{below the real axis} \end{cases}$$
Assuming the integrals over $C_R$ and $C_r$ vanish in the limits $R \to \infty$ and $r \to 0$ we have
$$\lim_{\substack{R\to\infty \\ r\to 0}}\oint_C f(z)\log z\,dz
= \lim_{\substack{R\to\infty \\ r\to 0}}\Big[\int_r^R f(x)\log x\,dx - \int_r^R f(x)(\log x + 2\pi i)\,dx\Big]$$
$$= -2\pi i\int_0^\infty f(x)\,dx = 2\pi i\sum_k \mathrm{Res}\big(f(z)\log z;\, z_k\big)$$
Here the logarithms cancel and the sum is over all residues in the plane.
An example is $f(z) = z(1+z)^{-3}$, for which the conditions of Limiting Contours I and III are met for $f(z)\log z$ and hence
$$\int_0^\infty x(1+x)^{-3}\,dx = -\mathrm{Res}\big(f(z)\log z;\, -1\big)
= -\frac{1}{2!}\frac{d^2}{dz^2}(z\log z)\Big|_{z=-1} = \frac{1}{2}$$
since $f(z)\log z$ has a pole of order 3 at $z = -1$.
Logarithmic Integrals
Exercise: Use contour integrals and residue calculus to show that
$$\int_0^\infty \frac{(\log x)^2}{x^2+1}\,dx = \frac{\pi^3}{8},\qquad
\int_0^\infty \frac{\log x}{x^2+1}\,dx = 0$$
Take $f(z) = \dfrac{(\log z)^2}{z^2+1}$ and C to be the standard upper-half-plane contour indented by a small semi-circle $C_r$ around the singularity at $z = 0$. Explain carefully where you place the branch cut of the logarithm.
[Figure: the upper-half-plane contour indented at the origin, with semi-circles $C_r$ and $C_R$.]
Warning: The previous theorem cannot be used since $f(z) = \dfrac{(\log z)^2}{z^2+1}$ is not meromorphic.
Logarithmic Integrals Solution
Solution: Consider the contour integral of f(z) around the given contour C
$$\oint_C \frac{(\log z)^2}{z^2+1}\,dz$$
The integrand is analytic inside C except for a simple pole at $z = i$ with residue $\mathrm{Res}(i) = \dfrac{(\log i)^2}{2i} = \dfrac{\pi^2 i}{8}$.
Choose the branch of the logarithm so that the branch cut is along the positive x-axis. Take the integral from r to R to be just above this cut in the first quadrant. Hence
$$\oint_C \frac{(\log z)^2}{z^2+1}\,dz
= \int_r^R \frac{(\log x)^2}{x^2+1}\,dx + \int_r^R \frac{(\log x + i\pi)^2}{x^2+1}\,dx
+ \Big(\int_{C_R} + \int_{C_r}\Big)\frac{(\log z)^2}{z^2+1}\,dz
= 2\pi i\,\Big(\frac{\pi^2 i}{8}\Big) = -\frac{\pi^3}{4}$$
Now let $r \to 0$, $R \to \infty$. Since the integrals around $C_r$ and $C_R$ vanish, we have
$$\int_0^\infty \frac{(\log x)^2}{x^2+1}\,dx + \int_0^\infty \frac{(\log x + i\pi)^2}{x^2+1}\,dx = -\frac{\pi^3}{4}$$
that is
$$2\int_0^\infty \frac{(\log x)^2}{x^2+1}\,dx + 2\pi i\int_0^\infty \frac{\log x}{x^2+1}\,dx
- \pi^2\int_0^\infty \frac{dx}{x^2+1} = -\frac{\pi^3}{4}$$
The results follow by taking real and imaginary parts after using
$$\int_0^\infty \frac{dx}{x^2+1} = \big[\arctan x\big]_0^\infty = \frac{\pi}{2}$$
In general, it is better to integrate $f(z)\log z$ around the contour C of the previous theorem.
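Both logarithmic integrals are easy to confirm numerically. A sketch, assuming Python with NumPy and SciPy:

```python
# Numerical check of the two logarithmic integrals derived above.
import numpy as np
from scipy.integrate import quad

I1, _ = quad(lambda x: np.log(x)**2 / (x**2 + 1), 0, np.inf)
I2, _ = quad(lambda x: np.log(x) / (x**2 + 1), 0, np.inf)
print(I1, np.pi**3 / 8)   # both approximately 3.8758
print(I2)                 # approximately 0
```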
Contour Integral Summary
Integral and method (complete the method column yourself):
$$\int_0^{2\pi} F(\cos\theta, \sin\theta)\,d\theta$$
$$\int_{-\infty}^\infty f(x)\,dx,\qquad f(x) \text{ rational}$$
$$\int_{-\infty}^\infty e^{ikx} f(x)\,dx$$
$$\int_{-\infty}^\infty \frac{\sin x}{x}\,dx$$
$$\int_{-\infty}^\infty \frac{\cos x}{\pi^2 - 4x^2}\,dx$$
$$\int_0^\infty x^{m-1}\cos x\,dx$$
$$\int_0^\infty \frac{x^{m-1}}{1+x}\,dx$$
$$\int_0^\infty f(x)\,dx,\qquad f(x) \ne f(-x)$$
Week 4: Fourier Transforms I
10. Wave equation, Fourier transforms and series, sine and cosine transforms
11. Fourier's integral theorem, inverse Fourier transforms
12. Gaussians, Dirac delta
Joseph Fourier (1768–1830), Paul Dirac (1902–1984)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
Wave Equation
The one-dimensional wave equation is the second order partial differential equation (PDE)
$$\frac{\partial^2 u}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2}$$
The general solution is $u(x,t) = F(x-ct) + G(x+ct)$ where F, G are arbitrary functions.
Alternatively, using separation of variables and setting
$$u(x,t) = X(x)\,T(t)$$
yields the second order Ordinary Differential Equations (ODEs)
$$\frac{1}{X}\frac{d^2X}{dx^2} = \frac{1}{c^2 T}\frac{d^2T}{dt^2} = \text{const} = -k^2,\qquad k \in \mathbb{C}$$
with solutions
$$X(x) = e^{\pm ikx},\qquad T(t) = e^{\pm ikct}$$
A particular solution is the travelling wave
$$u_k(x,t) = A(k)\exp[ik(x+ct)]$$
where A is an arbitrary function of $k \in \mathbb{R}$. Since the wave equation is linear, the principle of superposition implies that the Fourier integral or Fourier transform
$$u(x,t) = \int_{-\infty}^\infty A(k)\,e^{ik(x+ct)}\,dk$$
is also a solution. This is checked by direct substitution and differentiating under the integral.
Vibrating String
A vibrating string with fixed ends at $x = 0, L$ satisfies the wave equation with the boundary and initial conditions
$$u(0,t) = u(L,t) = 0;\qquad u(x,0) = r(x),\qquad \frac{\partial u}{\partial t}\Big|_{t=0} = v(x)$$
Putting this in the general solution for X
$$X(x) = A\sin(kx) + B\cos(kx)$$
gives $B = 0$, $k = n\pi/L$ with $n = 1, 2, 3, \ldots$ and
$$T(t) = A_n\cos\Big(\frac{n\pi ct}{L}\Big) + B_n\sin\Big(\frac{n\pi ct}{L}\Big)$$
From the principle of superposition the general solution is
$$u(x,t) = \sum_{n=1}^\infty \Big[A_n\cos\Big(\frac{n\pi ct}{L}\Big) + B_n\sin\Big(\frac{n\pi ct}{L}\Big)\Big]\sin\Big(\frac{n\pi x}{L}\Big)$$
The initial displacement and velocity give
$$r(x) = \sum_{n=1}^\infty A_n\sin\Big(\frac{n\pi x}{L}\Big),\qquad
v(x) = \sum_{n=1}^\infty \Big(\frac{n\pi c}{L}\Big)B_n\sin\Big(\frac{n\pi x}{L}\Big)$$
The problem is to invert the series to find the coefficients $A_n$ and $B_n$.
These series are special cases of the complex Fourier series
$$f(x) = \sum_{n=-\infty}^\infty a_n\,e^{in\pi x/L}$$
which in turn is a discrete form of the general Fourier integral.
Inverting Fourier Series
To invert Fourier series multiply both sides by $\exp(-im\pi x/L)$ and integrate with respect to x from $-L$ to L. Assuming the integration and summation can be interchanged and integrating the series term-by-term gives
$$\int_{-L}^L f(x)\,e^{-im\pi x/L}\,dx = \sum_{n=-\infty}^\infty a_n\int_{-L}^L e^{i(n-m)\pi x/L}\,dx\qquad(\ast)$$
Now for $m \ne n$
$$\int_{-L}^L e^{i(n-m)\pi x/L}\,dx = \Big[\frac{L\,e^{i(n-m)\pi x/L}}{i\pi(n-m)}\Big]_{x=-L}^{x=L} = 0$$
whereas when $m = n$ the integral is trivially equal to 2L. Hence
$$\frac{1}{2L}\int_{-L}^L e^{i(n-m)\pi x/L}\,dx = \delta_{m,n}$$
where the Kronecker delta is
$$\delta_{m,n} = \begin{cases} 1, & m = n \\ 0, & m \ne n \end{cases}$$
It follows that all the terms in the sum on the right hand side of $(\ast)$ vanish except the term $n = m$. We thus obtain the inversion formula
$$a_n = \frac{1}{2L}\int_{-L}^L f(x)\,e^{-in\pi x/L}\,dx$$
We will see that a similar result holds for the Fourier transform.
Fourier Transform
To obtain Fourier transforms we start with discrete Fourier series and take the continuum limit $L \to \infty$. Define the function F(k) at the discrete points $k = n\pi/L$ by
$$F(k) = F\Big(\frac{n\pi}{L}\Big) = \sqrt{\frac{2}{\pi}}\,L\,a_n,\qquad
a_n = \frac{1}{2L}\int_{-L}^L f(x)\,e^{-in\pi x/L}\,dx$$
Holding k fixed and letting $n \to \infty$ and $L \to \infty$ gives
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_{-L}^L f(x)\,e^{-ikx}\,dx
\to \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-ikx}\,dx,\qquad -\infty < k < \infty$$
We call the right hand side the Fourier transform of f and use the notation
$$\mathcal{F}\{f\} \equiv F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-ikx}\,dx$$
The Fourier transform $\mathcal{F}$ acts on functions, not variables. This is called a functional and is the reason the argument is enclosed in braces.
There is no known necessary and sufficient condition for the existence of the Fourier transform. A sufficient but not necessary condition is that f(x) is $C^1$ and absolutely integrable, that is,
$$\int_{-\infty}^\infty |f(x)|\,dx < \infty$$
Inverse Fourier Transform
To invert the Fourier transform and obtain f in terms of F we start with the discrete Fourier series
$$f(x) = \sum_{n=-\infty}^\infty a_n\,e^{in\pi x/L}$$
Substituting for $a_n$ we have
$$f(x) = \sum_{n=-\infty}^\infty \frac{1}{L}\sqrt{\frac{\pi}{2}}\,F\Big(\frac{n\pi}{L}\Big)e^{in\pi x/L}
= \frac{1}{\sqrt{2\pi}}\,\frac{\pi}{L}\sum_{n=-\infty}^\infty F\Big(\frac{n\pi}{L}\Big)e^{i(n\pi/L)x}$$
The right hand side is now in the form of a Riemann sum
$$S_L = \frac{\pi}{L}\sum_{n=-\infty}^\infty g\Big(\frac{n\pi}{L}\Big) \to \int_{-\infty}^\infty g(k)\,dk
\quad\text{as } L \to \infty$$
which approximates the area under $y = g(k)$ by the sum of the areas of the rectangles under the curve. Using this we deduce the required inverse Fourier transform in the limit $L \to \infty$
$$\mathcal{F}^{-1}\{F\} = f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty F(k)\,e^{ikx}\,dk$$
Notice the beautiful symmetry between the Fourier and inverse Fourier transforms. This is the reason we included the factor $1/\sqrt{2\pi}$.
The above results are valid only for $C^1$ functions f (i.e. continuous functions with continuous first derivative).
Fourier's Integral Theorem
Theorem 25 (Fourier's Integral Theorem) If f is absolutely integrable on $(-\infty, \infty)$
$$\int_{-\infty}^\infty |f(x)|\,dx < \infty$$
and piecewise $C^1$ on every finite subinterval, then the Fourier transform
$$\mathcal{F}\{f(x)\} = F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-ikx}\,dx$$
exists and
$$\frac{1}{2\pi}\int_{-\infty}^\infty e^{ikx}\int_{-\infty}^\infty f(y)\,e^{-iky}\,dy\,dk
= \frac{1}{2}\big[f(x+) + f(x-)\big]$$
Proof: See text.
For continuous functions the right hand side is simply f(x). In this case Fourier's integral theorem takes the form
$$\mathcal{F}^{-1}\{\mathcal{F}\{f\}\} = f(x)$$
Notice that it is a matter of taste how one actually defines the Fourier transform. One could, for example, define the pair
$$F(k) = \int_{-\infty}^\infty f(y)\,e^{-iky}\,dy,\qquad
f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty F(k)\,e^{ikx}\,dk$$
Example Fourier Transforms
Example: Consider
$$f(x) = \begin{cases} V_0, & |x| < \varepsilon \\ 0, & |x| > \varepsilon \end{cases}$$
In this case
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\varepsilon}^{\varepsilon} V_0\,e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\Big[-\frac{V_0}{ik}e^{-ikx}\Big]_{-\varepsilon}^{\varepsilon}
= \sqrt{\frac{2}{\pi}}\,V_0\,\frac{\sin(k\varepsilon)}{k}$$
Example: Consider
$$f(x) = \frac{a}{a^2+x^2},\qquad a > 0$$
In this case
$$F(k) = \frac{a}{\sqrt{2\pi}}\int_{-\infty}^\infty \frac{e^{-ikx}}{a^2+x^2}\,dx$$
This integral can be evaluated by contour integration as in Week 2, closing the contour in the lower half-plane for $k \ge 0$ and in the upper half-plane for $k < 0$. The result is
$$F(k) = \sqrt{\frac{\pi}{2}}\,e^{-|k|a}$$
Gaussians
Example: Consider the normalized Gaussian
$$f(x) = \sqrt{\frac{\alpha}{\pi}}\,e^{-\alpha x^2}$$
with multiplicative constants chosen so that $\int_{-\infty}^\infty f(x)\,dx = 1$. By completing the square
$$-\alpha x^2 - ikx = -\alpha\Big(x + \frac{ik}{2\alpha}\Big)^2 - \frac{k^2}{4\alpha}$$
and changing variables to $z = x + ik/2\alpha$ we have
$$F(k) = \frac{1}{\sqrt{2\pi}}\sqrt{\frac{\alpha}{\pi}}\int_{-\infty}^\infty e^{-\alpha x^2} e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\sqrt{\frac{\alpha}{\pi}}\,e^{-k^2/4\alpha}\int_{-\infty}^\infty e^{-\alpha(x+ik/2\alpha)^2}\,dx$$
$$= \frac{1}{\sqrt{2\pi}}\sqrt{\frac{\alpha}{\pi}}\,e^{-k^2/4\alpha}\int_{-\infty+ik/2\alpha}^{\infty+ik/2\alpha} e^{-\alpha z^2}\,dz$$
This is interpreted as the integral along the complex line $\mathrm{Im}\,z = k/2\alpha$. Deforming the contour (through a region of analyticity of $e^{-\alpha z^2}$) back to the real line and using
$$\int_{-\infty}^\infty e^{-\alpha x^2}\,dx = \sqrt{\frac{\pi}{\alpha}}$$
gives the result
$$F(k) = \frac{1}{\sqrt{2\pi}}\,e^{-k^2/4\alpha}$$
The Fourier transform of a Gaussian is a Gaussian. For large $\alpha$, f(x) is sharply peaked around the origin, whereas F(k) is relatively flat, and vice versa.
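The Gaussian transform pair can also be obtained symbolically. A sketch, assuming Python with SymPy and using the convention $F(k) = \frac{1}{\sqrt{2\pi}}\int f(x)e^{-ikx}dx$ adopted above:

```python
# Symbolic check that the Fourier transform of sqrt(a/pi) e^{-a x^2}
# is e^{-k^2/(4a)} / sqrt(2 pi).
import sympy as sp

x, k = sp.symbols('x k', real=True)
a = sp.symbols('a', positive=True)

f = sp.sqrt(a / sp.pi) * sp.exp(-a * x**2)
F = sp.integrate(f * sp.exp(-sp.I * k * x), (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)
print(sp.simplify(F))   # expected: exp(-k**2/(4*a))/sqrt(2*pi)
```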
Dirac Delta Function
Return to Fourier's integral theorem and assume that f is $C^1$ and therefore continuous. Now formally interchange the orders of integration with respect to y and k. This gives
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{ikx}\int_{-\infty}^\infty f(y)\,e^{-iky}\,dy\,dk
= \int_{-\infty}^\infty f(y)\Big[\frac{1}{2\pi}\int_{-\infty}^\infty e^{ik(x-y)}\,dk\Big]\,dy
= \int_{-\infty}^\infty f(y)\,\delta(y-x)\,dy$$
where
$$\delta(x) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{ikx}\,dk$$
Although this improper integral does not exist, the so-called Dirac delta function so defined can be formally manipulated as a continuum version of the Kronecker delta.
Proceeding formally (as the physicist Dirac did), one can think of $\delta(x)$ as a generalized function which is zero everywhere except at the origin, where it is infinite in such a way that
$$\int_{-\infty}^\infty f(x)\,\delta(x)\,dx = f(0)$$
In a strict mathematical sense one should regard $\delta(x)$ as a generalized function which is defined as a limit of a sequence of nice functions such as Gaussians.
Exercise: Show that
$$\lim_{\alpha\to\infty}\int_{-\infty}^\infty g(x)\sqrt{\frac{\alpha}{\pi}}\,e^{-\alpha x^2}\,dx = g(0)$$
or, in the sense of generalized functions,
$$\delta(x) = \lim_{\alpha\to\infty}\sqrt{\frac{\alpha}{\pi}}\,e^{-\alpha x^2}$$
Dirac Delta Function Examples
Example: Find the Fourier transforms, in terms of generalized functions, of (a) $f(x) = 1$, (b) $f(x) = e^{-iax}$, (c) $f(x) = \cos ax$, (d) $f(x) = \sin ax$.
Solution: The Fourier transforms are as follows:
(a) $\displaystyle \mathcal{F}\{1\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-ikx}\,dx = \sqrt{2\pi}\,\delta(k)$.
(b) $\displaystyle \mathcal{F}\{e^{-iax}\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-iax}e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-i(k+a)x}\,dx = \sqrt{2\pi}\,\delta(k+a)$.
(c) $\displaystyle \mathcal{F}\{\cos ax\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \tfrac{1}{2}\big(e^{iax} + e^{-iax}\big)e^{-ikx}\,dx
= \sqrt{\frac{\pi}{2}}\big[\delta(k-a) + \delta(k+a)\big]$.
(d) $\displaystyle \mathcal{F}\{\sin ax\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \tfrac{1}{2i}\big(e^{iax} - e^{-iax}\big)e^{-ikx}\,dx
= i\sqrt{\frac{\pi}{2}}\big[\delta(k+a) - \delta(k-a)\big]$.
Week 5: Fourier Transforms II
13. Properties of Fourier transforms
14. Parseval's theorem, Fourier convolution theorem
15. Applications of Fourier transforms
Joseph Fourier (1768–1830), Erik Ivar Fredholm (1866–1927)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
Fourier Transform Properties I
Linearity. If a and b are constants
$$\mathcal{F}\{a f(x) + b g(x)\} = a\,\mathcal{F}\{f(x)\} + b\,\mathcal{F}\{g(x)\}$$
Conjugate. Assume f is real ($f: \mathbb{R} \to \mathbb{R}$); the complex conjugate of the Fourier transform is
$$\overline{F(k)} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,\overline{e^{-ikx}}\,dx
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{ikx}\,dx = F(-k)$$
Even Functions. If f is even [$f(-x) = f(x)$] it follows that
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_0^\infty f(x)\,e^{-ikx}\,dx + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^0 f(x)\,e^{-ikx}\,dx$$
$$= \frac{1}{\sqrt{2\pi}}\int_0^\infty f(x)\,e^{-ikx}\,dx + \frac{1}{\sqrt{2\pi}}\int_0^\infty f(x)\,e^{ikx}\,dx
= \sqrt{\frac{2}{\pi}}\int_0^\infty f(x)\cos(kx)\,dx = \mathcal{F}_C\{f\}$$
Odd Functions. If f is odd [$f(-x) = -f(x)$] we have
$$i F(k) = \sqrt{\frac{2}{\pi}}\int_0^\infty f(x)\sin(kx)\,dx = \mathcal{F}_S\{f\}$$
The right hand sides above are called the Fourier cosine transform and Fourier sine transform, denoted respectively by $\mathcal{F}_C\{f\}$ and $\mathcal{F}_S\{f\}$. From the basic inversion formula, it is not difficult to show that for a function defined only on $[0, \infty)$
$$f(x) = \sqrt{\frac{2}{\pi}}\int_0^\infty \mathcal{F}_C\{f\}\cos(kx)\,dk
= \sqrt{\frac{2}{\pi}}\int_0^\infty \mathcal{F}_S\{f\}\sin(kx)\,dk$$
Fourier Transform Properties II
Attenuation. Let $F(k) = \mathcal{F}\{f(x)\}$ be the Fourier transform of f. Then we note that
$$\mathcal{F}\{f(x)e^{-ax}\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-ax}e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-i(k-ia)x}\,dx = F(k - ia)$$
Shift. Similarly, on changing variables to $y = x - a$,
$$\mathcal{F}\{f(x-a)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x-a)\,e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(y)\,e^{-ik(y+a)}\,dy
= \frac{e^{-ika}}{\sqrt{2\pi}}\int_{-\infty}^\infty f(y)\,e^{-iky}\,dy = e^{-ika} F(k)$$
Derivatives. Consider the Fourier transform of $f'(x)$ and integrate by parts
$$\mathcal{F}\{f'(x)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f'(x)\,e^{-ikx}\,dx
= \Big[\frac{1}{\sqrt{2\pi}}f(x)\,e^{-ikx}\Big]_{-\infty}^\infty
+ \frac{ik}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-ikx}\,dx = ik\,\mathcal{F}\{f(x)\}$$
where we have used the fact that, since f(x) is assumed to have a Fourier transform, f(x) must vanish as $x \to \pm\infty$. Replacing f by $f'$ and so on now gives
$$\mathcal{F}\{f''(x)\} = ik\,\mathcal{F}\{f'\} = (ik)^2\,\mathcal{F}\{f\} = -k^2\,\mathcal{F}\{f\},\quad\text{etc.}$$
These properties are useful in applications to differential equations since the operation of differentiation with respect to x on f(x) is converted to the algebraic operation of multiplication (by ik) on the Fourier transform.
Parseval's Theorem
Theorem 26 (Parseval's Theorem) Suppose that f(x) is absolutely integrable and square integrable. Then the Fourier transform $F(k) = \mathcal{F}\{f\}$ is square integrable and
$$\int_{-\infty}^\infty |F(k)|^2\,dk = \int_{-\infty}^\infty |f(x)|^2\,dx$$
Proof: To prove this result we consider a more general situation involving two functions f and g and their Fourier transforms F(k) and G(k). By definition
$$\overline{G(k)} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \overline{g(x)}\,e^{ikx}\,dx$$
Interchanging orders of integration (which is justified in this case) gives
$$\int_{-\infty}^\infty F(k)\,\overline{G(k)}\,dk
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty F(k)\Big[\int_{-\infty}^\infty \overline{g(x)}\,e^{ikx}\,dx\Big]\,dk$$
$$= \int_{-\infty}^\infty \overline{g(x)}\Big[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty F(k)\,e^{ikx}\,dk\Big]\,dx
= \int_{-\infty}^\infty \overline{g(x)}\,f(x)\,dx$$
Parseval's theorem is then obtained by choosing $g(x) = f(x)$ and noting that in this case $G(k) = F(k)$.
The results above were first proved by Plancherel, so this theorem is often called Plancherel's theorem.
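Parseval's theorem can be illustrated numerically for the transform pair $f(x) = a/(a^2+x^2)$, $F(k) = \sqrt{\pi/2}\,e^{-|k|a}$ from the previous lecture. A sketch, assuming Python with NumPy and SciPy:

```python
# Numerical illustration of Parseval's theorem; both sides equal pi/(2a).
import numpy as np
from scipy.integrate import quad

a = 1.3
lhs, _ = quad(lambda k: (np.pi / 2) * np.exp(-2 * abs(k) * a), -np.inf, np.inf)
rhs, _ = quad(lambda x: (a / (a**2 + x**2))**2, -np.inf, np.inf)
print(lhs, rhs, np.pi / (2 * a))   # all approximately 1.2083
```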
Convolution Theorem
Theorem 27 (Convolution Theorem) Let F(k) and G(k) denote the Fourier transforms of f and g. If h is the Fourier convolution of f and g, defined by
$$h(x) = (f \ast g)(x) := \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(y)\,g(x-y)\,dy$$
then
$$\mathcal{F}\{h(x)\} = \mathcal{F}\{f \ast g\} = F(k)\,G(k) = \mathcal{F}\{f\}\,\mathcal{F}\{g\}$$
Proof: We denote the inverse Fourier transform by $\mathcal{F}^{-1}$ and interchange the order of integration
$$\mathcal{F}^{-1}\{F(k)G(k)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty F(k)\,G(k)\,e^{ikx}\,dk$$
$$= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty G(k)\,e^{ikx}\Big[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(y)\,e^{-iky}\,dy\Big]\,dk$$
$$= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(y)\Big[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty G(k)\,e^{ik(x-y)}\,dk\Big]\,dy
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(y)\,g(x-y)\,dy$$
which gives the required result.
Fourier Transform Summary
$$\mathcal{F}\{f\} \equiv F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-ikx}\,dx$$
$$\mathcal{F}_C\{f\} = \sqrt{\frac{2}{\pi}}\int_0^\infty f(x)\cos(kx)\,dx,\qquad
\mathcal{F}_S\{f\} = \sqrt{\frac{2}{\pi}}\int_0^\infty f(x)\sin(kx)\,dx$$
$$(f \ast g)(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(y)\,g(x-y)\,dy$$
Delta Function: $\displaystyle \delta(x) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{ikx}\,dk$
Conjugation (f real): $\overline{F(k)} = F(-k)$
Even function: $F(k) = \mathcal{F}_C\{f\}$
Odd function: $F(k) = -i\,\mathcal{F}_S\{f\}$
Shift: $\mathcal{F}\{f(x-a)\} = e^{-ika} F(k)$
Attenuation: $\mathcal{F}\{f(x)e^{-ax}\} = F(k - ia)$
Differentiation: $\mathcal{F}\{f^{(n)}(x)\} = (ik)^n F(k)$
Fourier Convolution: $\mathcal{F}\{(f \ast g)(x)\} = \mathcal{F}\{f\}\,\mathcal{F}\{g\}$
Laplace Equation
Example: Consider the Laplace equation on the infinite strip
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0,\qquad -\infty < x < \infty,\quad 0 \le y \le 1$$
subject to the boundary conditions
$$u(x,0) = 0,\qquad u(x,1) = u_0(x)$$
Define the Fourier transform of u(x,y) with respect to x by
$$F(k,y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty u(x,y)\,e^{-ikx}\,dx$$
Note that
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \frac{\partial^2 u}{\partial x^2}\,e^{-ikx}\,dx = -k^2 F(k,y)$$
Multiplying the Laplace equation by $e^{-ikx}$ and integrating with respect to x gives
$$0 = -k^2 F(k,y) + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \frac{\partial^2 u}{\partial y^2}\,e^{-ikx}\,dx
= -k^2 F(k,y) + \frac{\partial^2}{\partial y^2}\Big[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty u(x,y)\,e^{-ikx}\,dx\Big]
= -k^2 F(k,y) + \frac{\partial^2 F}{\partial y^2}$$
With k fixed this differential equation has the general solution
$$F(k,y) = A(k)\sinh(ky) + B(k)\cosh(ky)$$
From the boundary conditions we have
$$F(k,0) = B(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty u(x,0)\,e^{-ikx}\,dx = 0$$
Similarly,
$$F(k,1) = A(k)\sinh k = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty u_0(x)\,e^{-ikx}\,dx = U(k)$$
where U(k), the Fourier transform of $u_0(x)$, is considered known. Solving this for A(k) gives
$$F(k,y) = U(k)\,\frac{\sinh(ky)}{\sinh k}$$
The required solution u(x,y) is then the inverse Fourier transform
$$u(x,y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty U(k)\,\frac{\sinh(ky)}{\sinh k}\,e^{ikx}\,dk$$
Diffusion Equation
Example: Consider the diffusion equation on an infinite line
$$\frac{\partial r}{\partial t} = D\,\frac{\partial^2 r}{\partial x^2},\qquad -\infty < x < \infty,\quad t \ge 0$$
for the concentration or linear density r(x,t) of a diffusing substance as a function of position x and time t. Assume that the total concentration (mass)
$$\int_{-\infty}^\infty r(x,t)\,dx = M$$
remains fixed, independent of t, and at $t = 0$ the substance is concentrated at $x = 0$
$$r(x,0) = M\,\delta(x)$$
where $\delta$ is the Dirac delta function.
Define the Fourier transform
$$R(k,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty r(x,t)\,e^{-ikx}\,dx$$
Taking the Fourier transform of the diffusion equation gives
$$\frac{\partial R}{\partial t} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \frac{\partial r}{\partial t}\,e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty D\,\frac{\partial^2 r}{\partial x^2}\,e^{-ikx}\,dx = -k^2 D\,R$$
Solving the first order ODE with k fixed gives
$$R(k,t) = R(k,0)\,e^{-k^2 Dt}$$
Note, by using the Fourier transform we have implicitly restricted to solutions for which $r(x,t) \to 0$ as $x \to \pm\infty$ (in order that the transform exists). If you want solutions without this restriction you cannot use Fourier transforms.
From the initial condition we have
$$R(k,0) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty M\,\delta(x)\,e^{-ikx}\,dx = \frac{M}{\sqrt{2\pi}}
\quad\Longrightarrow\quad R(k,t) = \frac{M}{\sqrt{2\pi}}\,e^{-k^2 Dt}$$
and using the known results for Gaussians
$$\int_{-\infty}^\infty e^{-\alpha x^2}\,dx = \sqrt{\frac{\pi}{\alpha}}$$
it follows (after completing the square and shifting the contour in the complex plane) that
$$r(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty R(k,t)\,e^{ikx}\,dk
= \frac{M}{2\pi}\int_{-\infty}^\infty e^{-k^2 Dt + ikx}\,dk
= \frac{M}{2\pi}\,e^{-x^2/4Dt}\int_{-\infty}^\infty e^{-Dt\big(k - \frac{ix}{2Dt}\big)^2}\,dk
= \frac{M}{\sqrt{4\pi Dt}}\,e^{-x^2/4Dt}$$
It is easily checked that the total concentration is M and, moreover, from the properties of the delta function
$$\lim_{t\to 0^+} r(x,t) = M\,\delta(x)$$
in the sense of generalized functions.
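Both claims in the last paragraph, that the kernel solves the PDE and that the total mass is M, can be verified symbolically. A sketch, assuming Python with SymPy:

```python
# Check that r(x,t) = M exp(-x^2/(4Dt)) / sqrt(4 pi D t) satisfies
# r_t = D r_xx and carries total mass M.
import sympy as sp

x = sp.symbols('x', real=True)
t, D, M = sp.symbols('t D M', positive=True)
r = M * sp.exp(-x**2 / (4 * D * t)) / sp.sqrt(4 * sp.pi * D * t)

print(sp.simplify(sp.diff(r, t) - D * sp.diff(r, x, 2)))  # 0
print(sp.integrate(r, (x, -sp.oo, sp.oo)))                # M
```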
Fredholm Integral Equation
Consider the Fredholm integral equation
$$f(x) = g(x) + \lambda\int_{-\infty}^\infty K(x-y)\,f(y)\,dy = g(x) + \lambda\sqrt{2\pi}\,(K \ast f)(x)$$
where g and the kernel K are given and $\lambda$ is some constant.
Taking the Fourier transform and noting that the integral on the right is $\sqrt{2\pi}$ times the convolution of f and K, we obtain
$$F(k) = G(k) + \lambda\sqrt{2\pi}\,\widehat{K}(k)\,F(k)$$
where F, G and $\widehat{K}$ denote the Fourier transforms of f, g and K respectively. Solving for F(k) and using the Fourier inversion formula gives
$$f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \frac{G(k)}{1 - \lambda\sqrt{2\pi}\,\widehat{K}(k)}\,e^{ikx}\,dk$$
Example: Consider
$$f(x) = e^{-|x|} + \lambda\int_{-\infty}^\infty e^{-|x-y|}\,f(y)\,dy$$
An elementary calculation shows that
$$\mathcal{F}\{e^{-|x|}\} = \frac{1}{\sqrt{2\pi}}\int_0^\infty e^{-x-ikx}\,dx + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^0 e^{x-ikx}\,dx$$
$$= \frac{1}{\sqrt{2\pi}}\Big[-\frac{1}{1+ik}e^{-(1+ik)x}\Big]_0^\infty + \frac{1}{\sqrt{2\pi}}\Big[\frac{1}{1-ik}e^{(1-ik)x}\Big]_{-\infty}^0
= \frac{1}{\sqrt{2\pi}}\Big[\frac{1}{1+ik} + \frac{1}{1-ik}\Big] = \sqrt{\frac{2}{\pi}}\,\frac{1}{1+k^2}$$
In this example
$$G(k) = \widehat{K}(k) = \mathcal{F}\{e^{-|x|}\} = \sqrt{\frac{2}{\pi}}\,\frac{1}{1+k^2}$$
so
$$F(k) = G(k) + \lambda\sqrt{2\pi}\,G(k)\,F(k)
\quad\Longrightarrow\quad (1+k^2)F(k) = \sqrt{\frac{2}{\pi}} + 2\lambda F(k)$$
Therefore
$$f(x) = \frac{1}{\pi}\int_{-\infty}^\infty \frac{e^{ikx}}{(1-2\lambda)+k^2}\,dk$$
But we have seen previously that
$$\frac{1}{\pi}\int_{-\infty}^\infty \frac{e^{ikx}}{a^2+x^2}\,dx = \frac{e^{-|k|a}}{a},\qquad a \ne 0$$
Consider the case $\lambda < 1/2$ and set $a = (1-2\lambda)^{1/2} > 0$. We then have
$$f(x) = (1-2\lambda)^{-1/2}\exp\big[-|x|\,(1-2\lambda)^{1/2}\big]$$
Exercise: The solution in the remaining two cases is left as an exercise. For $\lambda > 1/2$ there are two poles on the real axis and thus the contour has to be indented. For $\lambda = 1/2$ the two poles join to give one second order pole at $k = 0$.
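The $\lambda < 1/2$ solution can be substituted back into the integral equation and checked pointwise by quadrature. A sketch, assuming Python with NumPy and SciPy:

```python
# Verify f(x) = exp(-a|x|)/a, a = sqrt(1-2*lam), satisfies
# f(x) = exp(-|x|) + lam * int exp(-|x-y|) f(y) dy.
import numpy as np
from scipy.integrate import quad

lam = 0.3
a = np.sqrt(1 - 2 * lam)
f = lambda x: np.exp(-a * abs(x)) / a

for x0 in (0.0, 0.7, 2.5):
    integral, _ = quad(lambda y: np.exp(-abs(x0 - y)) * f(y), -np.inf, np.inf)
    print(f(x0), np.exp(-abs(x0)) + lam * integral)   # the two columns agree
```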
Fraunhofer Diffraction
Fourier transforms are used in physical optics to study the Fraunhofer (far-field) diffraction patterns of monochromatic light passing through apertures or slits described by f(x). The Fourier transform F(k) gives the (far-field) amplitude of the diffracted light. The (far-field) intensity of the light is proportional to $|F(k)|^2$. For a slit of width 2a, the angle $\theta$ subtended at the slit corresponding to the first intensity minimum is $\theta = \lambda/(2a)$, where $\lambda$ is the wavelength of the light.
Example:
(a) Find the Fourier transform of the single-slit function
$$f(x) = \begin{cases} 1, & |x| < a \\ 0, & |x| > a \end{cases}$$
(b) Graph f(x) and its Fourier transform F(k) for $a = 3$.
(c) Use the result of (a) to evaluate the improper real integral
$$I(x) = \int_{-\infty}^\infty \frac{\sin ka\,\cos kx}{k}\,dk$$
for all values of x.
(d) Deduce the value of the improper real integral
$$J = \int_0^\infty \frac{\sin k}{k}\,dk$$
Fraunhofer Diffraction Solution
Solution: (a) The Fourier transform (amplitude of the diffracted light wave) is
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\int_{-a}^a e^{-ikx}\,dx
= \frac{1}{\sqrt{2\pi}}\Big[\frac{e^{-ikx}}{-ik}\Big]_{x=-a}^{a}
= \sqrt{\frac{2}{\pi}}\,\frac{\sin ka}{k},\qquad k \ne 0$$
with $F(0) = \sqrt{\frac{2}{\pi}}\,a$. The intensity of light is proportional to $|F(k)|^2$.
(b) For $a = 3$, the graphs of f(x) and its Fourier transform F(k) are as shown. [Figure: f(x) in blue and F(k) in red.]
(c) From Fourier's integral theorem $\mathcal{F}^{-1}\{F(k)\} = \frac{1}{2}[f(x+) + f(x-)]$, that is,
$$\frac{1}{\pi}\int_{-\infty}^\infty \frac{\sin ka}{k}\,e^{ikx}\,dk
= \begin{cases} 1, & |x| < a \\ \tfrac{1}{2}, & |x| = a \\ 0, & |x| > a \end{cases}$$
Hence
$$I(x) = \int_{-\infty}^\infty \frac{\sin ka\,\cos kx}{k}\,dk
= \mathrm{Re}\int_{-\infty}^\infty \frac{\sin ka}{k}\,e^{ikx}\,dk
= \begin{cases} \pi, & |x| < a \\ \tfrac{\pi}{2}, & |x| = a \\ 0, & |x| > a \end{cases}$$
(d) Setting $x = 0$ and $a = 1$ in the result for I(x) gives
$$J = \int_0^\infty \frac{\sin k}{k}\,dk = \frac{1}{2}\int_{-\infty}^\infty \frac{\sin k}{k}\,dk = \frac{\pi}{2}$$
Week 6: Laplace Transforms
16. Laplace transforms, inverse Laplace transforms
17. Properties of Laplace transforms, convolution, applications
18. MID-SEMESTER TEST
Pierre-Simon Laplace (1749–1827)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
Laplace Transform
Definition: A real function f(x) is of exponential order $c \ge 0$ if there exist constants $M > 0$ and $K > 0$ such that
$$|f(x)| \le M\,e^{cx},\qquad x \ge K$$
If f(x) is of exponential order, we define a constant $c_0$ by
$$c_0 = \inf\{c : f(x) \text{ is of exponential order } c\}$$
Suppose that f(x) is of exponential order and $f(x) = 0$ for $x < 0$. In these circumstances it is natural to consider the Laplace transform of f, defined for $\mathrm{Re}\,p$ greater than the infimum $c_0$ by
$$\mathcal{L}\{f\} := L(p) = \int_0^\infty f(x)\,e^{-px}\,dx,\qquad \mathrm{Re}\,p > c_0$$
If f(x) is of exponential order and piecewise continuous with only finite (not infinite) jump discontinuities, then its Laplace transform exists for all $\mathrm{Re}\,p > c_0$. This is a sufficient condition for the existence of the Laplace transform and not a necessary condition.
Typically, L(p) is analytic in $\mathrm{Re}\,p > c_0$ with some singularity on the line $\mathrm{Re}\,p = c_0$.
[Figure: the half plane $\mathrm{Re}\,p > c_0$ in which L(p) is analytic.]
Inverse Laplace Transform
To obtain an inversion formula for f in terms of L we begin with Fourier's integral theorem for g, noting that by assumption $g(x) = 0$ for $x < 0$
$$2\pi g(x) = \int_{-\infty}^\infty e^{ikx}\Big[\int_0^\infty g(y)\,e^{-iky}\,dy\Big]\,dk$$
Changing variables from k to $p = ik + c$ (i.e. $ik = p - c$) gives
$$2\pi g(x) = -i\int_{c-i\infty}^{c+i\infty} e^{(p-c)x}\Big[\int_0^\infty g(y)\,e^{-(p-c)y}\,dy\Big]\,dp$$
Setting $f(x) = e^{cx} g(x)$ and using exponential order gives
$$f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{px}\Big[\int_0^\infty f(y)\,e^{-py}\,dy\Big]\,dp$$
[Figure: the vertical inversion contour $\mathrm{Re}\,p = c$ to the right of the singularities of L(p).]
That is, from the definition,
$$f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} L(p)\,e^{px}\,dp,\qquad c > c_0$$
This inverse Laplace transform holds for any real $c > c_0$ (by deforming the contour through the region of analyticity). The contour passes to the right of any singularities on $\mathrm{Re}\,p = c_0$.
The complex integral for the inverse Laplace transform is often called the Bromwich integral formula.
Example Laplace Transform
Example: Consider
$$f(x) = e^x,\qquad x > 0$$
In this example $c_0 = 1$. The Laplace transform is given by
$$L(p) = \int_0^\infty e^x e^{-px}\,dx = \Big[\frac{e^{(1-p)x}}{1-p}\Big]_0^\infty = \frac{1}{p-1},\qquad \mathrm{Re}\,p > 1$$
Note that in fact L(p) is defined and analytic for all p except at $p = 1$, where it has a simple pole. By the Laplace inversion formula
$$e^x = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{e^{px}}{p-1}\,dp,\qquad x > 0,\ c > 1$$
Case (i) $x < 0$: To see that $f(x) = 0$ when $x < 0$ is automatically incorporated in the inversion formula, we close the contour $\mathrm{Re}\,p = c$ in the right half-plane and consider the integral of $L(p)e^{px}$ around this contour C. Since the integrand is analytic inside C, we have by Cauchy
$$0 = \oint_C L(p)\,e^{px}\,dp = \int_{c-iR'}^{c+iR'} L(p)\,e^{-p|x|}\,dp + \int_{C_R} L(p)\,e^{-p|x|}\,dp,\qquad x < 0$$
where $R' = \sqrt{R^2 - c^2}$. The integral around $C_R$ vanishes as $R \to \infty$ by Jordan's lemma, so
$$\int_{c-i\infty}^{c+i\infty} L(p)\,e^{px}\,dp = 0,\qquad x < 0$$
[Figure: the contour closed to the right of the line $\mathrm{Re}\,p = c$; the pole at $p = 1$ lies to the left.]
Case (ii) $x > 0$: The inversion integral is obtained by closing the contour on the left. Jordan's lemma applies in the second and third quadrants. The integral around the left segment of the circle vanishes when $x > 0$ by an extension of Jordan's Lemma (below), so
$$f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} L(p)\,e^{px}\,dp = \sum_k \mathrm{Res}(z_k)$$
The sum is over the residues of all the singularities $z_k$ of $L(p)\,e^{px}$ which lie to the left of $\mathrm{Re}\,p = c$, including any singularities on $\mathrm{Re}\,p = c_0$. For this example we note that the integrand has a simple pole at $p = 1 < c$ with residue
$$\mathrm{Res}(1) = \Big[(p-1)\,\frac{e^{px}}{p-1}\Big]_{p=1} = e^x$$
[Figure: the contour closed to the left of $\mathrm{Re}\,p = c$, enclosing the pole at $p = 1$.]
Extended Jordan's lemma. Suppose that $x > 0$ and $|L(p)| \le M_R \to 0$ as $R \to \infty$ as in Jordan's lemma. We estimate the contribution from $\gamma$, the part of $C_R$ in the first quadrant, with $p = Re^{i\theta}$. Writing $\phi = \pi/2 - \theta$, on $\gamma$ we have $0 \le \phi \le \mathrm{Arcsin}(c/R)$, so that $R\cos\theta = R\sin\phi \le c$, and
$$\Big|\int_\gamma L(p)\,e^{px}\,dp\Big| \le \int_\gamma |L(p)|\,|e^{px}|\,|dp|
\le M_R\int e^{xR\cos\theta}\,R\,d\theta = RM_R\int_0^{\mathrm{Arcsin}(c/R)} e^{xR\sin\phi}\,d\phi$$
$$\le RM_R\,e^{xR(c/R)}\,\mathrm{Arcsin}\Big(\frac{c}{R}\Big)
= cM_R\,e^{cx}\,\frac{\mathrm{Arcsin}(c/R)}{c/R} \to 0,\qquad R \to \infty$$
since $\phi/\sin\phi \to 1$ as $\phi \to 0$. Similarly for the part of $C_R$ in the fourth quadrant.
Case (iii) $x = 0$: In keeping with Fourier's integral theorem, for $c > 1$ we find
$$\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{dp}{p-1}
= \frac{1}{2\pi i}\Big[\mathrm{Log}(p-1)\Big]_{c-i\infty}^{c+i\infty}
= \frac{1}{2\pi i}\lim_{R\to\infty}\Big\{\mathrm{Log}[(c-1)+iR] - \mathrm{Log}[(c-1)-iR]\Big\}
= \frac{1}{2\pi i}\Big[\frac{1}{2}\pi i - \Big(-\frac{1}{2}\pi i\Big)\Big] = \frac{1}{2}$$
Example: The Laplace transform of the delta function $\delta(x - x_0)$ for $x_0 > 0$ is
$$L(p) = \int_0^\infty \delta(x - x_0)\,e^{-px}\,dx = e^{-px_0}$$
The inversion formula gives an integral representation
$$\delta(x - x_0) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{p(x-x_0)}\,dp,\qquad c > 0$$
More Laplace Examples
Example: Evaluate the Laplace transform of $f(x) = x^n$.
Solution: Changing variables to $y = px$ we obtain
$$L(p) = \int_0^\infty x^n e^{-px}\,dx = p^{-(n+1)}\int_0^\infty y^n e^{-y}\,dy = \frac{n!}{p^{n+1}},\qquad \mathrm{Re}\,p > 0$$
where we used
$$n! = \int_0^\infty y^n e^{-y}\,dy$$
The inversion formula is easily checked by noting that the integrand in the inversion formula, $n!\,e^{px}/p^{n+1}$, has a pole of order $n+1$ at the origin with residue
$$\mathrm{Res}(0) = \frac{1}{n!}\,\frac{d^n}{dp^n}\big(n!\,e^{px}\big)\Big|_{p=0} = x^n$$
Example: Find the Laplace transform of
$$f(x) = \exp(iwx),\qquad x > 0,\ w \text{ real}$$
Solution: The Laplace transform, which exists for $\mathrm{Re}\,p > 0$, is given by
$$L(p) = \int_0^\infty e^{-(p-iw)x}\,dx = \Big[-\frac{e^{-(p-iw)x}}{p-iw}\Big]_{x=0}^\infty = \frac{1}{p-iw}$$
So any $c > 0$ in the inversion formula suffices. Taking real and imaginary parts gives
$$\mathcal{L}\{\cos(wx)\} = \frac{p}{p^2+w^2},\qquad \mathcal{L}\{\sin(wx)\} = \frac{w}{p^2+w^2}$$
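These transforms can be cross-checked symbolically. A sketch, assuming Python with SymPy:

```python
# Symbolic check of the Laplace transforms computed above.
import sympy as sp

x, p = sp.symbols('x p', positive=True)
w = sp.symbols('w', real=True)
n = 3

print(sp.laplace_transform(x**n, x, p, noconds=True))          # 6/p**4 = n!/p**(n+1)
print(sp.laplace_transform(sp.cos(w * x), x, p, noconds=True)) # p/(p**2 + w**2)
print(sp.laplace_transform(sp.sin(w * x), x, p, noconds=True)) # w/(p**2 + w**2)
```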
Laplace Transform Properties I
Linearity. If a and b are constants
$$\mathcal{L}\{a f(x) + b g(x)\} = a\,\mathcal{L}\{f(x)\} + b\,\mathcal{L}\{g(x)\}$$
Shift. The shift property for Laplace transforms is
$$\mathcal{L}\{f(x)e^{-ax}\} = \int_0^\infty f(x)\,e^{-ax}e^{-px}\,dx = L(p+a)$$
Derivative. If we integrate by parts
$$\mathcal{L}\{f'(x)\} = \int_0^\infty f'(x)\,e^{-px}\,dx
= \big[f(x)\,e^{-px}\big]_0^\infty + p\int_0^\infty f(x)\,e^{-px}\,dx = -f(0) + p\,\mathcal{L}\{f(x)\}$$
So, apart from the additive term $-f(0)$, the operation of differentiation on f is equivalent to algebraic operations on $\mathcal{L}\{f\}$. Similarly, replacing f by $f'$, gives
$$\mathcal{L}\{f''(x)\} = -f'(0) + p\,\mathcal{L}\{f'(x)\} = -f'(0) - pf(0) + p^2\,\mathcal{L}\{f(x)\}$$
and so on. These properties are useful in solving ODEs.
Integral. Consider an indefinite integral of the form
$$F(x) = \int_0^x f(u)\,du$$
Noting that $F'(x) = f(x)$, integration by parts gives
$$\mathcal{L}\{F(x)\} = \int_0^\infty F(x)\,e^{-px}\,dx = -\frac{1}{p}\int_0^\infty F(x)\,\frac{d}{dx}\big(e^{-px}\big)\,dx$$
$$= -\frac{1}{p}\big[F(x)\,e^{-px}\big]_0^\infty + \frac{1}{p}\int_0^\infty F'(x)\,e^{-px}\,dx = \frac{1}{p}\,\mathcal{L}\{f(x)\}$$
In the last step we used $F(0) = 0$. The quantities in square brackets vanish as $x \to \infty$, at least for $\mathrm{Re}\,p$ larger than some real positive number c.
Laplace Transform Properties II
Attenuation. Let
$$f(x) = \begin{cases} g(x-a), & x \ge a \\ 0, & x < a \end{cases}$$
Then the attenuation property for Laplace transforms is
$$\mathcal{L}\{f\} = \exp(-pa)\,\mathcal{L}\{g\}$$
The above properties are used to build up tables of Laplace transforms. For example, given $\mathcal{L}\{\cos wx\} = p/(p^2+w^2)$, we deduce from the shift property that
$$\mathcal{L}\{e^{-ax}\cos wx\} = \frac{p+a}{(p+a)^2+w^2}$$
Similarly, from the derivative property, we find that
$$\mathcal{L}\{\sin wx\} = -\frac{1}{w}\,\mathcal{L}\Big\{\frac{d}{dx}\cos wx\Big\}
= -\frac{1}{w}\Big[\frac{p^2}{p^2+w^2} - 1\Big] = \frac{w}{p^2+w^2}$$
Finally, from the integral property, we find that
$$\mathcal{L}\Big\{\frac{2\sin^2(\frac{1}{2}wx)}{w}\Big\} = \mathcal{L}\Big\{\int_0^x \sin(wu)\,du\Big\}
= \frac{1}{p}\,\mathcal{L}\{\sin wx\} = \frac{w}{p(p^2+w^2)}$$
Laplace Convolution Theorem

Theorem 28 (Laplace Convolution Theorem)

    L{f ∗ g} = L{f} L{g}

where f ∗ g denotes the Laplace convolution of f and g defined by

    (f ∗ g)(x) = ∫_0^x f(u) g(x - u) du

Proof: To prove this theorem we apply the inversion integral to L{f}L{g} to obtain

    (1/2πi) ∫_{c-i∞}^{c+i∞} L{f} L{g} e^{px} dp = (1/2πi) ∫_{c-i∞}^{c+i∞} [ ∫_0^∞ f(u) e^{-pu} du ] L{g} e^{px} dp
        = ∫_0^∞ f(u) [ (1/2πi) ∫_{c-i∞}^{c+i∞} L{g} e^{p(x-u)} dp ] du

The required result then follows by noting that

    (1/2πi) ∫_{c-i∞}^{c+i∞} L{g} e^{p(x-u)} dp = { g(x - u),  x - u > 0
                                                 { 0,          x - u < 0

In other words, the right hand side above is equal to (f ∗ g)(x).

Note, the Laplace convolution is different to the Fourier convolution.
6-9
Laplace Transform Summary

    L{f} = L(p) := ∫_0^∞ f(x) e^{-px} dx
    (f ∗ g)(x) = ∫_0^x f(u) g(x - u) du

    Shift                L{f(x) e^{-ax}} = L(p + a)
    Derivative           L{f'(x)} = p L(p) - f(0)
                         L{f''(x)} = p^2 L(p) - p f(0) - f'(0)
    Integral             L{ ∫_0^x f(u) du } = (1/p) L{f(x)}
    Attenuation          L{H(x - a) g(x - a)} = exp(-ap) L{g}
    Laplace Convolution  L{(f ∗ g)(x)} = L{f} L{g}

Here H(x) is the Heaviside step function H(x) = { 1, x > 0; 0, x < 0 }.
6-10
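The entries in this summary table can be spot-checked symbolically. The following is a minimal sketch assuming SymPy is available; the sample function f(x) = e^{-x} sin 2x is an arbitrary illustrative choice, not taken from the notes.

    import sympy as sp

    x, p, a = sp.symbols('x p a', positive=True)
    f = sp.exp(-x)*sp.sin(2*x)                        # sample f(x), x > 0
    F = sp.laplace_transform(f, x, p, noconds=True)   # L{f}

    # Derivative property: L{f'} = p L{f} - f(0)
    lhs = sp.laplace_transform(sp.diff(f, x), x, p, noconds=True)
    print(sp.simplify(lhs - (p*F - f.subs(x, 0))))    # expect 0

    # Shift property: L{f e^{-ax}} = L(p + a)
    lhs = sp.laplace_transform(f*sp.exp(-a*x), x, p, noconds=True)
    print(sp.simplify(lhs - F.subs(p, p + a)))        # expect 0

Both printed expressions should reduce to 0 (possibly after further simplification), confirming the derivative and shift rows of the table.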
Laplace Transforms and ODEs

Example: Consider the second order ODE

    d^2 f/dx^2 + f = 0

with f(0) and f'(0) given. Taking Laplace transforms of both sides gives

    [ -f'(0) - p f(0) + p^2 L{f} ] + L{f} = 0

That is

    L{f} = f'(0)/(1 + p^2) + p f(0)/(1 + p^2) = f'(0) L{sin x} + f(0) L{cos x}

It follows immediately that f(x) = f'(0) sin x + f(0) cos x.

Example: Consider the first order ODE

    df/dx + f = x^2 e^{-x}

with f(0) given. Taking Laplace transforms gives

    [ -f(0) + p L{f} ] + L{f} = L{x^2 e^{-x}} = 2/(1 + p)^3

It follows that

    L{f} = 2/(1 + p)^4 + f(0)/(p + 1)

from which one easily deduces, by either performing the two inversion integrals, or by consulting tables of Laplace transforms, that

    f(x) = (1/3) x^3 e^{-x} + f(0) e^{-x}
6-11
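The first-order example can be confirmed by solving the ODE directly with a computer algebra system. A minimal sketch assuming SymPy; the symbol f0 stands for the given (unspecified) initial value f(0).

    import sympy as sp

    x, f0 = sp.symbols('x f0')
    f = sp.Function('f')

    ode = sp.Eq(f(x).diff(x) + f(x), x**2*sp.exp(-x))
    sol = sp.dsolve(ode, f(x), ics={f(0): f0})
    # compare with the solution quoted above
    print(sp.simplify(sol.rhs - (x**3*sp.exp(-x)/3 + f0*sp.exp(-x))))   # expect 0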
Forced Harmonic Oscillator

Example: Consider the forced harmonic oscillator, with displacement x(t) a function of time t, in the presence of a forcing term f(t). The differential equation governing the motion is

    d^2 x/dt^2 + w^2 x = f(t)

Assume f is given and let

    F(p) = ∫_0^∞ f(t) e^{-pt} dt,   X(p) = ∫_0^∞ x(t) e^{-pt} dt

Taking Laplace transforms

    [ -v_0 - p x_0 + p^2 X(p) ] + w^2 X(p) = F(p)

where x_0 = x(0) and v_0 = x'(0) are the initial displacement and velocity. Solving gives

    X(p) = p x_0/(p^2 + w^2) + v_0/(p^2 + w^2) + F(p)/(p^2 + w^2)
         = x_0 L{cos wt} + (v_0/w) L{sin wt} + L{f(t)} L{ sin(wt)/w }

Using the convolution theorem then gives

    x(t) = x_0 cos wt + (v_0/w) sin wt + ∫_0^t f(u) [sin(w(t - u))/w] du
6-12
Laplace Transform and Heat Equation

Example: As an example of a PDE, consider the heat equation

    ∂T/∂t = a^2 ∂^2 T/∂x^2,   a > 0,  0 ≤ x < ∞

T(x,t) is the temperature at position x and time t of a semi-infinite conducting rod with initial and boundary conditions

    T(x,0) = 0, x ≥ 0;   T(0,t) = T_0, t > 0

and

    T(x,t) → 0 as x → ∞,  t > 0

Taking the Laplace transform with respect to t gives

    p L(x,p) = a^2 ∂^2 L/∂x^2,   L(x,p) = ∫_0^∞ T(x,t) e^{-pt} dt

Of the two fundamental solutions

    L_±(x,p) = exp(± x p^{1/2}/a)

we reject the positive exponent solution since it violates the decay at infinity. Hence

    L(x,p) = L(0,p) exp(-x p^{1/2}/a)

where

    L(0,p) = T_0 ∫_0^∞ e^{-pt} dt = T_0 [ -e^{-pt}/p ]_0^∞ = T_0/p,  Re p > 0
6-13
Applying the Laplace inversion formula we find the solution in the form

    T(x,t) = (T_0/2πi) ∫_{c-i∞}^{c+i∞} p^{-1} exp(-x p^{1/2}/a + pt) dp,  c > 0

To evaluate the integral we close the contour to the left. Note, however, that the integrand has a simple pole and a branch point at p = 0. We are thus led to consider a contour C that detours around the branch cut on the negative real axis.

[Figure: the contour C, consisting of the line from c-iR to c+iR, the large arcs C_R, the segments AB and CD just above and below the branch cut along the negative real axis, and a small circle C_r around the origin.]

The integrand is analytic inside C so from Cauchy

    0 = ∮_C f(p) dp = ∫_{c-iR}^{c+iR} f(p) dp + ∫_{C_R} f(p) dp + ∫_{AB} f(p) dp + ∫_{CD} f(p) dp + ∫_{C_r} f(p) dp

The integral around C_R clearly vanishes as R → ∞ from the extended Jordan's lemma. Also from Limiting Contour IV we have

    lim_{r→0} ∫_{C_r} f(p) dp = -2πi Res(0) = -2πi
6-14
Since we are forced to introduce the branch cut along the negative real axis we choose, in the limit R → ∞, p = -u, p^{1/2} = i u^{1/2} with u: ∞ → 0 on AB and p = -u, p^{1/2} = -i u^{1/2} with u: 0 → ∞ on CD. Combining the above results it follows that

    T(x,t) = T_0 [ 1 - (1/2πi) ∫_0^∞ u^{-1} e^{-ut} e^{i x u^{1/2}/a} du + (1/2πi) ∫_0^∞ u^{-1} e^{-ut} e^{-i x u^{1/2}/a} du ]
           = T_0 [ 1 - (1/π) ∫_0^∞ u^{-1} e^{-ut} sin(x u^{1/2}/a) du ]

To simplify the integral we substitute u = y^2 to obtain

    I = (1/π) ∫_0^∞ u^{-1} e^{-ut} sin(x u^{1/2}/a) du = (2/π) ∫_0^∞ e^{-ty^2} [ y^{-1} sin(xy/a) ] dy
      = (2/π) ∫_0^∞ e^{-ty^2} [ ∫_0^{x/a} cos(uy) du ] dy = (2/π) ∫_0^{x/a} [ ∫_0^∞ e^{-ty^2} cos(uy) dy ] du
      = (1/π) ∫_0^{x/a} [ ∫_{-∞}^∞ e^{-ty^2 + iuy} dy ] du = (1/√(πt)) ∫_0^{x/a} e^{-u^2/4t} du
      = (2/√π) ∫_0^{x/(2a√t)} e^{-z^2} dz = erf(x/(2a√t))

where the error function erf(x) is defined by

    erf(x) = (2/√π) ∫_0^x e^{-u^2} du

Substituting back we obtain the final result

    T(x,t) = T_0 [ 1 - erf(x/(2a√t)) ]
6-15
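The integral identity used in the last step is easy to sanity-check numerically. A short sketch assuming SciPy is available; the values of x, t and a below are arbitrary test values.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erf

    x, t, a = 1.3, 0.7, 0.9
    # (1/pi) * integral of u^{-1} e^{-ut} sin(x sqrt(u)/a) du over (0, inf)
    I, _ = quad(lambda u: np.exp(-u*t)*np.sin(x*np.sqrt(u)/a)/u, 0, np.inf, limit=200)
    print(I/np.pi, erf(x/(2*a*np.sqrt(t))))   # the two numbers should agree to several digits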
A Differential-Difference Equation

Example: Consider the differential-difference equation

    d^2 y/dx^2 - y(x - 1) = x,  x ≥ 0

subject to the conditions y'(0) = 0 and y(x) = 0 for x ≤ 0. Since y(x) = 0 for x ≤ 0,

    L{y(x-1)} = ∫_0^∞ y(x-1) e^{-px} dx = ∫_1^∞ y(x-1) e^{-px} dx = ∫_0^∞ y(u) e^{-p(u+1)} du = e^{-p} L{y(x)}

Taking Laplace transforms gives

    (p^2 - e^{-p}) L{y(x)} = p^{-2}

Using the fact that

    1/(p^2 - e^{-p}) = p^{-2}/(1 - p^{-2} e^{-p}) = Σ_{n=0}^∞ e^{-pn} p^{-(2n+2)}

and the Laplace inversion integral we find for c > 0

    y(x) = (1/2πi) ∫_{c-i∞}^{c+i∞} p^{-2} (p^2 - e^{-p})^{-1} e^{px} dp = Σ_{n=0}^∞ (1/2πi) ∫_{c-i∞}^{c+i∞} e^{p(x-n)} p^{-(2n+4)} dp

The last integral vanishes when x - n < 0. Closing the contour to the left and noting that the integrand has a pole of order 2n+4 at the origin, shows that for x - n > 0

    (1/2πi) ∫_{c-i∞}^{c+i∞} e^{p(x-n)} p^{-(2n+4)} dp = [1/(2n+3)!] [ d^{2n+3}/dp^{2n+3} ( e^{p(x-n)} ) ]_{p=0} = (x-n)^{2n+3}/(2n+3)!
6-16
It then follows that

    y(x) = Σ_{x-n>0} (x-n)^{2n+3}/(2n+3)! = Σ_{n=0}^{⌊x⌋} (x-n)^{2n+3}/(2n+3)!

where ⌊x⌋ denotes the integral part of x, i.e. the largest integer which does not exceed x. Written out explicitly, the solution is

    y(x) = { x^3/3!,                                   0 ≤ x < 1  (⌊x⌋ = 0)
           { x^3/3! + (x-1)^5/5!,                      1 ≤ x < 2  (⌊x⌋ = 1)
           { x^3/3! + (x-1)^5/5! + (x-2)^7/7!,         2 ≤ x < 3  (⌊x⌋ = 2)
           { ...

which in fact can be obtained directly by solving the equation successively in the above sequence of intervals.
6-17
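The piecewise series solution can be verified numerically by checking the differential-difference equation with finite differences. A minimal sketch assuming NumPy; the test points are arbitrary (chosen away from the integer breakpoints).

    import numpy as np
    from math import factorial, floor

    def y(x):
        if x <= 0:
            return 0.0
        return sum((x - n)**(2*n + 3)/factorial(2*n + 3) for n in range(floor(x) + 1))

    h = 1e-4
    for x in (0.5, 1.5, 2.7):
        ypp = (y(x + h) - 2*y(x) + y(x - h))/h**2        # central difference for y''
        print(x, ypp - y(x - 1) - x)                      # residual of y'' - y(x-1) = x, near 0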
Week 7: Mellin Transforms
19. Mellin transforms, inverse Mellin transforms
20. Gamma and zeta functions
21. Applications of Mellin transforms
Robert Hjalmar Mellin (1854-1933), Georg Friedrich Bernhard Riemann (1826-1866)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
7
Mellin and Inverse Mellin Transform

In deriving the inverse Laplace transform we assumed that f(x) = 0 for x < 0 and that f(x) is of exponential order c for x > 0. If we drop the restriction f(x) = 0 for x < 0 and define the two-sided Laplace transform by

    L(p) = ∫_{-∞}^∞ f(x) e^{-px} dx

repetition of the inversion argument shows that

    f(x) = (1/2πi) ∫_{c-i∞}^{c+i∞} L(p) e^{px} dp

For L(p) to exist we require f(x) to approach infinity slower than exp(c|x|) as |x| → ∞ (ie. f(x) exp(-p|x|) → 0 as |x| → ∞). This in general restricts Re p to a vertical strip in the complex p-plane. It follows that L(p) may have singularities to the right and left of the line Re p = c. To express these integrals as Mellin transforms we make the exponential substitution y = exp(-x)

    L(p) = ∫_0^∞ f(-log y) y^{p-1} dy,   f(-log x) = (1/2πi) ∫_{c-i∞}^{c+i∞} L(p) x^{-p} dp

Definition: In terms of F(x) ≡ f(-log x) we define the Mellin transform by

    M(p) = ∫_0^∞ F(x) x^{p-1} dx

so that the inverse Mellin transform is

    F(x) = (1/2πi) ∫_{c-i∞}^{c+i∞} M(p) x^{-p} dp
7-1
Example Mellin Transform

Example: Consider F(x) = exp(-ax). Setting x = y/a, the Mellin transform is

    M(p) = ∫_0^∞ e^{-ax} x^{p-1} dx = a^{-p} ∫_0^∞ y^{p-1} e^{-y} dy = a^{-p} Γ(p),  Re p > 0

where Γ(p) is the gamma function. So Mellin inversion gives

    e^{-ax} = (1/2πi) ∫_{c-i∞}^{c+i∞} Γ(p) a^{-p} x^{-p} dp

Mellin transforms are very useful in generating asymptotic series. They can also be used to sum convergent series using the fact that

    Σ_{n=1}^∞ F(n) = (1/2πi) ∫_{c-i∞}^{c+i∞} M(p) ζ(p) dp

where M(p) is the Mellin transform of F(x) and

    ζ(p) = Σ_{n=1}^∞ n^{-p}

is the Riemann zeta function.
7-2
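The transform pair M{e^{-ax}} = a^{-p} Γ(p) is easy to check by quadrature. A short sketch assuming SciPy; a and p are arbitrary test values.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    a, p = 2.0, 3.5
    M, _ = quad(lambda x: np.exp(-a*x)*x**(p - 1), 0, np.inf)
    print(M, a**(-p)*gamma(p))     # the two values should agree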
Gamma Function

Definition: The gamma function is defined by

    Γ(p) = ∫_0^∞ x^{p-1} e^{-x} dx,  Re p > 0

Integrating by parts gives

    Γ(p) = -∫_0^∞ x^{p-1} d/dx(e^{-x}) dx = -[ x^{p-1} e^{-x} ]_0^∞ + (p-1) ∫_0^∞ x^{p-2} e^{-x} dx
         = (p-1) Γ(p-1),  provided Re p > 1

From this integral expression, Γ(p) has no branch points and is analytic for Re p > 0.

This recursion determines Γ(p) for p ∈ N. Iteration gives

    Γ(p) = (p-1)(p-2)(p-3) ... (3)(2)(1) = (p-1)!

since Γ(1) = 1. Similarly, if p is half an odd integer, iteration shows that Γ(p) is a multiple of

    Γ(1/2) = ∫_0^∞ x^{-1/2} e^{-x} dx = 2 ∫_0^∞ e^{-y^2} dy = √π,   x = y^2

Iterating the reverse recursion relation Γ(p) = Γ(p+1)/p is used to analytically continue Γ(p) to Re p ≤ 0

    Γ(p) = Γ(p+n+1)/[ p(p+1)(p+2) ... (p+n) ],  p ≠ 0, -1, -2, -3, ...

It follows that Γ(p) is analytic everywhere in C except for simple poles at p = 0, -1, -2, ....
7-3
Graph of the Gamma Function

[Figure: graph of Γ(p) for real p on -4 ≤ p ≤ 4, showing the simple poles at p = 0, -1, -2, ....]
7-4
Beta Function

Definition: A related function is the beta function B(r,s) defined by

    B(r,s) = ∫_0^1 u^{r-1} (1-u)^{s-1} du

Consider the product

    Γ(r) Γ(s) = ∫_0^∞ x^{r-1} e^{-x} dx  ∫_0^∞ y^{s-1} e^{-y} dy

as a double integral over the first quadrant in the x-y plane. Substituting x + y = u we find

    Γ(r) Γ(s) = ∫_0^∞ e^{-u} [ ∫_0^u x^{r-1} (u-x)^{s-1} dx ] du
              = ∫_0^∞ u^{r+s-1} e^{-u} du  ∫_0^1 t^{r-1} (1-t)^{s-1} dt
              = Γ(r+s) B(r,s)

In the second step we substituted x = ut, dx = u dt.

In particular, since Γ(1) = 1

    Γ(p) Γ(1-p) = B(p, 1-p) = ∫_0^1 u^{p-1} (1-u)^{-p} du

Or, after substituting u = x(1+x)^{-1},

    Γ(p) Γ(1-p) = ∫_0^∞ x^{p-1} (1+x)^{-1} dx

This integral can be evaluated by considering the integral of z^{p-1}(1+z)^{-1} (0 < Re p < 1) around the contour C.

[Figure: keyhole contour C around the branch cut along the positive real axis, enclosing the pole at z = -1; on the upper edge z^{p-1} = x^{p-1}, on the lower edge z^{p-1} = x^{p-1} e^{2πi(p-1)}.]
7-5
Reflection Formula

Using residues and theorems on limiting contours I/III (Uniformity on an Arc) we find

    ∮_C z^{p-1} (1+z)^{-1} dz = 2πi Res(-1) = 2πi e^{iπ(p-1)} = -2πi e^{iπp}
        = ∫_0^∞ x^{p-1} (1+x)^{-1} dx + ∫_∞^0 x^{p-1} e^{2πip} (1+x)^{-1} dx
        = (1 - e^{2πip}) ∫_0^∞ x^{p-1} (1+x)^{-1} dx

That is, on rearranging,

    ∫_0^∞ x^{p-1} (1+x)^{-1} dx = π/sin πp

The final result is called the reflection formula

    Γ(p) Γ(1-p) = π/sin πp

Although we derived the reflection formula for 0 < Re p < 1, it extends straightforwardly to all non-integer p using the recursion relation for the Γ function.

Using the fact that π/sin(πz) has simple poles at z = n, n = 0, ±1, ±2, ... with residues (-1)^n, the reflection formula shows that Γ(p) has simple poles at p = -n, n = 0, 1, 2, ... with residues

    (-1)^n/Γ(1+n) = (-1)^n/n!
7-6
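A quick numerical spot-check of the reflection formula, assuming SciPy; the p values are arbitrary non-integer test points.

    import numpy as np
    from scipy.special import gamma

    for p in (0.3, 1.7, -2.4):
        print(gamma(p)*gamma(1 - p), np.pi/np.sin(np.pi*p))   # the pairs should agree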
Zeta Function

The Riemann zeta function is defined by

    ζ(p) = Σ_{n=1}^∞ n^{-p},  Re p > 1

and is analytically continued to Re p > 0 through the alternating Riemann zeta function

    ζ(p) = [1/(1 - 2^{1-p})] Σ_{n=1}^∞ (-1)^{n-1} n^{-p},  Re p > 0

The Riemann zeta function is then analytically continued to Re p ≤ 0 by a reflection formula in the form of the Riemann relation

    2 ζ(p) Γ(p) cos(πp/2) = (2π)^p ζ(1-p)

This formula is not obvious and requires an integral representation of the zeta function. It can then be shown that ζ(p) has no branch points and is an analytic function in the complex p-plane except for a simple pole at p = 1 with residue equal to unity.

Some particular values of ζ(p) are

    ζ(-1) = -1/12,  ζ(0) = -1/2,  ζ(1) = ∞,  ζ(2) = π^2/6,  ζ(4) = π^4/90

Riemann Hypothesis. One of the most famous unproved conjectures is the Riemann hypothesis which states that all the (non-trivial) zeros of ζ(p) lie on the line Re p = 1/2.
7-7
Gamma Function Summary

    Γ(p) = ∫_0^∞ x^{p-1} e^{-x} dx,  Re p > 0

    Special Values        Γ(1) = 1,  Γ(1/2) = √π
    Recurrence Relation   Γ(p) = (p-1) Γ(p-1)
    Reflection Formula    Γ(p) Γ(1-p) = π/sin(πp)
    Singularities         Simple poles at p = 0, -1, -2, ...
    Residues              Res(-n) = (-1)^n/n!,  n = 0, 1, 2, ...
7-8
Zeta Function Summary

Riemann/Euler Formulas

    ζ(p) = Σ_{n=1}^∞ n^{-p} = Π_{n prime} 1/(1 - n^{-p}),  Re p > 1

    Special values     ζ(-1) = -1/12,  ζ(0) = -1/2,  ζ(2) = π^2/6,  ζ(4) = π^4/90
    Riemann Relation   2 ζ(p) Γ(p) cos(πp/2) = (2π)^p ζ(1-p)
    Singularity        Simple pole at p = 1,  Res(1) = 1
7-9
Mellin Transform Example

Example: Verify the inverse Mellin transform

    e^{-ax} = (1/2πi) ∫_{c-i∞}^{c+i∞} Γ(p) a^{-p} x^{-p} dp,  c > 0

Solution: Closing the contour to the left and recalling that the gamma function has simple poles at p = -n, n = 0, 1, 2, ... with residues (-1)^n/n!, we obtain, using the residue theorem,

    (1/2πi) ∫_{c-i∞}^{c+i∞} Γ(p) (ax)^{-p} dp = Σ_{n=0}^∞ Res(p = -n) = Σ_{n=0}^∞ [(-1)^n/n!] (ax)^n

which is the series for exp(-ax).

We have thus established the Mellin transform pair

    M{e^{-ax}} = a^{-p} Γ(p),  Re p > 0;    M^{-1}{a^{-p} Γ(p)} = e^{-ax},  x > 0
7-10
A Sum by Mellin Transforms

Example: Sum the series Σ_{n=1}^∞ F(n) = (1/2πi) ∫_{c-i∞}^{c+i∞} M(p) ζ(p) dp where F(x) = cos(ax)/x^2.

Solution: Cauchy's theorem around the first quadrant gives the Mellin transform of F(x)

    M(p) = ∫_0^∞ x^{p-3} cos(ax) dx = -a^{2-p} Γ(p-2) cos(πp/2),  2 < Re p < 3

Using the Mellin transform summation formula with c ∈ (2,3) gives

    Σ_{n=1}^∞ cos(an)/n^2 = -(1/2πi) ∫_{c-i∞}^{c+i∞} a^{2-p} Γ(p-2) cos(πp/2) ζ(p) dp
        = -(1/4πi) ∫_{c-i∞}^{c+i∞} a^{2-p} (2π)^p ζ(1-p) Γ(p-2)/Γ(p) dp
        = -(a^2/4πi) ∫_{c-i∞}^{c+i∞} (2π/a)^p ζ(1-p)/[(p-1)(p-2)] dp

We used the Riemann relation and the recurrence relation. Closing the contour to the left and noting that the only singularities are simple poles at p = 0, 1, 2 we find

    Σ_{n=1}^∞ cos(an)/n^2 = -(a^2/2) [ Res(0) + Res(1) + Res(2) ]
        = -(a^2/2) [ -1/((-1)(-2)) + 2π ζ(0)/(-a) + ζ(-1) (2π/a)^2 ]
        = a^2/4 - πa/2 + π^2/6

where we used some properties and special values of the zeta function.
7-11
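The closed form can be checked against a direct partial sum. A minimal sketch assuming NumPy; the value of a is an arbitrary point in (0, 2π), where the closed form is valid.

    import numpy as np

    a = 1.1
    n = np.arange(1, 200000)
    lhs = np.sum(np.cos(a*n)/n**2)
    rhs = a**2/4 - np.pi*a/2 + np.pi**2/6
    print(lhs, rhs)      # the two numbers should agree to several digits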
Week 8: Asymptotic Expansions
22. Landau symbols, divergent series
23. Asymptotic series
24. Watsons lemma
Edmund George Hermann Landau (1877-1938), George Neville Watson (1886-1965)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
8
Asymptotic Series

Example: Consider the function

    f(x) = ∫_0^∞ e^{-t}/(1 + xt) dt,  x ≥ 0

Expanding (1 + xt)^{-1} in a geometric series and integrating term-by-term gives

    f(x) = ∫_0^∞ e^{-t} Σ_{n=0}^∞ (-xt)^n dt = Σ_{n=0}^∞ (-x)^n ∫_0^∞ t^n e^{-t} dt = Σ_{n=0}^∞ n! (-x)^n

This series diverges for all x ≠ 0 even though the integral exists for all x > 0. Clearly, the interchange of limits is not valid.

This example was known to Euler (circa 1754) and is an example of an asymptotic series. To make sense of this let us use the identity

    1/(1 + xt) = Σ_{n=0}^N (-xt)^n + (-xt)^{N+1}/(1 + xt)

obtained by rearranging a finite geometric series. The interchange of summation and integration is now valid giving

    f(x) = Σ_{n=0}^N n! (-x)^n + R_N(x)

with the remainder

    R_N(x) = ∫_0^∞ [(-xt)^{N+1}/(1 + xt)] e^{-t} dt
8-1
Estimating the Remainder

For x > 0, the remainder is bounded by

    |R_N(x)| ≤ ∫_0^∞ (xt)^{N+1} e^{-t} dt = (N+1)! x^{N+1}

where we used (1 + xt)^{-1} ≤ 1 for xt > 0.

It follows from this bound that, when x is small (and positive), the remainder R_N(x) at first decreases in magnitude as N increases. For example if x = 0.01, we have

    |R_2(0.01)| ≤ 6 × 10^{-6}   and   |R_3(0.01)| ≤ 24 × 10^{-8}

Eventually, however, for fixed x, the (N+1)! takes over and R_N(x) diverges as N → ∞. Nevertheless, it follows that adding up the first few terms of the divergent series gives an accurate numerical estimate of f(x) at least for small x.

When x = 0.01, summing the first four terms of the series corresponding to N = 3, we can compute f(0.01) to an accuracy of less than 24 × 10^{-8}! More explicitly, when x = 0.01,

    | f(x) - Σ_{n=0}^3 n! (-x)^n | ≤ 24 × 10^{-8}

These are the characteristic properties of asymptotic series.
8-2
Landau Symbols

Definition: (i) If lim_{x→∞} |f(x)|/|φ(x)| = 1, then we write

    f(x) ∼ φ(x),  x → ∞

and say f(x) is asymptotic to φ(x).

(ii) If lim_{x→∞} |f(x)|/|φ(x)| = 0, then we write

    f(x) = o(φ(x)),  x → ∞

and say f(x) is of order less than φ(x), or f(x) is little oh of φ(x), or f(x) is negligible compared to φ(x).

(iii) If |f(x)| ≤ K|φ(x)| as x → ∞ for some constant K, ie. |f(x)|/|φ(x)| is bounded, then we write

    f(x) = O(φ(x)),  x → ∞

and say f(x) is of order not exceeding φ(x), or f(x) is big oh of φ(x), or f(x) is dominated by φ(x).

Usually, φ(x) is a simple function such as φ(x) = x^p for some p ∈ R. These definitions make sense when x is replaced by z ∈ C.

Note, there is nothing special about the point infinity in these definitions. The definitions apply for x going to an arbitrary point, eg. x^2 = o(x) as x → 0, or sin x = o(1) as x → 0.
8-3
Poincare Asymptotic Series

Definition: (Poincare Asymptotic Series) Let f(x) be a real function and Σ_{n=0}^∞ a_n x^n be a formal power series (convergent or divergent). Write

    f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_N x^N + R_N(x)

where R_N(x) is the remainder. If for a fixed value of N

    R_N(x) = O(x^{N+1}),  x → 0

then Σ_{n=0}^∞ a_n x^n is said to be an asymptotic series of f(x) and we write

    f(x) ∼ a_0 + a_1 x + a_2 x^2 + ...

Example: In the initial example

    |R_N(x)| = | ∫_0^∞ e^{-t}/(1 + xt) dt - Σ_{n=0}^N n! (-x)^n | ≤ (N+1)! x^{N+1} = O(x^{N+1}),  x → 0

As a Poincare asymptotic series we thus have

    ∫_0^∞ e^{-t} (1 + xt)^{-1} dt ∼ Σ_{n=0}^∞ n! (-x)^n   as x → 0

Equivalent conditions for an asymptotic series are

    R_N(x) = o(x^N) as x → 0,   and   lim_{x→0} x^{-N} |R_N(x)| = 0
8-4
Generalized Asymptotic Series

Frequently (divergent) series occur which are not power series. To allow for this there is a generalized definition of an asymptotic series.

Definition: (Generalized Asymptotic Series) Let f(x) be a real function and {φ_0, φ_1, ...} a sequence of functions satisfying φ_{n+1} = o(φ_n) as x → 0. If there exist constants {c_n} such that

    f(x) = Σ_{n=0}^N c_n φ_n + R_N,   R_N(x) = O(φ_{N+1})

then we say that Σ_{n=0}^∞ c_n φ_n is an asymptotic series (or asymptotic expansion) for f(x) and write f(x) ∼ Σ_{n=0}^∞ c_n φ_n.

The Poincare asymptotic series corresponds to the choice φ_n = x^n.

Note that we can also have asymptotic series for large x, i.e.

    f(x) ∼ Σ_{n=0}^∞ a_n x^{-n} as |x| → ∞,   if  lim_{|x|→∞} x^N [ f(x) - Σ_{n=0}^N a_n x^{-n} ] = 0

or equivalently

    f(x) - Σ_{n=0}^N a_n x^{-n} = O(x^{-N-1}) as |x| → ∞   or   f(x) - Σ_{n=0}^N a_n x^{-n} = o(x^{-N}) as |x| → ∞

The above definitions also make sense when x is replaced by the complex variable z.
8-5
Truncating Asymptotic Series

The sum of the first few terms of an asymptotic series can often give an accurate numerical estimate for a function even though such series are typically divergent. In practice the best estimate for a function, at a particular value of x, is obtained by truncating its asymptotic series at its smallest term, that is, by computing the partial sum S_N(x) = Σ_{n=0}^N a_n x^n for fixed x with N chosen so that a_N x^N is the least term. The situation is shown on the left for a series of positive terms and on the right for a series whose terms alternate in sign. The arrow points to the least term.

[Figure: partial sums S_N(x_0) versus N compared with f(x_0); on the left all terms a_n ≥ 0, on the right the terms alternate in sign, a_n = (-1)^n b_n with b_n ≥ 0, and the arrow marks the least term.]

Alternating series have the additional nice feature that a better numerical estimate is usually obtained by taking the arithmetic mean after truncating the series at the least (N-th) term:

    (1/2) [ S_{N-1}(x_0) + S_N(x_0) ]
8-6
A Numerical Example

A numerical example is given in the table below for

    f(x) = ∫_0^∞ e^{-t} (1 + xt)^{-1} dt ∼ Σ_{n=0}^∞ n! (-x)^n

for the value x_0 = 0.25. The values in the last column are to be compared with the exact value (obtained by numerical integration) of f(0.25) = 0.82533.

    N    N!(-x_0)^N    S_N(x_0)    (1/2)[S_{N-1}(x_0)+S_N(x_0)]
    0     1.0000       1.0000        -
    1    -0.2500       0.7500       0.8750
    2     0.1250       0.8750       0.8125
    3    -0.09375      0.78125      0.8281
    4     0.09375      0.87500      0.8281
    5    -0.11719      0.75781      0.8164
    6     0.17578      0.93359      0.8457

Truncating at the least term (N = 3) gives the best estimate f(0.25) ≈ 0.828. The error is much less than the least term, which is typical of divergent asymptotic series.

Convergent (power) series are always asymptotic series but not vice versa. The distinction is that one obtains the best estimate of a function from its divergent asymptotic series by truncating at the least term, whereas from a convergent series the function can be approximated arbitrarily closely by taking a large number of terms. One may need a huge number of terms in a convergent series to obtain an accurate estimate. By contrast, for small (or large) enough x the first few terms in a divergent asymptotic series provide the best approximation.
8-7
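The table above is easy to regenerate. A short sketch assuming SciPy for the "exact" value by quadrature; it reproduces the columns up to rounding.

    import numpy as np
    from math import factorial
    from scipy.integrate import quad

    x0 = 0.25
    exact, _ = quad(lambda t: np.exp(-t)/(1 + x0*t), 0, np.inf)

    S_prev, S = None, 0.0
    for N in range(7):
        term = factorial(N)*(-x0)**N
        S += term
        mean = '-' if S_prev is None else f'{(S_prev + S)/2:.4f}'
        print(N, f'{term:+.5f}', f'{S:.5f}', mean)
        S_prev = S
    print('exact value:', exact)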
Generating Asymptotic Series

Example: Find an asymptotic expansion for the error function

    erf(x) = (2/√π) ∫_0^x e^{-t^2} dt

Solution: Expand the exponential and integrate term-by-term

    erf(x) = (2/√π) ∫_0^x Σ_{n=0}^∞ (-t^2)^n/n! dt = (2/√π) Σ_{n=0}^∞ [(-1)^n/n!] ∫_0^x t^{2n} dt
           = (2/√π) Σ_{n=0}^∞ (-1)^n x^{2n+1}/[(2n+1) n!]

Unlike the previous example the interchange of summation and integration in this case is justified and leads to a convergent (asymptotic) series.

To develop an asymptotic series for erf(x) as x → ∞ we instead use integration by parts. First note that

    lim_{x→∞} erf(x) = (2/√π) ∫_0^∞ e^{-t^2} dt = 1

We can then write

    erf(x) = (2/√π) [ ∫_0^∞ e^{-t^2} dt - ∫_x^∞ e^{-t^2} dt ] = 1 - (2/√π) ∫_x^∞ [ -(1/2t) d/dt(e^{-t^2}) ] dt
           = 1 + (2/√π) { [ e^{-t^2}/(2t) ]_x^∞ + ∫_x^∞ (1/2t^2) e^{-t^2} dt }
           = 1 - e^{-x^2}/(x√π) - (1/√π) ∫_x^∞ (1/2t^3) d/dt(e^{-t^2}) dt
           = 1 - e^{-x^2}/(x√π) + e^{-x^2}/(2√π x^3) - (3/2√π) ∫_x^∞ t^{-4} e^{-t^2} dt
8-8
By successive integration by parts it can be shown by induction that

    erf(x) = 1 - [ e^{-x^2}/(x√π) ] { 1 + Σ_{n=1}^N (-1)^n (2n-1)!!/(2^n x^{2n}) } + R_N(x)

where

    R_N(x) = [(-1)^N/√π] [(2N+1)!!/2^N] ∫_x^∞ t^{-(2N+2)} e^{-t^2} dt

and the double factorial (2n+1)!! is

    (2n+1)!! = (2n+1)(2n-1)(2n-3) ... 5 · 3 · 1

The series in curly brackets diverges. Moreover, integrating by parts again

    R_N(x) = const [ e^{-x^2}/x^{2N+3} - ∫_x^∞ e^{-t^2}/t^{2N+4} dt ]

and so

    |R_N(x)| ≤ const e^{-x^2}/x^{2N+3} + const | ∫_x^∞ e^{-t^2}/t^{2N+4} dt |
             ≤ const [ e^{-x^2}/x^{2N+3} + e^{-x^2} ∫_x^∞ dt/t^{2N+4} ] = const e^{-x^2}/x^{2N+3}

ie. |R_N(x)| ≤ const |φ_{N+1}| and thus the series is asymptotic.
8-9
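The large-x expansion can be compared with the exact error function. A minimal sketch assuming SciPy; the values of x and N are arbitrary choices.

    import numpy as np
    from scipy.special import erf

    def erf_asymptotic(x, N):
        # 1 - exp(-x^2)/(x sqrt(pi)) * [1 + sum_{n=1}^N (-1)^n (2n-1)!!/(2^n x^(2n))]
        s, dfact = 1.0, 1.0
        for n in range(1, N + 1):
            dfact *= (2*n - 1)                      # running (2n-1)!!
            s += (-1)**n * dfact/(2**n * x**(2*n))
        return 1 - np.exp(-x**2)/(x*np.sqrt(np.pi))*s

    x = 2.5
    for N in (1, 2, 4, 8):
        print(N, erf_asymptotic(x, N), erf(x))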
Integration by Parts

Example: Use integration by parts to obtain an asymptotic expansion for

    F(x) = ∫_1^∞ t^{-1} e^{(1-t)x} dt,  x → ∞

Solution: Successive integration by parts gives

    F(x) = ∫_1^∞ t^{-1} e^{(1-t)x} dt = -(1/x) ∫_1^∞ t^{-1} d/dt[ e^{(1-t)x} ] dt
         = 1/x - (1/x) ∫_1^∞ t^{-2} e^{(1-t)x} dt = 1/x + (1/x^2) ∫_1^∞ t^{-2} d/dt[ e^{(1-t)x} ] dt
         = 1/x - 1/x^2 + (2!/x^2) ∫_1^∞ t^{-3} e^{(1-t)x} dt = 1/x - 1/x^2 + 2!/x^3 - 3!/x^4 + ...
         = (1/x) Σ_{n=0}^N (-1)^n n!/x^n + R_N(x) ∼ (1/x) Σ_{n=0}^∞ (-1)^n n!/x^n

which is essentially equivalent to the series generated in the previous section for the function

    f(x) = ∫_0^∞ e^{-u} (1 + xu)^{-1} du

by expanding the integrand. In fact it is easy to see that by changing variables to t = 1 + xu,

    f(x) = ∫_1^∞ t^{-1} e^{(1-t)/x} x^{-1} dt

and that by replacing x by x^{-1} above one reproduces the previous series.

Exercise: Show that R_N(x) = o(x^{-(N+1)}) as x → ∞.
8-10
Application of Mellin Transform

Example: Let us reconsider the previous example

    F(x) = ∫_1^∞ t^{-1} e^{(1-t)x} dt,  x → ∞

The Mellin transform is given by

    M(p) = ∫_0^∞ F(x) x^{p-1} dx = ∫_1^∞ t^{-1} [ ∫_0^∞ x^{p-1} e^{(1-t)x} dx ] dt

Changing variables to u = t - 1 and y = (t - 1)x then gives

    M(p) = ∫_0^∞ u^{-p} (1 + u)^{-1} du  ∫_0^∞ y^{p-1} e^{-y} dy = π Γ(p)/sin πp,  0 < Re p < 1

where we use a previous result (the integral in the Γ function reflection formula) and the definition of the Γ function. The Mellin inversion formula then gives

    F(x) = (1/2πi) ∫_{c-i∞}^{c+i∞} x^{-p} π Γ(p)/sin πp dp,  0 < c < 1

Recalling now that π/sin πp has simple poles at p = n, n = 0, ±1, ±2, ... with residues (-1)^n, closing the contour Re p = c to the right gives

    F(x) = -Σ_{n=1}^{N+1} Res(p = n) + R_N(x) = Σ_{n=1}^{N+1} x^{-n} Γ(n) (-1)^{n+1} + R_N(x)
         = x^{-1} [ 1 - 1/x + 2!/x^2 - 3!/x^3 + ... + (-1)^N N!/x^N ] + R_N(x)

where the remainder is given by integrating over the semicircle of radius R = N + 3/2

    R_N(x) = (1/2πi) ∫_{C_{N+3/2}} x^{-p} π Γ(p)/sin πp dp = o(x^{-(N+1)}),  x → ∞
8-11
Watson's Lemma

A problem that frequently arises is to find an asymptotic series for the Laplace transform of a function. This problem is essentially solved by Watson's Lemma.

Theorem 29 (Watson's Lemma) If f(p) has the integral representation

    f(p) = ∫_0^∞ x^α g(x) e^{-px} dx

g is analytic in a neighbourhood of the origin and α > -1, then

    f(p) ∼ Σ_{n=0}^∞ [ g^{(n)}(0)/n! ] Γ(α + n + 1)/p^{α+n+1}   as p → ∞

Formal Proof: To prove Watson's lemma we substitute

    g(x) = Σ_{n=0}^∞ [ g^{(n)}(0)/n! ] x^n

into the Laplace integral and integrate term-by-term to obtain, formally,

    ∫_0^∞ x^α g(x) e^{-px} dx = Σ_{n=0}^∞ [ g^{(n)}(0)/n! ] ∫_0^∞ x^{α+n} e^{-px} dx

from which the stated result follows by substituting x = t/p and noting that

    ∫_0^∞ x^{α+n} e^{-px} dx = p^{-(α+n+1)} ∫_0^∞ t^{α+n} e^{-t} dt = p^{-(α+n+1)} Γ(α + n + 1)
8-12
Proof of Watson's Lemma

The interchange of summation and integration in the formal proof of Watson's lemma cannot be justified. To correct the proof write

    g(x) = Σ_{n=0}^N [ g^{(n)}(0)/n! ] x^n + R_N(x)

It then must be shown that the remainder R_N(x) is bounded by

    |R_N(x)| ≤ c_N x^{N+1}

Interchanging the integration with the finite sum is perfectly valid. So all one needs to do is carefully estimate the integrals giving R_N(x). Complications can arise when the series has a finite radius of convergence r, in which case the bound on the remainder may only be valid for |x| < r. In this case one replaces the upper limit of integration by some A < r and shows that the resulting series remains asymptotic for arbitrary A.

These are all technical details which will not concern us here. Suffice it to say that in practice one can interchange summation and integration to obtain a series which may diverge but will (almost) certainly be asymptotic.
8-13
Watson's Lemma Example

Example: Consider

    L(p) = ∫_0^∞ e^{-px} log(1 + x^2) dx

in which

    log(1 + x^2) = x^2 (1 - x^2/2 + x^4/3 - ...)

So α = 2 in this example and from Watson's lemma

    L(p) ∼ 2!/p^3 - 4!/(2p^5) + 6!/(3p^7) - ...,  p → ∞

which is clearly a divergent asymptotic series.

Variations on Watson's lemma include integrals of the form

    ∫_0^∞ x^α g(x) e^{-px^2} dx

which can be transformed into the form given in Watson's lemma by the simple change of variables x^2 = y. Alternatively, one can substitute the series for g(x) directly.
8-14
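A numerical check of the first three terms of this Watson's-lemma series, assuming SciPy; the value of p is an arbitrary moderately large test value.

    import numpy as np
    from math import factorial
    from scipy.integrate import quad

    p = 8.0
    exact, _ = quad(lambda x: np.exp(-p*x)*np.log1p(x**2), 0, np.inf)
    asym = factorial(2)/p**3 - factorial(4)/(2*p**5) + factorial(6)/(3*p**7)
    print(exact, asym)      # should agree to several digits for large p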
Week 9: Asymptotics of Integrals I
25. Laplaces method
26. Stirlings formula
27. Extensions of Laplaces method
Jules Henri Poincare (1854-1912), Pierre-Simon Laplace (1749-1827)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
9
Laplace's Method

Find an asymptotic approximation to the integral

    I(p) = ∫_a^b g(x) exp[p h(x)] dx,  p → ∞

Here g(x) and h(x) are assumed to be integrable (and analytic). The range of integration may be finite, semi-infinite (eg. a = 0, b = ∞) or extend over the real line (a = -∞, b = ∞). This is a generalization of Watson's lemma, which is the special case h(x) = -x.

The dominant contribution to the integral is expected to come from the neighbourhood of the point x_0 where h(x) takes its maximum. So we subtract and add p h(x_0) to the exponent and note that for large p, exp(p[h(x) - h(x_0)]) is essentially zero for all x except in the immediate neighbourhood of x_0. An asymptotic approximation to I(p) follows by expanding g(x) and h(x) - h(x_0) around x_0 and retaining the leading order terms.

[Figure: exp{p[h(x) - h(x_0)]} on [a, b], sharply peaked at x_0 for large p, compared with exp[h(x) - h(x_0)].]
9-1
Assuming that h(x) has a quadratic maximum at x_0 ∈ [a, b]

    h'(x_0) = 0   and   h''(x_0) = -A < 0

we have the Taylor expansion

    h(x) = h(x_0) + (x - x_0) h'(x_0) + [(x - x_0)^2/2!] h''(x_0) + ...

It follows that

    I(p) = exp[p h(x_0)] ∫_a^b g(x) exp(p[h(x) - h(x_0)]) dx
         ≈ g(x_0) exp[p h(x_0)] ∫_a^b exp[-pA(x - x_0)^2/2] dx
         = g(x_0) exp[p h(x_0)] ∫_{(a-x_0)√(Ap)}^{(b-x_0)√(Ap)} exp(-y^2/2) dy/√(Ap)

where we made the change of variables

    y = (x - x_0) √(Ap)

Assume that x_0 is in the interior of the interval (ie. a < x_0 < b), so (a - x_0)√(Ap) → -∞ and (b - x_0)√(Ap) → +∞ as p → ∞. To leading order this gives

    ∫_{(a-x_0)√(Ap)}^{(b-x_0)√(Ap)} exp(-y^2/2) dy → ∫_{-∞}^∞ exp(-y^2/2) dy = √(2π)

Hence, for g(x_0) ≠ 0, we obtain the key result of Laplace's method

    I(p) ∼ √( 2π/(p|h''(x_0)|) ) g(x_0) exp[p h(x_0)],  p → ∞
9-2
Stirling's Formula

Example: Find an asymptotic approximation to the gamma function

    Γ(p+1) = ∫_0^∞ x^p e^{-x} dx,  p → ∞

Solution: This integral is of the Laplace form with g(x) = e^{-x} and h(x) = log x, but the Laplace formula does not apply since h(x), or rather exp[h(x)], is not integrable on [0, ∞]. To transform to a form where Laplace can be used we make the change of variables x = py

    Γ(p+1) = p^{p+1} ∫_0^∞ exp[p(log y - y)] dy

This is of the required form with h(y) = log y - y and g(y) = 1. Now h(y) has a single maximum at y = 1 where

    h(1) = -1,  h'(1) = 0,  h''(1) = -1

So Laplace gives the result

    Γ(p+1) ∼ (2π/p)^{1/2} p^{p+1} exp(-p),  p → ∞

or simplifying

    p! = Γ(p+1) ∼ √(2πp) (p/e)^p,  p → ∞

This is known as Stirling's formula or Stirling's approximation.
9-3
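Stirling's approximation is simple to compare against the exact gamma function. A minimal sketch assuming SciPy; the p values are arbitrary.

    import numpy as np
    from scipy.special import gamma

    for p in (5, 10, 50):
        stirling = np.sqrt(2*np.pi*p)*(p/np.e)**p
        print(p, gamma(p + 1), stirling, gamma(p + 1)/stirling)   # ratio tends to 1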
Laplace's Method Special Case

Example: Find an asymptotic approximation to the integral

    I(p) = ∫_{-π/4}^{π/4} log(λ + sin x) e^{p cos x} dx

Solution: Here h(x) = cos x has its maximum at x = 0

    h(0) = 1,  h'(0) = 0,  h''(0) = -1

Also g(x) = log(λ + sin x) so g(0) = log λ. Applying Laplace for λ ≠ 1 gives

    ∫_{-π/4}^{π/4} log(λ + sin x) e^{p cos x} dx ∼ √(2π/p) e^p log λ,  p → ∞

Laplace is not valid for g(x_0) = 0, that is, λ = 1, and next-order terms are needed in expanding g(x) around x_0. Modifying the previous arguments gives

    I(p) ≈ exp[p h(x_0)] ∫_a^b [ (x - x_0) g'(x_0) + (x - x_0)^2 g''(x_0)/2 + ... ] exp[-pA(x - x_0)^2/2] dx
         ≈ exp[p h(x_0)] ∫_{-∞}^∞ [ y g'(x_0)/√(Ap) + y^2 g''(x_0)/(2Ap) + ... ] exp(-y^2/2) dy/√(Ap)
         ∼ √(π/2) (p|h''(x_0)|)^{-3/2} g''(x_0) exp[p h(x_0)],  p → ∞

Here we used the results

    ∫_{-∞}^∞ y exp(-ay^2) dy = 0,   ∫_{-∞}^∞ y^2 exp(-ay^2) dy = (1/2) √π a^{-3/2},   a = 1/2

For the above integral with λ = 1, since g''(x_0) = -1, this gives

    ∫_{-π/4}^{π/4} log(1 + sin x) e^{p cos x} dx ∼ -√(π/(2p^3)) e^p,  p → ∞
9-4
Maximum at the Boundary

Numerous variations on the above theme are possible. For example if the maximum occurs at an end-point of the interval, say at x_0 = a, the Gaussian integral contributing to Laplace becomes

    ∫_0^∞ e^{-y^2/2} dy = √(π/2)

Thus when

    max_{a≤x≤b} h(x) = h(a)

and g(a) ≠ 0,

    I(p) ∼ √( π/(2p|h''(a)|) ) g(a) exp[p h(a)],  p → ∞

Note that this is precisely half of the result when the maximum occurs at an internal point x_0 ≠ a of the interval of integration.
9-5
Equal Height Maxima

If the (absolute) maximum of h(x) occurs at more than one point in the interval, say at x = x_1, x_2, we choose a point c between x_1 and x_2 and write

    I(p) = ∫_a^c g(x) exp[p h(x)] dx + ∫_c^b g(x) exp[p h(x)] dx

Assuming both (equal) maxima are quadratic and applying Laplace separately gives

    I(p) ∼ (2π/p)^{1/2} exp[p h(x_1)] { g(x_1) [|h''(x_1)|]^{-1/2} + g(x_2) [|h''(x_2)|]^{-1/2} }

since h(x_1) = h(x_2).

[Figure: a function with several equal-height maxima at the points a = x_0, x_1, ..., x_6 = b subdividing the interval.]

If there are many maxima of equal value, we just add the separate contributions. By subdividing the interval we can assume that h(x) is monotonic in the (sub)interval and that the maximum occurs at the end-point of the (sub)interval. In this case, the contributions from the four intervals [x_2, x_3], [x_3, x_4], [x_4, x_5], [x_5, x_6] (with maxima at the end points) can be combined and the final result is given as above with x_1 and x_2 replaced with x_3 and x_5.

The most general case is dealt with by considering monotonic functions on a finite interval with the maximum at an end-point. Moreover, by a shift of variable the maximum can always be taken to be at x = 0.
9-6
Non-Stationary Maximum at the Boundary

Another important case is when the maximum occurs at the boundary with h'(0) < 0 (and h(0) the maximum). In this case

    h(x) - h(0) = x h'(0) + ...

Expanding around x = 0 and retaining leading order terms (assuming g(0) ≠ 0) gives

    I(p) = ∫_0^b g(x) exp[p h(x)] dx
         = exp[p h(0)] ∫_0^b g(x) exp(p[h(x) - h(0)]) dx
         ≈ g(0) exp[p h(0)] ∫_0^b exp[p h'(0) x] dx
         = [ g(0)/(p|h'(0)|) ] exp[p h(0)] { 1 - exp[p b h'(0)] }

It follows that

    I(p) ∼ [ g(0)/(p|h'(0)|) ] exp[p h(0)]

since h'(0) = -|h'(0)| and, for h'(0) < 0, exp[p b h'(0)] is exponentially small for large p.

Other types of maxima are possible. For example a quartic maximum at x = 0 where

    h'(0) = h''(0) = h'''(0) = 0,  h''''(0) < 0

This and similar cases are left as exercises.
9-7
Week 10: Asymptotics of Integrals II
28. Method of stationary phase
29. Method of steepest descents
30. Airys integral
George Gabriel Stokes (1819-1903), Lord Kelvin (William Thomson) (1824-1907)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
10
Method of Stationary Phase

The method of stationary phase approximates integrals of the form

    I(p) = ∫_a^b g(x) exp[ip h(x)] dx,  p → ∞

where g and h are real functions and x and p are real variables. Just as Laplace's method was used in the study of Laplace transforms, the above integral is a natural generalization of the Fourier transform of g(x)

    G(p) = ∫_{-∞}^∞ g(x) exp(ipx) dx

Integrals of the above form arise naturally in the study of wave motion and the problem of approximating such integrals for large p was first considered by Stokes (1860) and developed further by Kelvin (1890).

For real p and h, the term exp(iph(x)) is purely oscillatory and for large p the oscillations occur over a narrow range of x, except in the vicinity of a stationary point where h'(x_0) = 0.

[Figure: y = h(x) and the rapidly oscillating y = cos[p h(x)] on [a, b]; the oscillations slow down near the stationary point x_0.]
10-1
For large p the rapid oscillations between positive and negative values cancel so that

    lim_{p→∞} ∫_a^b g(x) exp[iph(x)] dx = 0

This is a generalization of the celebrated Riemann-Lebesgue lemma. For large p we thus expect the dominant contribution to I(p) to come from neighbourhoods of points where h(x) is stationary. Notice, however, that unlike the situation in Laplace's method, all stationary points (not just absolute maxima) contribute to the asymptotic form for large p.

Theorem 30 (Riemann-Lebesgue) Suppose that g(x) is continuous on the closed bounded interval [a, b]. Then

    c_p = ∫_a^b g(x) e^{ipx} dx → 0,  p → ∞

This theorem asserts that the Fourier coefficients c_p in the Fourier series Σ_{p=-∞}^∞ c_p e^{ipx} for the periodic extension of g(x) vanish as p → ±∞.
10-2
Single Stationary Point

Consider the simplest case when h(x) has a single stationary point x_0 such that a < x_0 < b, h'(x_0) = 0 and h''(x_0) ≠ 0. Expanding h(x) and g(x) around x = x_0 gives

    I(p) ≈ g(x_0) exp[ip h(x_0)] ∫_a^b exp[ip(x - x_0)^2 h''(x_0)/2] dx

Making the change of variables

    x - x_0 = (2/(p|h''(x_0)|))^{1/2} r

and noting that

    h''(x_0)/|h''(x_0)| = { +1,  h''(x_0) > 0
                          { -1,  h''(x_0) < 0

we obtain, by analogy with the Laplace argument

    I(p) ∼ √( 2/(p|h''(x_0)|) ) g(x_0) exp[ip h(x_0)] ∫_{-∞}^∞ e^{±ir^2} dr,  p → ∞

where the sign ± corresponds to h''(x_0) > 0, h''(x_0) < 0 respectively.

To evaluate the integral consider the integral of exp(-z^2) around a π/4 sector in the first quadrant.

[Figure: the sector contour C consisting of the segment z = x, x: 0 → R, the arc C_R, and the ray z = r e^{iπ/4}, r: R → 0.]

From Cauchy's theorem

    0 = ∮_C exp(-z^2) dz = ∫_0^R exp(-x^2) dx + ∫_{C_R} exp(-z^2) dz + ∫_R^0 exp(-r^2 e^{iπ/2}) e^{iπ/4} dr

In the limit R → ∞ the integral around C_R vanishes. So, since exp(iπ/2) = i,

    e^{iπ/4} ∫_0^∞ exp(-ir^2) dr = ∫_0^∞ exp(-x^2) dx = √π/2
10-3
Combining this with its complex conjugate gives

    ∫_{-∞}^∞ exp(±ir^2) dr = √π exp(±iπ/4)

Substitution then gives the asymptotic approximation

    I(p) ∼ √( 2π/(p|h''(x_0)|) ) g(x_0) exp( i[p h(x_0) ± π/4] )

where the sign ± is chosen when h''(x_0) > 0, h''(x_0) < 0 respectively. The two sign possibilities can be incorporated into a single formula by noting

    e^{±iπ/4} |h''(x_0)|^{-1/2} = ( e^{∓iπ/2} |h''(x_0)| )^{-1/2} = { { -i[+h''(x_0)] }^{-1/2},  h''(x_0) > 0
                                                                   { { i[-h''(x_0)] }^{-1/2},   h''(x_0) < 0
                                                                 = [ -i h''(x_0) ]^{-1/2}

This gives the result

    I(p) ∼ √( 2π/(-ip h''(x_0)) ) g(x_0) exp[ip h(x_0)]

which is to be compared with the corresponding result for Laplace's method, namely (since |h''(x_0)| = -h''(x_0) in this case)

    ∫_a^b g(x) exp[p h(x)] dx ∼ √( -2π/(p h''(x_0)) ) g(x_0) exp[p h(x_0)]

In other words the stationary phase result can be obtained formally from the Laplace result simply by replacing p on both sides of the equation by ip.
10-4
Asymptotic Approximation of Bessel Function

Example: Use the integral representation to find an asymptotic approximation for the Bessel function

    J_n(p) = (1/π) ∫_0^π cos(nx - p sin x) dx = (1/π) Re I(p)

where

    I(p) = ∫_0^π e^{-inx} e^{ip sin x} dx

Solution: In this case g(x) = exp(-inx) and h(x) = sin x has a stationary point at x_0 = π/2 with

    h'(π/2) = 0,  h''(π/2) = -1

The method of stationary phase with the minus sign gives

    I(p) ∼ √(2π/p) exp(-niπ/2) exp{ i[ p sin(π/2) - π/4 ] }

and thus from the stationary phase result

    J_n(p) ∼ √(2/(πp)) cos[ p - (n + 1/2) π/2 ],  p → ∞
10-5
Plot of Bessel Function

[Figure: the Bessel function J_0(p) for p > 0, together with its leading asymptotic behaviour √(2/(πp)) cos(p - π/4) as given by the method of stationary phase.]
10-6
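The comparison in the plot above can be reproduced with a few lines, assuming SciPy and Matplotlib are available.

    import numpy as np
    from scipy.special import jv
    import matplotlib.pyplot as plt

    p = np.linspace(0.5, 10, 400)
    plt.plot(p, jv(0, p), label='J_0(p)')
    plt.plot(p, np.sqrt(2/(np.pi*p))*np.cos(p - np.pi/4), '--', label='stationary phase')
    plt.legend(); plt.xlabel('p'); plt.show()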
Method of Steepest Descents

To generalize the previous methods consider the path integral

    I(p) = ∫_Γ g(z) exp[p h(z)] dz

where g and h are analytic in a region containing the path Γ. If p is complex we can write p = |p| exp(ia) and absorb the factor exp(ia) into h(z), so we can assume p is real and positive. Similarly, since g is the sum of its real and imaginary parts we can assume g is real. We note that g and h may have branch points, poles etc. but that Γ can be deformed through regions of analyticity of g and h.

Writing

    h = u + iv

we see that as before contributions arising from exp(ipv) will tend to cancel out for large p. So the leading asymptotic approximation to I(p) will be obtained by deforming the contour in such a way as to maximize the contribution from exp(pu), as in Laplace's method, and to minimize the effect of the oscillatory part exp(ipv). Such a path corresponds to a path of steepest descent.

We show, since h is analytic, that u and v can never have maxima, only saddle points. Clearly u and v are conjugate harmonic functions satisfying Laplace's equation

    ∂^2 u/∂x^2 + ∂^2 u/∂y^2 = 0

It follows that if u_xx > 0, then u_yy < 0 and vice versa, and similarly for v. So there are only saddle points.
10-7
[Figure: a saddle point S of u = u(x, y), with contour lines u = const (solid) and v = const (dashed); the steepest-descent path ASB and the orthogonal path PSQ through S.]

The diagrams show a saddle point S for u = u(x, y) with contour lines corresponding to u = const (solid) and v = const (dashed). From the Cauchy-Riemann relations

    ∇u · ∇v = u_x v_x + u_y v_y = v_y(-u_y) + u_y v_y = 0

so that the contour lines for u and v are orthogonal. Also shown is a typical path Γ which is deformed to minimize the oscillatory effects of v. This is accomplished by deforming to a contour line where v = const. There are two such lines through the saddle point S, namely ASB and PSQ. The path ASB corresponds to the path of steepest descent. On this path it will be noted that u has a local maximum at S.

If we deform the contour to PSQ, the integral would diverge. Also, if we attempted to increase the contribution from u by choosing a path other than ASB, we would be unable to stay on a single v = const contour. In this case the resulting oscillations from the v term would cancel any gain in choosing such a path.
10-8
To evaluate I(p) we deform Γ to the path of steepest descent Γ_S = ASB and expand h(z) about its saddle point z_0

    h(z) = h(z_0) + (z - z_0)^2 h''(z_0)/2 + ...

where we assume that h'(z_0) = 0 and

    h''(z_0) = b e^{ia}   with b > 0

Taking z_0 = x_0 + iy_0 and transforming to the local polar coordinate system

    z = z_0 + r e^{iθ}

this becomes

    u(x,y) + iv(x,y) = u(x_0,y_0) + iv(x_0,y_0) + (1/2) b r^2 exp(2iθ + ia) + ...

or on equating real and imaginary parts,

    u(x,y) = u(x_0,y_0) + (1/2) b r^2 cos(2θ + a) + ...
    v(x,y) = v(x_0,y_0) + (1/2) b r^2 sin(2θ + a) + ...

Locally, the contour lines v = v_0 through z_0 correspond to sin(2θ + a) = 0, that is

    2θ + a = 0, ±π, 2π, ...
10-9
[Figure: local picture at the saddle point: the directions θ = -a/2 + π/2 and θ = -a/2 - π/2 (v = v_0, the steepest path Γ_S) and θ = -a/2, -a/2 + π (u = u_0).]

Clearly the path of steepest descents corresponds to θ = -a/2 + π/2, since on this path

    u(x,y) = u(x_0,y_0) - (1/2) b r^2 + ...

An equivalent way to get the angle is to note that the coefficient of r^2 must be real and negative, thus we must have b exp(2iθ + ia)/2 < 0 and since it is assumed b > 0 we must have 2iθ + ia = iπ and hence θ = π/2 - a/2. Finally, on Γ_S we have

    z = z_0 + r e^{i(π-a)/2},   dz = e^{i(π-a)/2} dr

Combining the above results gives

    I(p) = ∫_Γ g(z) exp(p h(z)) dz ≈ g(z_0) exp[p h(z_0)] ∫_{Γ_S} exp(-pbr^2/2) e^{i(π-a)/2} dr
         ≈ g(z_0) exp[p h(z_0) + i(π-a)/2] ∫_{-∞}^∞ exp(-x^2/2) dx/√(pb)
         ∼ √( 2π/(p|h''(z_0)|) ) g(z_0) exp[p h(z_0) + i(π-a)/2],  p → ∞
10-10
Here we changed variables to x = r√(pb) and used b = |h''(z_0)|. Just as in the method of stationary phase, we can write

    exp[i(π-a)/2] |h''(z_0)|^{-1/2} = [ e^{-iπ} b e^{ia} ]^{-1/2} = [ -h''(z_0) ]^{-1/2}

so that

    ∫_Γ g(z) exp[p h(z)] dz ∼ √( -2π/(p h''(z_0)) ) g(z_0) exp[p h(z_0)]

Again this is equivalent to, or more correctly an analytic continuation of, the basic result of applying Laplace's method to the corresponding real integral.
10-11
Airy's Integral

    Ai(x) = (1/π) ∫_0^∞ cos(k^3/3 + kx) dk,   convergent but not absolutely integrable

Example: Consider the ODE

    d^2 f/dx^2 + x f(x) = 0

The two linearly independent solutions to this equation are the Airy functions Ai(x) and Bi(x). Airy's equation is actually f'' - xf = 0. Consider the Fourier transform of f

    F{f(x)} = F(k) = (1/√(2π)) ∫_{-∞}^∞ f(x) e^{ikx} dx

and note that

    F{f''(x)} = -k^2 F(k),   F{x f(x)} = (1/√(2π)) ∫_{-∞}^∞ x f(x) e^{ikx} dx = -i (d/dk) F(k)

Taking Fourier transforms shows that

    -k^2 F(k) - i dF/dk = 0

which can be integrated to give

    F(k) = A exp(ik^3/3)

where A is an arbitrary constant. The inverse Fourier transform then gives a solution to the ODE in the form of Airy's integral

    f(x) = (A/√(2π)) ∫_{-∞}^∞ exp(ik^3/3 - ikx) dk
10-12
Airy's Integral: x → +∞

To study the asymptotic behaviour of Airy's integral as x → +∞ we make the substitution

    k = i x^{1/2} z

This gives

    f(x) = -iA(x/2π)^{1/2} ∫_{-i∞}^{i∞} exp[ x^{3/2}(z + z^3/3) ] dz = -iA(x/2π)^{1/2} I(p)

where p = x^{3/2},

    I(p) = ∫_Γ exp[ p(z + z^3/3) ] dz

and Γ denotes the path along the imaginary axis from -i∞ to +i∞. If we set

    h(z) = z + z^3/3

and note that h'(z) = 1 + z^2 = 0 when z = ±i, we see that the path passes through two saddle points at z = ±i.

In the neighbourhood of z = i we have

    h(z) = h(i) + (z - i) h'(i) + (z - i)^2 h''(i)/2 + ... = 2i/3 + r^2 exp(2iθ + iπ/2) + ...

where we changed variables to

    z = i + r e^{iθ}

and used h''(i) = 2i = 2e^{iπ/2}. The path of steepest descent Γ_i in this case occurs for exp(2iθ + iπ/2) = -1, that is, θ = π/4.
10-13
Similarly, in the neighbourhood of z = -i,

    h(z) = -2i/3 + r^2 exp(2iθ - iπ/2) + ...

where we used h''(-i) = -2i = 2e^{-iπ/2} and changed variables to z = -i + r e^{iθ}. In this case the path of steepest descent Γ_{-i} corresponds to θ = 3π/4.

[Figure: the z-plane showing the saddle points ±i, the level lines u = 0 and v = ±2/3, the directions in which u → ±∞, and the steepest descent paths Γ_i and Γ_{-i} into which the imaginary axis is deformed.]

As in the diagram, the path Γ can be deformed to the union of the two paths Γ_i and Γ_{-i}, resulting in the asymptotic approximation

    ∫_{-i∞}^{i∞} exp[ p(z + z^3/3) ] dz ≈ ∫_{-∞}^∞ exp[ p(2i/3 - r^2) ] e^{iπ/4} dr + ∫_{-∞}^∞ exp[ p(-2i/3 - r^2) ] e^{3iπ/4} dr
        = (π/p)^{1/2} { exp[2ip/3 + iπ/4] - exp[-2ip/3 - iπ/4] }
        = 2i √(π/p) sin(2p/3 + π/4),  p → ∞

Recalling that p = x^{3/2} we find that

    f(x) ∼ √2 A x^{-1/4} sin(2x^{3/2}/3 + π/4),  x → +∞
10-14
Airy's Integral: x → -∞

To study f(x) as x → -∞ we write

    f(x) = (A/√(2π)) ∫_{-∞}^∞ exp(ik^3/3 + ik|x|) dk

and make the substitution

    k = -i z √|x|

to obtain

    f(x) = -iA(|x|/2π)^{1/2} I(p)

where now p = |x|^{3/2},

    I(p) = ∫_{-i∞}^{i∞} exp[p h(z)] dz

and

    h(z) = z - z^3/3

[Figure: the z-plane with saddle points at ±1, the level lines v = 0 and u = ±2/3, and the steepest descent path Γ_{-1} through z = -1 into which the imaginary axis is deformed.]

This time h(z) has two saddle points at z = ±1, but now it is only possible to deform the contour to Γ_{-1} as shown in the diagram. In the neighbourhood of z = -1

    h(z) = h(-1) + (z + 1)^2 h''(-1)/2 + ...

with h(-1) = -2/3 and h''(-1) = 2. Writing z = -1 + r e^{iθ} we have

    h(z) = -2/3 + r^2 exp(2iθ) + ...
10-15
The path of steepest descent is now θ = π/2 (ie. exp(2iθ) = -1). So

    I(p) ≈ exp(-2p/3) ∫_{-∞}^∞ e^{-p r^2} e^{iπ/2} dr = i √(π/p) exp(-2p/3)

and thus

    f(x) ∼ 2^{-1/2} A |x|^{-1/4} exp(-2|x|^{3/2}/3),  x → -∞

Sometimes you can change to stationary phase ...

If for x > 0 we substitute

    k = x^{1/2} z

into Airy's integral we find

    f(x) = A(x/2π)^{1/2} ∫_{-∞}^∞ exp[ ip(z^3/3 - z) ] dz

where p = x^{3/2} as before. But now we can use the method of stationary phase. In this case

    h(z) = z^3/3 - z

has stationary points at z = ±1 where h(±1) = ∓2/3, h''(±1) = ±2. Applying the stationary phase result and adding the contributions from the two stationary points we find

    f(x) ≈ A(x/2π)^{1/2} (π/p)^{1/2} { exp[i(2p/3 - π/4)] + exp[-i(2p/3 - π/4)] }
         = 2^{1/2} A x^{-1/4} cos(2x^{3/2}/3 - π/4)   as x → ∞

in agreement with the previous result (noting that cos θ = sin(θ + π/2)).
10-16
And sometimes you must use steepest descents ...

If for x < 0 we substitute

    k = |x|^{1/2} z

we find

    f(x) = A(|x|/2π)^{1/2} ∫_{-∞}^∞ exp[ ip(z^3/3 + z) ] dz

where p = |x|^{3/2}. In this case the stationary points of

    h(z) = z^3/3 + z

at z = ±i are on the imaginary axis, so that the method of stationary phase is not applicable and the full machinery of the method of steepest descents must be called into play, resulting in the previous asymptotic form as x → -∞.
10-17
Plot of Airy Function

[Figure: the Airy function Ai(x), together with the leading asymptotic behaviours x^{-1/4} π^{-1/2} sin(2x^{3/2}/3 + π/4) on the oscillatory side and |x|^{-1/4} (2√π)^{-1} exp(-2|x|^{3/2}/3) on the decaying side, as given by steepest descents.]
10-18
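The plot above can be reproduced with SciPy's Airy function. A minimal sketch in the standard orientation (Ai oscillates as x → -∞ and decays as x → +∞), assuming SciPy and Matplotlib.

    import numpy as np
    from scipy.special import airy
    import matplotlib.pyplot as plt

    x = np.linspace(-12, 5, 600)
    Ai = airy(x)[0]

    xm = x[x < -1]      # oscillatory side
    osc = np.sin(2*np.abs(xm)**1.5/3 + np.pi/4)/(np.sqrt(np.pi)*np.abs(xm)**0.25)
    xp = x[x > 0.5]     # decaying side
    dec = np.exp(-2*xp**1.5/3)/(2*np.sqrt(np.pi)*xp**0.25)

    plt.plot(x, Ai, label='Ai(x)')
    plt.plot(xm, osc, '--', label='oscillatory asymptote')
    plt.plot(xp, dec, ':', label='decaying asymptote')
    plt.legend(); plt.show()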
Asymptotics from Integrals

Summary flowchart for approximating I(p) = ∫_a^b g(q) e^{p h(q)} dq as p → ∞, according to the nature of h(q):

h(q) pure real: Laplace's Method (only the highest maximum counts).
  a) h has its maximum inside [a, b]: expand h to quadratic; extend upper and lower limits to infinity.
  b) h has a stationary maximum at the boundary: expand h to quadratic; extend only the other limit to infinity.
  c) Non-stationary maximum at the boundary: expand h to linear; extend only the other limit to infinity.
  d) Maximum at ±infinity: check for a maximum of the whole integrand and change variable; repeat Laplace's method.
  Check g(x_0): if g(x_0) ≠ 0, replace g(x) by g(x_0). If g(x_0) = 0, Taylor/Frobenius expand g about x_0 and keep the first non-zero integral; if there is no such expansion, check for a maximum of the whole integrand, change variable and repeat Laplace's method.

h(q) pure imaginary: Stationary Phase (all stationary points count).

h(q) complex / contour integral: Steepest Descents (find the saddle, find the descent direction, then use Laplace's method).
10-19
Week 11: ODEs and Asymptotics I
31. Method of dominant balance
32. Self adjoint form
33. Asymptotic expansion of solutions to ODEs
Friedrich Wilhelm Bessel (1784-1846), George Biddell Airy (1801-1892)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
11
Method of Dominant Balance

We have obtained the asymptotic behaviour of functions satisfying linear ODEs, such as Airy's function and Bessel functions, by using integral representations. We now obtain the asymptotic behaviour directly from the ODE, without solving it, using the method of dominant balance. The method assumes asymptotic behaviour of the form

    y(x) ∼ e^{S(x)}

for some S(x). The method proceeds in steps:

1. Drop all terms that appear small (ie. little oh) and replace the exact equation by an asymptotic equation.
2. Replace the asymptotic sign with an equal sign and solve the differential equation.
3. Check the solution is consistent with the assumptions made in step 1 (ie. all omitted terms are little oh). If consistent then the solution gives the controlling factor (ie. the most rapidly changing part of the dominant term).

Note, this argument is circular! But it underlies all the techniques we will study.
11-1
Example of Dominant Balance

Example: Consider Bessel's equation

    d^2 y/dx^2 + (1/x) dy/dx + (1 - ν^2/x^2) y = 0,  x → ∞

Solution: 1. First change variables to y = exp(S(x))

    y' = y S',   y'' = y S'' + y S'^2

The ODE becomes a five term non-linear differential equation for S(x)

    S'' + S'^2 + (1/x) S' + 1 - ν^2/x^2 = 0

We guess the terms S'', S'/x and ν^2/x^2 are smaller than the remaining two (which should be of the same order). This gives the asymptotic equation

    S'^2 + 1 ∼ 0

2. We replace this with the exact equation

    S_0'^2 = -1   ⟹   S_0' = ±i   ⟹   S_0(x) = ±ix

If we have dropped the correct terms, this will give us the controlling factor S_0(x).

3. Note, the two retained terms are of the same order, ie. S_0'^2 ∼ -1. Having found the controlling factor we must now go back and verify that the three dropped terms are indeed small compared to the retained terms, that is, o(S_0'^2) or equivalently o(1)

    S'' = o(1),   (1/x) S' = o(1),   ν^2/x^2 = o(1)
11-2
Consistency

Assume S(x) = S_0(x), so that S'' = S_0'' = 0, S'/x = S_0'/x = ±i/x and

    lim_{x→∞} |S''|/|1| = 0
    lim_{x→∞} |S'/x|/|1| = lim_{x→∞} 1/|x| = 0
    lim_{x→∞} |ν^2/x^2|/|1| = lim_{x→∞} ν^2/|x^2| = 0

Since the dropped terms are smaller than the retained terms, our assumptions are consistent.

The controlling factor is only part of the dominant asymptotic solution. To find the additional terms we repeat the process for each of the S_0 solutions. For S_0 = ix, let S(x) = S_0(x) + C(x) = ix + C(x). Substituting this into the original equation for S(x) gives

    0 = C'' + (i + C')^2 + (1/x)(i + C') + 1 - ν^2/x^2
      = C'' + C'^2 + (2i + 1/x) C' - ν^2/x^2 + i/x

Once again we guess which terms are small as x → ∞. Retaining only 2iC' and i/x leaves the asymptotic equation

    2iC' + i/x ∼ 0

which becomes an exact equation for the dominant part S_1 of C

    2i S_1' + i/x = 0   ⟹   S_1(x) = -(1/2) log x + c

Once again we must check the omitted terms are little oh of the retained terms. Clearly |S_1''/S_1'| → 0, |(S_1'/x)/S_1'| → 0, |S_1'^2/S_1'| → 0 and |(ν^2/x^2)/S_1'| → 0. Thus we are consistent.
11-3
Iterative Asymptotic Solution

Stopping at this stage gives

    y(x) ∼ e^{ix - (1/2) log x + c} = (A/√x) e^{ix}

where the arbitrary constant of integration A = e^c has been inserted. Using the second solution S_0 = -ix gives a similar result

    y(x) ∼ e^{-ix - (1/2) log x + c} = (A/√x) e^{-ix}

As these two solutions are of the same order, any linear combination is also of the same order so, to obtain a real solution, we can write

    y(x) ∼ (A_1/√x) e^{ix} + (A_2/√x) e^{-ix} = (B/√x) cos(x + φ),   A_1 = (B/2) e^{iφ},  A_2 = (B/2) e^{-iφ},  B, φ ∈ R

This result should be compared with the result from the method of stationary phase

    √(2/(πx)) cos[ x - (n + 1/2) π/2 ]

Note, dominant balance cannot fix the arbitrary constants. This is a general feature of methods which start from ODEs.

Terminating the method at this stage gives the dominant part of an asymptotic solution to Bessel's equation. To be sure of this you would have to repeat the process again and check that the next factor is subdominant. If you want the whole of the dominant asymptotic form you have to keep going until the factor you calculate is o(1), because then its only contribution to y = exp(S_0 + S_1 + S_2 + ... + S_n) is a multiplicative constant.
11-4
Self Adjoint Form

The method of dominant balance can be applied to almost any ODE. However for ODEs in self-adjoint form there are faster methods.

Consider the problem of finding asymptotic solutions of the general second order ODE

    d^2 y/dx^2 + p(x) dy/dx + q(x) y = 0,  x → ∞

The term involving the first derivative can be eliminated by the substitution

    y(x) = f(x) exp[ -(1/2) ∫ p(x) dx ]

reducing the equation to the self-adjoint form

    d^2 f/dx^2 + h(x) f(x) = 0

where, after some algebra,

    h(x) = q(x) - (1/2) dp/dx - (1/4) p^2(x)

We previously studied the asymptotic solution of Airy's ODE which is in self-adjoint form

    d^2 f/dx^2 + x f(x) = 0,  x → ∞
11-5
Example of Self Adjoint Form

Example: Consider again the asymptotic solution of Bessel's equation

    d^2 y/dx^2 + (1/x) dy/dx + (1 - ν^2/x^2) y = 0,  x → ∞

Solution: In this case the substitution

    y(x) = f(x) exp[ -(1/2) ∫ (1/x) dx ] = f(x) exp[ -(1/2) log x ] = x^{-1/2} f(x)

reduces the ODE to the self adjoint form

    d^2 f/dx^2 + [ 1 - (ν^2 - 1/4)/x^2 ] f = 0

with

    h(x) = (1 - ν^2/x^2) - (1/2) d/dx(1/x) - (1/4)(1/x)^2 = 1 - (ν^2 - 1/4)/x^2

If

    h(x) = a_0 + a_1/x + a_2/x^2 + ...

the leading term in the asymptotic solution is obtained by replacing h(x) with a_0. The equation is then approximated by

    d^2 f/dx^2 + a_0 f = 0

which has solution

    f(x) = { A cos(√a_0 x) + B sin(√a_0 x),          a_0 > 0
           { A exp(√|a_0| x) + B exp(-√|a_0| x),     a_0 < 0
11-6
In the case of Bessel's equation, a_0 = 1, so the solution J_ν(x) = x^{-1/2} f(x) of the ODE has the asymptotic form

    J_ν(x) ∼ x^{-1/2} (A cos x + B sin x) = C x^{-1/2} cos(x - φ),  x → ∞

For ν = n, this is to be compared with the result obtained by the method of stationary phase

    J_n(x) ∼ (2/πx)^{1/2} cos[ x - (n + 1/2) π/2 ]

Unfortunately, the above simple-minded approach does not fix the phase

    φ = (n + 1/2) π/2

We next show how to systematically extend this approach to obtain asymptotic expansions of solutions of the self-adjoint form when h(x) has the assumed form.
11-7
Asymptotic Expansions of Solutions

If h(x) is of the assumed form, we try a solution of the form

    f(x) = exp(λx) x^p (1 + u_1/x + u_2/x^2 + ...)

By direct substitution, and collecting terms with like powers of x, we find

    0 = d^2 f/dx^2 + (a_0 + a_1/x + ...) f(x)
      = exp(λx) x^p { (λ^2 + a_0) + [ (λ^2 + a_0) u_1 + (2λp + a_1) ] x^{-1} + O(x^{-2}) }

The equation is then satisfied to order x^{-2} when

    λ^2 + a_0 = 0,   2λp + a_1 = 0

The first equation is the indicial equation. If a_0 > 0, the solutions are

    λ = ±i √a_0

corresponding to the leading order solutions. The corresponding values of p are

    p = ±i a_1/(2√a_0)

resulting in the two fundamental asymptotic solutions

    f_±(x) = exp(±i √a_0 x) x^{±i a_1/(2√a_0)} [ 1 + O(1/x) ],   a_0 > 0
11-8
Degenerate Case

If a_0 = 0 we change variables to y = x^{1/2} so that, in terms of y,

    0 = d^2 f/dx^2 + (a_1/x + a_2/x^2 + ...) f = (1/4y^2) d^2 f/dy^2 - (1/4y^3) df/dy + (a_1/y^2 + a_2/y^4 + ...) f

That is

    d^2 f/dy^2 - (1/y) df/dy + 4y^2 (a_1/y^2 + a_2/y^4 + ...) f = 0

To bring this to self adjoint form we substitute

    f(y) = F(y) exp[ (1/2) ∫ dy/y ] = √y F(y)

and find

    d^2 F/dy^2 + H(y) F(y) = 0,   H(y) = 4a_1 + (4a_2 - 3/4)/y^2 + ...

Assuming a_1 > 0, we thus have to leading order

    F(y) ∼ exp(±2i √a_1 y)

and hence, recalling that y = x^{1/2},

    f(x) ∼ x^{1/4} exp[±2i √a_1 √x] [ 1 + O(x^{-1/2}) ],   a_0 = 0, a_1 > 0

Finally, if a_0 = a_1 = 0, we adapt Frobenius theory to look for a solution of the form

    f(x) = x^p Σ_{n=0}^∞ a_n x^{-n}
11-9
Week 12: ODEs and Asymptotics II
34. WKB method
35. Matching and validity of WKB solutions
36. Applications
Niels Henrik David Bohr (1885-1962), Arnold Johannes Wilhelm Sommerfeld (1868-1951)
Photographs © MacTutor Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk)
12
WKB Method

When h(x) in the self-adjoint form has positive powers of x, or more generally when h(x) is slowly varying, we can look for a solution in the form

    f(x) = e^{S(x)} = e^{iφ(x)}

where φ(x) is also assumed to be slowly varying. This is the same assumption as in dominant balance with S = iφ. This method is generally attributed to Wentzel, Kramers and Brillouin (WKB) in the 1920s who were studying quantum mechanical problems. In this context, the Schrödinger equation for a quantum particle of mass m, say an electron, moving in one dimension in the presence of a potential V(x) has the self-adjoint form

    (ℏ^2/2m) d^2 y/dx^2 + [ E - V(x) ] y = 0

where y is the wavefunction, E represents the energy of the system and ℏ is Planck's constant divided by 2π. In fact the same method was developed by Liouville (1837) and later refined by Rayleigh (1912) in a classical setting.

To highlight the notion of slowly varying we introduce a small parameter ε > 0 by assuming that h(x) > 0 (in some interval) and is of the form

    h(x) = [k(x)/ε]^2,  k(x) > 0

so that

    d^2 f/dx^2 + [k(x)/ε]^2 f(x) = 0
12-1
We now substitute

    f(x) = exp[iφ(x)/ε]

into the equation to obtain the nonlinear ODE

    iε d^2 φ/dx^2 - (dφ/dx)^2 + k(x)^2 = 0

Roughly, slowly varying means φ'' is small (compared with (φ')^2), so that, to obtain an approximate solution, we formally expand φ(x) in powers of ε, substitute in and equate coefficients of like powers of ε to zero. Thus if we substitute

    φ(x) = φ_0(x) + ε φ_1(x) + ε^2 φ_2(x) + ...

into the nonlinear ODE and collect powers of ε we find

    [ k(x)^2 - (φ_0')^2 ] + ε [ iφ_0'' - 2φ_0' φ_1' ] + O(ε^2) = 0

We deduce that

    k(x)^2 - (φ_0')^2 = 0,   iφ_0'' - 2φ_0' φ_1' = 0

and so forth. These equations can be solved immediately

    φ_0(x) = ± ∫ k(x) dx,   dφ_1/dx = (i/2) φ_0''/φ_0' = (i/2) d/dx [ log|φ_0'| ]

giving

    φ_1(x) = (i/2) log k(x)
12-2
To order ε we conclude that

    f(x) ∼ exp[ ±(i/ε) ∫ k(x) dx - log([k(x)]^{1/2}) ]

or

    f(x) ∼ A [h(x)]^{-1/4} exp[ ±i ∫ [h(x)]^{1/2} dx ],   h(x) > 0

This asymptotic form applies in a region where h(x) > 0 and is slowly varying. When h(x) < 0 we set h(x) = -[k(x)/ε]^2 and proceed exactly as above to obtain

    f(x) ∼ A |h(x)|^{-1/4} exp[ ± ∫ |h(x)|^{1/2} dx ],   h(x) < 0
12-3
Matching of WKB Solutions

In many examples it turns out that h(x) > 0 as x → ∞ and h(x) < 0 as x → -∞. Consider again Airy's ODE

    d^2 f/dx^2 + x f(x) = 0

where h(x) = x is positive for x > 0 and negative for x < 0. As x → -∞ we might then expect (assuming we require the solution which vanishes at infinity) that

    f(x) ∼ A |x|^{-1/4} exp[ -∫ |x|^{1/2} dx ] = A |x|^{-1/4} exp[ -(2/3)|x|^{3/2} ],  x → -∞

This agrees with the method of steepest descents. Similarly, for x > 0, we expect

    f(x) ∼ c x^{-1/4} sin(2x^{3/2}/3 + φ),  x → ∞

which also agrees with steepest descents. Steepest descents fixed the phase φ = π/4, whereas φ is not determined by WKB. In general, however, when h(x) has a turning point where h(x_0) = 0 and is negative (positive) for x < x_0 (x > x_0), careful matching arguments show that φ is always π/4 in the oscillatory (x > x_0) region. In other words the WKB solution

    A |h(x)|^{-1/4} exp[ -∫_x^{x_0} |h(x)|^{1/2} dx ],   x < x_0

matches, or connects, to the solution

    A [h(x)]^{-1/4} sin[ ∫_{x_0}^x [h(x)]^{1/2} dx + π/4 ],   x > x_0
12-4
Bohr-Sommerfeld Quantization

This matching result has an interesting consequence in a quantum mechanical setting with

    h(x) = (2m/ℏ^2) [ E - V(x) ]

and V(x) of the form

[Figure: a potential well V(x) intersected by the energy level E at the two turning points a and b.]

In this case there are two points a and b such that h(x) is negative for x < a and x > b and positive for a < x < b. Matching with the exponentially decreasing solution for x < a (respectively x > b) shows that the solution is approximated by

    y(x) ≈ A [E - V(x)]^{-1/4} sin{ ∫_a^x [ (2m/ℏ^2)(E - V(x)) ]^{1/2} dx + π/4 },   a < x < b
    y(x) ≈ A' [E - V(x)]^{-1/4} sin{ ∫_x^b [ (2m/ℏ^2)(E - V(x)) ]^{1/2} dx + π/4 },  a < x < b

These two expressions must be identical (up to constants). So, since sin(x + nπ) = ±sin x, it follows that

    (1/ℏ) ∫_a^b √( 2m(E - V(x)) ) dx = ( n + 1/2 ) π,   n = 0, 1, 2, 3, ...

which is the Bohr-Sommerfeld quantization condition in early quantum theory.
12-5
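The quantization condition is easy to test numerically for a concrete potential. A minimal sketch for the harmonic oscillator V(x) = (1/2) m w^2 x^2 in units m = w = ℏ = 1 (where E_n = n + 1/2 exactly), assuming SciPy.

    import numpy as np
    from scipy.integrate import quad

    def action_integral(E):
        a = np.sqrt(2*E)                                   # turning points at +/- sqrt(2E)
        val, _ = quad(lambda x: np.sqrt(2*(E - 0.5*x**2)), -a, a)
        return val

    for n in range(4):
        E = n + 0.5
        print(n, action_integral(E)/np.pi, n + 0.5)        # the two columns should match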
Errors in WKB Approximation

To estimate errors in the WKB approximation, observe that for h(x) > 0 the WKB functions

    W_±(x) = [h(x)]^{-1/4} exp[ ±i ∫ [h(x)]^{1/2} dx ]

satisfy the self-adjoint equation

    d^2 W_±/dx^2 + { h(x) + g(x) } W_±(x) = 0

where

    g(x) = (1/4)(h''/h) - (5/16)(h'/h)^2

which is to be compared with the original equation (with g(x) = 0). It follows that

    y(x) ≈ W_±(x)

in regions where

    | (1/4)(h''/h) - (5/16)(h'/h)^2 | ≪ |h(x)|

For example in the case of Airy's equation,

    W_±(x) = x^{-1/4} exp(±2ix^{3/2}/3)

satisfy

    W_±'' + [ x - 5/(16x^2) ] W_±(x) = 0

and

    |g(x)| = 5/(16x^2) ≪ |x|,  x large
12-6
Quantum Harmonic Oscillator

Consider a harmonic potential V(x) = (1/2) m ω^2 x^2 and set E = (1/2) Ē. Then in dimensionless variables (m = ω = ℏ = 1), Schrödinger's equation becomes

    d^2 ψ/dx^2 + (Ē - x^2) ψ = 0,   |ψ(x)|^2 = {Probability of finding particle at x}

For x → ±∞, the (decaying) asymptotic WKB functions with h(x) = Ē - x^2 ≈ -x^2 are

    ψ ∼ (A_±/√|x|) e^{-x^2/2},  x → ±∞

Now E - V = (1/2)(Ē - x^2) = 0 at x = ±√Ē. So, with x = √Ē u, the Bohr-Sommerfeld quantization becomes

    (2/π) ∫_{-√Ē}^{√Ē} √(Ē - x^2) dx = (2Ē/π) ∫_{-1}^1 √(1 - u^2) du = Ē = 2n + 1,   n = 0, 1, 2, ...

Substituting ψ = y(x) e^{-x^2/2} into the ODE we find

    y'' - 2x y' + (Ē - 1) y = 0

We require solutions satisfying y = o(e^{x^2/2}) as x → ±∞. Such solutions only occur if Ē - 1 = 2n and then the solutions are given by the Hermite polynomials of order n

    y(x) = H_n(x),  n = 0, 1, 2, ...

The first few Hermite polynomials are H_0(x) = 1, H_1(x) = 2x, H_2(x) = 4x^2 - 2, H_3(x) = 8x^3 - 12x, H_4(x) = 16x^4 - 48x^2 + 12.
12-7
Harmonic Oscillator Wavefunctions

[Figure: |ψ(x)|^2 for Ē = 1, 3, 5, 7 inside the quadratic potential well; |ψ(x)|^2 for Ē = 7 with the x-axis in units of √Ē; and the squares |W(x)|^2 of the asymptotic WKB functions for Ē = 7.]

The oscillating WKB function is

    W(x) = A |Ē - x^2|^{-1/4} sin[ ∫_{-√Ē}^x √(Ē - u^2) du + π/4 ]
12-8
Higher Order Laplace Method

Example: Find a higher order asymptotic approximation to the gamma function

    Γ(p) = ∫_0^∞ x^{p-1} e^{-x} dx,  p → ∞

Solution: Changing variables to x = py brings the integral to Laplace form

    Γ(p) = p^p ∫_0^∞ (1/y) exp[ p(log y - y) ] dy

The exponent h(y) = log y - y has a single maximum at y = 1. Taylor expanding gives

    h(y) = -1 - (1/2)(y-1)^2 + (1/3)(y-1)^3 - (1/4)(y-1)^4 + ...
    g(y) = 1/y = 1 - (y-1) + (y-1)^2 - ...
    exp[ (p/3)(y-1)^3 - (p/4)(y-1)^4 + ... ] = 1 + (p/3)(y-1)^3 - (p/4)(y-1)^4 + (p^2/18)(y-1)^6 + ...

Dropping odd terms and setting u = √(p/2)(y-1) gives

    ∫_0^∞ (1/y) e^{p h(y)} dy ≈ e^{-p} ∫_0^∞ [ 1 + (y-1)^2 - (p/3)(y-1)^4 - (p/4)(y-1)^4 + (p^2/18)(y-1)^6 + ... ] e^{-(1/2)p(y-1)^2} dy
        ≈ √(2/p) e^{-p} ∫_{-∞}^∞ [ 1 + (2/p)u^2 - (7/3p)u^4 + (4/9p)u^6 + ... ] e^{-u^2} du
        = √(2π/p) e^{-p} [ 1 + ( 1 - (7/3)(3/4) + (4/9)(15/8) ) (1/p) + ... ]
        = √(2π/p) e^{-p} [ 1 + 1/(12p) + ... ],  p → ∞

So the improved Stirling formula is

    (p-1)! = Γ(p) ∼ √(2π/p) (p/e)^p [ 1 + 1/(12p) + ... ],  p → ∞
12-9
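The 1/(12p) correction is easy to see numerically. A minimal sketch assuming SciPy; the p values are arbitrary.

    import numpy as np
    from scipy.special import gamma

    for p in (5.0, 10.0, 20.0):
        basic = np.sqrt(2*np.pi/p)*(p/np.e)**p
        improved = basic*(1 + 1/(12*p))
        print(p, gamma(p), basic, improved)     # 'improved' should be markedly closer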