
Eigenvalues in Applications

Abstract
We look at the role of eigenvalues and eigenvectors in various applications.
Specifically, we consider differential equations, Markov chains, population
growth, and consumption.

1 Differential equations
We consider linear differential equations of the form
$\frac{du}{dt} = Au.$  (1)
The complex matrix $A$ is $n \times n$ and

$u = u(t) = \begin{pmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_n(t) \end{pmatrix}.$  (2)

The equations are linear, since if $v$ and $w$ are two solutions, then so is
$u = v + w$, as shown by

$\frac{du}{dt} = \frac{d}{dt}(v + w) = \frac{dv}{dt} + \frac{dw}{dt} = Av + Aw = A(v + w) = Au.$

1.1 Scalar case (n = 1)


In the scalar case, i.e., $n = 1$, the differential equation (1) reduces to the
scalar differential equation

$\frac{du}{dt} = \lambda u,$  (3)

where $\lambda \in \mathbb{C}$. Its general solution is $u = e^{\lambda t} c$, where $c$ is an arbitrary constant.

By specifying an initial condition $u(0) = u_0$ we can determine $c$ from

$u_0 = u(0) = e^{\lambda \cdot 0} c = c.$

Thus, the general solution given an initial condition is

$u(t) = e^{\lambda t} u_0.$  (4)

The real part of $\lambda$ determines growth or decay according to

$\mathrm{Re}(\lambda) < 0$: exponential decay,
$\mathrm{Re}(\lambda) > 0$: exponential growth,
$\mathrm{Re}(\lambda) = 0$: neither growth nor decay.

A nonzero imaginary part of $\lambda$ adds oscillation in the form of a sine wave.
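To illustrate, here is a minimal Python sketch that evaluates the scalar solution (4) for a few eigenvalues; the specific values of $\lambda$ and $u_0$ are our own illustrative choices, not taken from the text.

    import numpy as np

    # Evaluate u(t) = exp(lambda * t) * u0 for illustrative eigenvalues:
    # one decaying, one growing, and one purely oscillating.
    u0 = 1.0
    t = np.linspace(0.0, 5.0, 6)
    for lam in (-1.0, 0.5, 1j):
        u = np.exp(lam * t) * u0
        print(lam, np.round(u, 3))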

1.2 General case (n > 1)


In the general case, i.e., $n > 1$, we can solve (1) similarly to the scalar case
if the matrix $A$ is diagonalizable. Using the spectral decomposition $A = S \Lambda S^{-1}$
and multiplying both sides of (1) from the left by $S^{-1}$, we get

$\frac{d}{dt}(S^{-1} u) = \Lambda (S^{-1} u).$  (5)

After changing variables from $u$ to $y = S^{-1} u$ we have

$\frac{dy}{dt} = \Lambda y,$  (6)
which is nothing more than the set

$\frac{dy_1}{dt} = \lambda_1 y_1, \quad \frac{dy_2}{dt} = \lambda_2 y_2, \quad \ldots, \quad \frac{dy_n}{dt} = \lambda_n y_n,$  (7)

of $n$ decoupled scalar differential equations. The solutions to (7) are
$y_i(t) = e^{\lambda_i t} c_i$ for $i = 1, 2, \ldots, n$. The parameters $c_i$ are arbitrary
constants. Using linear algebra notation, we have the solutions

$y(t) = \mathrm{diag}(e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t})\, c.$  (8)

The constants $c$ are determined by the $n$ initial conditions $u(0) = u_0$ to the
original differential equation (1) by

$u(0) = S y(0) = S c = u_0 \implies c = S^{-1} u_0.$  (9)

We put the pieces back together and find the general solution

$u(t) = S y(t) = S\, \mathrm{diag}(e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t})\, S^{-1} u_0$  (10)

to the differential equation (1) with initial condition $u_0$.
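As a concrete illustration, the following Python sketch implements (10) for a small example; the matrix $A$, the initial condition, and the helper name u are our own choices, not part of the text.

    import numpy as np

    # Solve du/dt = Au via the eigendecomposition A = S diag(lam) S^{-1}.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])       # eigenvalues -1 and -2
    u0 = np.array([1.0, 0.0])

    lam, S = np.linalg.eig(A)
    c = np.linalg.solve(S, u0)         # c = S^{-1} u0, see (9)

    def u(t):
        # u(t) = S diag(exp(lam * t)) c, see (10)
        return S @ (np.exp(lam * t) * c)

    print(u(1.0))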

1.3 Matrix exponential


We would like to express the general solution of (1) as

$u(t) = e^{At} u_0.$  (11)

But how do we generalize the exponential function to matrices?


Recall that the scalar exponential function $e^x$ is defined by the infinite series

$e^x = 1 + x + \tfrac{1}{2} x^2 + \tfrac{1}{6} x^3 + \cdots.$  (12)

Let us define the matrix exponential $e^A$ by simply replacing $x$ with $A$ in (12),
as in

$e^A = I + A + \tfrac{1}{2} A^2 + \tfrac{1}{6} A^3 + \cdots.$  (13)
The series (13) is defined for square matrices and it always converges.
But is

$u(t) = e^{At} u_0$  (14)

actually a solution of (1) if we use the definition (13)? The answer is yes, since
the derivative of $e^{At} u_0$ with respect to $t$ is

$\frac{d}{dt} e^{At} u_0 = (A + A^2 t + \tfrac{1}{2} A^3 t^2 + \cdots) u_0 = A e^{At} u_0.$
If $A$ has the spectral decomposition $A = S \Lambda S^{-1}$, then from

$e^{At} = S e^{\Lambda t} S^{-1}$

we see immediate connections with the previous section. Note that the matrix
exponential solves the differential equation even when $A$ is not diagonalizable.
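As a sanity check, this sketch compares a truncated version of the series (13) with the spectral form $S e^{\Lambda t} S^{-1}$ and SciPy's expm; the example matrix and the truncation length are illustrative assumptions.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    t = 1.0

    # Truncated power series (13) applied to At.
    E_series = np.zeros_like(A)
    term = np.eye(2)
    for k in range(1, 25):
        E_series += term
        term = term @ (A * t) / k

    # Spectral form, valid since this A is diagonalizable.
    lam, S = np.linalg.eig(A)
    E_spectral = S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S)

    print(np.allclose(E_series, E_spectral), np.allclose(E_series, expm(A * t)))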

2 Markov chains
A Markov chain is a random process that can be in one of a finite number of
states at any given time and the next state depends only on the current state.
We are given $n^2$ transition probabilities $p_{ij} \in [0, 1]$. The number $p_{ij}$ gives the
probability that the next state is state $i$ if the current state is state $j$. By putting
the transition probabilities in a square matrix $A$ such that $a_{ij} = p_{ij}$ we obtain
a Markov matrix.
Definition 1 (Markov matrix). A square matrix A is a Markov matrix if it
satisfies the following two properties. First, every entry in A is nonnegative.
Second, every column of A adds to 1.
Two facts about Markov matrices follow directly from the definition. First,
multiplying a Markov matrix $A$ with a nonnegative vector $u_0$ produces a
nonnegative vector $u_1 = A u_0$. Second, if the components of $u_0$ add to 1, then so
do the components of $u_1 = A u_0$. The first fact is trivial, and the second fact
can be shown as follows. Let $e = (1 \; 1 \; \cdots \; 1)^T$ be a vector of all 1s. Since every
column of $A$ adds to 1, we have $e^T A = e^T$, and therefore

$e^T u_1 = e^T (A u_0) = (e^T A) u_0 = e^T u_0 = 1.$
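A quick numerical check of both facts, on an example Markov matrix of our own choosing:

    import numpy as np

    # Columns of A are nonnegative and sum to 1 (Definition 1).
    A = np.array([[0.8, 0.3, 0.2],
                  [0.1, 0.2, 0.6],
                  [0.1, 0.5, 0.2]])
    u0 = np.array([0.5, 0.3, 0.2])      # nonnegative, sums to 1

    u1 = A @ u0
    print(np.all(u1 >= 0), np.isclose(u1.sum(), 1.0))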

After $k$ steps in a Markov chain, the initial probability distribution $u_0$
changes to $u_k = A^k u_0$. For many Markov matrices, the limit of $u_k$ as $k \to \infty$
exists and is unique. We say that the Markov chain approaches the steady state
$u_\infty$. If a steady state exists, then $A u_\infty = u_\infty$, so 1 is an eigenvalue of $A$ and
$u_\infty$ is a corresponding eigenvector.
A famous theorem due to Perron and Frobenius shows that a Markov matrix
with strictly positive entries has 1 as its largest eigenvalue. That eigenvalue is
also simple (multiplicity equal to 1) and the corresponding eigenvector can be
scaled so that it has positive entries.
Let us show that 1 is indeed an eigenvalue of any Markov matrix. Each column
of $A - I$ sums to $1 - 1 = 0$, so $e^T (A - I) = 0$, which means that $A - I$ is singular.
Hence, $\lambda = 1$ is an eigenvalue of $A$.
Suppose that $A$ is diagonalizable, i.e., $A = S \Lambda S^{-1}$. We have

$u_k = A^k u_0 = S \Lambda^k S^{-1} u_0.$  (15)

Thus, if $\lambda_1 = 1$ and $|\lambda_i| < 1$ for all $i \neq 1$, then $u_k$ approaches a steady state in
the direction of the dominant eigenvector $s_1$.
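The following sketch iterates $u_k = A^k u_0$ on the example matrix from above and checks that the limit matches the eigenvector for eigenvalue 1, scaled so its entries sum to 1.

    import numpy as np

    A = np.array([[0.8, 0.3, 0.2],
                  [0.1, 0.2, 0.6],
                  [0.1, 0.5, 0.2]])
    u = np.array([1.0, 0.0, 0.0])       # initial distribution u0
    for _ in range(100):
        u = A @ u                       # u_k = A^k u0

    lam, S = np.linalg.eig(A)
    s1 = S[:, np.argmin(np.abs(lam - 1.0))].real
    s1 = s1 / s1.sum()                  # scale the entries to sum to 1
    print(np.allclose(u, s1))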

3 Population
A Leslie model describes the long-term age distribution and growth rate of a
population. It is popular in, e.g., ecology, and it works as follows. Partition
the population into $n$ disjoint age groups. The populations in the age groups
are represented as a vector $p$ with $n$ components. After one time step, the
population in each age group is given by $Ap$, where $A$ is an $n \times n$ Leslie matrix.

The long-term growth rate and age distribution come from the largest eigenvalue
and its corresponding eigenvector.
The Leslie matrix is constructed as follows. The youngest age group gains
members only through reproductive activity. Formally, we write

$p_1^{(k+1)} = \sum_{i=1}^{n} f_i p_i^{(k)},$  (16)

where the coefficients $f_i$ reflect the rate of reproduction in age group $i$. If $f_i = 0$,
then no reproduction occurs in that age group, and if $f_i = 2$, then each individual
in the age group produces two (unique) offspring.
In this model, all the members of the oldest age group die off in one time
step. The members of the other age groups have a chance of surviving and
thereby transitioning to the next age group. Formally, we write this as

$p_{i+1}^{(k+1)} = s_i p_i^{(k)},$  (17)

where the coefficients $s_i$ reflect the chance of surviving age group $i$ and
transitioning to age group $i + 1$.
For example, consider $n = 4$ and combine the formulas (16) and (17) to
obtain, in matrix form, the Leslie model

$p^{(k+1)} = A p^{(k)} = \begin{pmatrix} f_1 & f_2 & f_3 & f_4 \\ s_1 & 0 & 0 & 0 \\ 0 & s_2 & 0 & 0 \\ 0 & 0 & s_3 & 0 \end{pmatrix} p^{(k)}.$
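As an illustration, the sketch below iterates this $n = 4$ model and compares the resulting growth rate and age distribution with the dominant eigenvalue and eigenvector; the fertility and survival rates are made-up example values.

    import numpy as np

    f = [0.0, 1.5, 1.0, 0.2]            # fertility rates (illustrative)
    s = [0.8, 0.7, 0.4]                 # survival rates (illustrative)
    A = np.array([f,
                  [s[0], 0.0, 0.0, 0.0],
                  [0.0, s[1], 0.0, 0.0],
                  [0.0, 0.0, s[2], 0.0]])

    p = np.array([100.0, 0.0, 0.0, 0.0])
    for _ in range(200):
        p = A @ p                       # one time step

    lam, S = np.linalg.eig(A)
    k = np.argmax(np.abs(lam))
    print(lam[k].real)                  # long-term growth rate
    print(np.round(p / p.sum(), 3))     # simulated age distribution
    s1 = S[:, k].real
    print(np.round(s1 / s1.sum(), 3))   # dominant eigenvector, normalized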

4 Consumption
Let $A$ be a consumption matrix, $p$ the production levels, and $y$ the demand.
The production levels are given by

$p = (I - A)^{-1} y.$  (18)

The question is: when does $(I - A)^{-1}$ exist, and when is it a nonnegative
matrix? Intuitively, if $A$ is small, then $(I - A)^{-1}$ exists and the economy can
meet any demand, but if $A$ is large, then production consumes too much
of the products and, as a consequence, the economy cannot meet the demand.
What counts as small or large depends entirely on the eigenvalues of $A$.
We show that when the series $B = I + A + A^2 + \cdots$ converges, then $B(I - A) =
I$ and thus $B = (I - A)^{-1}$. Suppose that $A$ is diagonalizable and write $B$ as

$B = S (I + \Lambda + \Lambda^2 + \cdots) S^{-1}.$  (19)

The infinite series within the parentheses is nothing but $n$ independent scalar
geometric series. The $n$ scalar series converge if and only if $|\lambda_i| < 1$ for all $i$.
Since the consumption matrix $A$ is nonnegative, every power $A^k$ is nonnegative,
and $B$ is nonnegative as a sum of nonnegative matrices. The inverse of $(I - A)$
therefore exists and is nonnegative when all the eigenvalues satisfy $|\lambda_i| < 1$.
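A minimal numerical sketch of this argument, with an illustrative consumption matrix whose eigenvalues are 0.5 and -0.2:

    import numpy as np

    A = np.array([[0.2, 0.3],
                  [0.4, 0.1]])          # nonnegative, spectral radius 0.5
    assert np.max(np.abs(np.linalg.eigvals(A))) < 1

    B = np.zeros_like(A)
    term = np.eye(2)
    for _ in range(200):                # truncated series I + A + A^2 + ...
        B += term
        term = term @ A

    y = np.array([10.0, 20.0])          # demand
    p = B @ y                           # production levels, as in (18)
    print(np.allclose(p, np.linalg.solve(np.eye(2) - A, y)))
    print(np.all(B >= 0))               # B is nonnegative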
