Lecture 15
More about Eigenvalues and Eigenvectors
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 1 & -1 & 3 \end{pmatrix}$$
The characteristic equation is
$$0 = \begin{vmatrix} 1-\lambda & 1 & 1 \\ 0 & 2-\lambda & 1 \\ 1 & -1 & 3-\lambda \end{vmatrix}$$
Expanding down the first column,
$$0 = (1-\lambda)\begin{vmatrix} 2-\lambda & 1 \\ -1 & 3-\lambda \end{vmatrix} + \begin{vmatrix} 1 & 1 \\ 2-\lambda & 1 \end{vmatrix} = (1-\lambda)(2-\lambda)(3-\lambda)$$
so the eigenvalues are $\lambda = 1, 2, 3$.
For $\lambda = 1$,
$$\begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & 1 \\ 1 & -1 & 2 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
This implies
$$a_2 + a_3 = 0, \qquad a_1 - a_2 + 2a_3 = 0$$
so $a_2 = -a_3$ and $a_1 = -3a_3$. Taking $a_3 = 1$,
$$v_1 = \begin{pmatrix} -3 \\ -1 \\ 1 \end{pmatrix}$$
For $\lambda = 2$,
$$\begin{pmatrix} -1 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & -1 & 1 \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
This implies
$$b_3 = 0, \qquad b_1 - b_2 = 0$$
so
$$v_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$$
For $\lambda = 3$,
$$\begin{pmatrix} -2 & 1 & 1 \\ 0 & -1 & 1 \\ 1 & -1 & 0 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
This implies
$$c_2 = c_3, \qquad c_1 = c_2$$
so
$$v_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$
Setting
$$P = \begin{pmatrix} -3 & 1 & 1 \\ -1 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}$$
then we can diagonalize the original matrix as
$$P^{-1}AP = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$
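This is easy to check numerically. The following sketch (pure Python, matrices as nested lists; the helper name is our own) verifies the equivalent statement $AP = PD$, which avoids computing $P^{-1}$:

```python
# Check of the diagonalization above: column j of AP is A v_j, and
# column j of PD is lambda_j v_j, so AP = PD iff each v_j is an
# eigenvector with the stated eigenvalue.

def matmul(X, Y):
    """Product of two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1, 1],
     [0, 2, 1],
     [1, -1, 3]]
P = [[-3, 1, 1],
     [-1, 1, 1],
     [1, 0, 1]]
D = [[1, 0, 0],
     [0, 2, 0],
     [0, 0, 3]]

assert matmul(A, P) == matmul(P, D)
```

All arithmetic is over the integers, so the comparison is exact.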
Repeated Eigenvalues
Consider the matrix
$$A = \begin{pmatrix} 1 & -4 & -2 \\ 0 & 2 & 0 \\ 1 & -5 & 4 \end{pmatrix}$$
The characteristic equation
$$\begin{vmatrix} 1-\lambda & -4 & -2 \\ 0 & 2-\lambda & 0 \\ 1 & -5 & 4-\lambda \end{vmatrix} = 0$$
factorises as
$$(\lambda - 3)(\lambda - 2)^2 = 0$$
so $\lambda = 2$ is a repeated root. For $\lambda = 3$,
$$\begin{pmatrix} -2 & -4 & -2 \\ 0 & -1 & 0 \\ 1 & -5 & 1 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
which gives $a_2 = 0$ and $a_1 = -a_3$, so
$$v_3 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$$
For the repeated eigenvalue $\lambda = 2$ the eigenvector is given by
$$\begin{pmatrix} -1 & -4 & -2 \\ 0 & 0 & 0 \\ 1 & -5 & 2 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Adding the first and third rows forces $a_2 = 0$, and then $a_1 = -2a_3$, so the repeated root yields only the single eigenvector
$$v_2 = \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}$$
We are still short a vector for the change of basis matrix. The solution is
to introduce a generalised eigenvector or power eigenvector as a solution
to the equation
$$(A - 2I)^2 w = 0$$
One obvious solution is $v_2$; another solution is given by
$$(A - 2I)w = v_2$$
$$\begin{pmatrix} -1 & -4 & -2 \\ 0 & 0 & 0 \\ 1 & -5 & 2 \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}$$
This gives the equations
$$b_1 + 4b_2 + 2b_3 = 2$$
$$b_1 - 5b_2 + 2b_3 = 1$$
Subtracting, $9b_2 = 1$, so $b_2 = 1/9$ and then $b_1 + 2b_3 = 14/9$. Choosing $b_1 = 0$ gives
$$w = \begin{pmatrix} 0 \\ 1/9 \\ 7/9 \end{pmatrix}, \qquad P = \begin{pmatrix} 1 & -2 & 0 \\ 0 & 0 & 1/9 \\ -1 & 1 & 7/9 \end{pmatrix}$$
P doesn't quite diagonalize the original matrix:
$$P^{-1}AP = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}$$
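The arithmetic with ninths is easy to get wrong, so here is a quick sketch (exact rational arithmetic; helper names are our own) confirming that $w$ is a genuine generalised eigenvector:

```python
# Exact check that w is a generalised eigenvector for lambda = 2:
# (A - 2I)w = v2, and hence (A - 2I)^2 w = 0.
from fractions import Fraction as F

def matvec(M, v):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

A = [[1, -4, -2],
     [0, 2, 0],
     [1, -5, 4]]
A2I = [[A[i][j] - 2 * (i == j) for j in range(3)] for i in range(3)]  # A - 2I

v2 = [-2, 0, 1]
w = [F(0), F(1, 9), F(7, 9)]

assert matvec(A2I, w) == v2                       # (A - 2I) w = v2
assert matvec(A2I, matvec(A2I, w)) == [0, 0, 0]   # so (A - 2I)^2 w = 0
```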
Another example
Consider the matrix
$$A = \begin{pmatrix} 2 & 0 & 0 \\ -2 & 0 & 2 \\ -3 & -3 & 5 \end{pmatrix}$$
The characteristic equation again factorises as $(\lambda - 3)(\lambda - 2)^2 = 0$.
For $\lambda = 3$,
$$\begin{pmatrix} -1 & 0 & 0 \\ -2 & -3 & 2 \\ -3 & -3 & 2 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
giving $a_1 = 0$ and $3a_2 = 2a_3$, so
$$v_3 = \begin{pmatrix} 0 \\ 2 \\ 3 \end{pmatrix}$$
For $\lambda = 2$,
$$\begin{pmatrix} 0 & 0 & 0 \\ -2 & -2 & 2 \\ -3 & -3 & 3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
This is only one equation,
$$x_1 + x_2 - x_3 = 0$$
So there are two free variables and we can pick two independent vectors to span the subspace:
$$v_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$$
Setting
$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 1 & 1 & 3 \end{pmatrix}, \qquad P^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 3 & -2 \\ -1 & -1 & 1 \end{pmatrix}$$
and
$$P^{-1}AP = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$
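Here the repeated eigenvalue genuinely has a two-dimensional eigenspace, so no generalised eigenvectors are needed. A small sketch (helper names are our own) checks $AP = PD$ and that any combination of the two spanning vectors is again an eigenvector:

```python
# The lambda = 2 eigenspace is two-dimensional: AP = PD holds, and
# every combination of the two spanning vectors is an eigenvector.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 0],
     [-2, 0, 2],
     [-3, -3, 5]]
P = [[1, 0, 0],
     [0, 1, 2],
     [1, 1, 3]]
D = [[2, 0, 0],
     [0, 2, 0],
     [0, 0, 3]]

assert matmul(A, P) == matmul(P, D)

# c1*(1,0,1) + c2*(0,1,1) satisfies x1 + x2 - x3 = 0, so A scales it by 2.
c1, c2 = 3, -5
v = [c1, c2, c1 + c2]
Av = [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]
assert Av == [2 * x for x in v]
```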
In general, suppose the $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $v_1, v_2, \ldots, v_n$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, and form the change of basis matrix
$$P = [\,v_1 \;\; v_2 \;\; \cdots \;\; v_n\,]$$
Now
$$AP = [\,Av_1 \;\; Av_2 \;\; \cdots \;\; Av_n\,] = [\,\lambda_1 v_1 \;\; \lambda_2 v_2 \;\; \cdots \;\; \lambda_n v_n\,]$$
Let
$$D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
But
$$PD = [\,\lambda_1 v_1 \;\; \lambda_2 v_2 \;\; \cdots \;\; \lambda_n v_n\,]$$
and so
$$PD = AP$$
or
$$D = P^{-1}AP$$
and our original matrix has been diagonalized.
If the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$ are distinct then the eigenvectors $v_1, v_2, \ldots, v_k$ are linearly independent. Indeed, suppose $v_1, \ldots, v_k$ are independent and
$$c_1 v_1 + c_2 v_2 + \cdots + c_{k+1} v_{k+1} = 0$$
Applying $A - \lambda_{k+1}I$ annihilates the last term and multiplies each remaining $v_i$ by the nonzero factor $\lambda_i - \lambda_{k+1}$, so the independence of $v_1, \ldots, v_k$ forces
$$c_1 = c_2 = \cdots = c_k = 0$$
and then the original relation gives $c_{k+1} = 0$ as well. The cases of $k = 1$ and $k = 2$ are true and so by induction all the vectors have to be linearly independent.
The space spanned by the eigenvectors is called the eigenspace, so another way of stating the previous theorem about diagonalization is that a matrix is diagonalizable if the eigenspace of the matrix is the whole space.
Functions of matrices
If a matrix $A$ is diagonalizable then it can be expressed in the form
$$A = PDP^{-1}$$
for a change of basis matrix $P$ and a diagonal matrix $D$. Powers of $D$ are easy to compute:
$$D^m = \begin{pmatrix} \lambda_1^m & 0 & \cdots & 0 \\ 0 & \lambda_2^m & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^m \end{pmatrix}$$
To calculate $A^m$, the interior $P^{-1}P$ pairs cancel:
$$A^m = (PDP^{-1})^m = (PDP^{-1})(PDP^{-1})\cdots(PDP^{-1}) = PD^mP^{-1}$$
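As a concrete sketch of this (helper names are our own), we can check $A^m P = PD^m$, an equivalent form that avoids computing $P^{-1}$, using the matrix diagonalized at the start of the lecture:

```python
# A^m = P D^m P^{-1} is equivalent to A^m P = P D^m; verify for m = 5
# with the first example A (eigenvalues 1, 2, 3), entirely in integers.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(X, m):
    """Naive m-fold product, m >= 1."""
    R = X
    for _ in range(m - 1):
        R = matmul(R, X)
    return R

A = [[1, 1, 1],
     [0, 2, 1],
     [1, -1, 3]]
P = [[-3, 1, 1],
     [-1, 1, 1],
     [1, 0, 1]]

m = 5
Dm = [[1 ** m, 0, 0],
      [0, 2 ** m, 0],
      [0, 0, 3 ** m]]   # D^m = diag(1^m, 2^m, 3^m)

assert matmul(matpow(A, m), P) == matmul(P, Dm)
```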
More generally, for a function $f$ defined by a power series,
$$f(A) = P\begin{pmatrix} f(\lambda_1) & 0 & \cdots & 0 \\ 0 & f(\lambda_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f(\lambda_n) \end{pmatrix}P^{-1}$$
For example
$$e^A = I + A + \frac{A^2}{2!} + \cdots = P\begin{pmatrix} e^{\lambda_1} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n} \end{pmatrix}P^{-1}$$
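As an illustration (the matrix here is our own choice, not from the lecture), take $X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, which has eigenvalues $\pm 1$ with eigenvectors $(1,1)$ and $(1,-1)$; the spectral formula $P\,\mathrm{diag}(e, e^{-1})\,P^{-1}$ works out to hyperbolic entries, and a truncated power series agrees to high accuracy:

```python
# Compare e^X from the spectral formula with the truncated series
# I + X + X^2/2! + ... for X = [[0,1],[1,0]].
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

X = [[0.0, 1.0],
     [1.0, 0.0]]

# Spectral formula with P = [[1,1],[1,-1]]:
# e^X = P diag(e, 1/e) P^{-1} = [[cosh 1, sinh 1], [sinh 1, cosh 1]]
expX = [[math.cosh(1), math.sinh(1)],
        [math.sinh(1), math.cosh(1)]]

S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
T = [[1.0, 0.0], [0.0, 1.0]]   # current term X^k / k!
for k in range(1, 21):
    T = [[t / k for t in row] for row in matmul(T, X)]
    S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]

assert all(abs(S[i][j] - expX[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```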
An Example
Find $B^{1/2}$ if
$$B = \begin{pmatrix} 6 & 5 \\ 3 & 4 \end{pmatrix}$$
The characteristic equation
$$\begin{vmatrix} 6-\lambda & 5 \\ 3 & 4-\lambda \end{vmatrix} = \lambda^2 - 10\lambda + 9 = 0$$
gives $\lambda = 1, 9$.
For $\lambda = 1$,
$$\begin{pmatrix} 5 & 5 \\ 3 & 3 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\Rightarrow\quad v_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
For $\lambda = 9$,
$$\begin{pmatrix} -3 & 5 \\ 3 & -5 \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\Rightarrow\quad v_9 = \begin{pmatrix} 5 \\ 3 \end{pmatrix}$$
The diagonal matrix, the change of basis matrix and its inverse are
$$D = \begin{pmatrix} 1 & 0 \\ 0 & 9 \end{pmatrix}, \qquad P = \begin{pmatrix} 1 & 5 \\ -1 & 3 \end{pmatrix}, \qquad P^{-1} = \frac{1}{8}\begin{pmatrix} 3 & -5 \\ 1 & 1 \end{pmatrix}$$
Since
$$D = P^{-1}BP \qquad\Leftrightarrow\qquad B = PDP^{-1}$$
then
$$B^{1/2} = PD^{1/2}P^{-1} = \begin{pmatrix} 1 & 5 \\ -1 & 3 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}\frac{1}{8}\begin{pmatrix} 3 & -5 \\ 1 & 1 \end{pmatrix} = \frac{1}{4}\begin{pmatrix} 9 & 5 \\ 3 & 7 \end{pmatrix}$$
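A quick sketch (exact rational arithmetic; helper names are our own) confirms that this matrix squares back to $B$:

```python
# Check (B^{1/2})^2 = B using exact fractions.
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[6, 5],
     [3, 4]]
Bhalf = [[F(9, 4), F(5, 4)],
         [F(3, 4), F(7, 4)]]   # (1/4) * [[9, 5], [3, 7]]

assert matmul(Bhalf, Bhalf) == B
```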
When an eigenvalue is repeated and the change of basis matrix has to be completed with generalised eigenvectors, the best we can achieve is
$$P^{-1}AP = \begin{pmatrix} \lambda_1 & 1 & 0 & \cdots \\ 0 & \lambda_1 & 1 & \cdots \\ 0 & 0 & \lambda_1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
with the eigenvalues on the diagonal and ones on the diagonal above the main diagonal in the submatrix containing the repeated root(s). Let us write it in the form
$$A = P(D + N)P^{-1}$$
where $D$ is diagonal and
$$N = \begin{pmatrix} 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
Each multiplication by $N$ pushes the ones one diagonal higher, e.g.
$$N^2 = \begin{pmatrix} 0 & 0 & 1 & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
so $N$ is nilpotent. Since $D$ and $N$ commute (on each block $D$ is a multiple of the identity), the binomial theorem gives
$$A^m = P(D + N)^m P^{-1} = P\left(D^m + \binom{m}{1}D^{m-1}N + \binom{m}{2}D^{m-2}N^2 + \cdots\right)P^{-1}$$
and when $N^2 = 0$ only the first two terms survive:
$$A^m = P(D^m + mD^{m-1}N)P^{-1}$$
In matrix form, with repeated eigenvalue $\lambda_1$,
$$A^m = P\begin{pmatrix} \lambda_1^m & m\lambda_1^{m-1} & 0 & \cdots & 0 \\ 0 & \lambda_1^m & m\lambda_1^{m-1} & \cdots & 0 \\ 0 & 0 & \lambda_1^m & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_n^m \end{pmatrix}P^{-1}$$
An example
Find $A^{50}$ if
$$A = \begin{pmatrix} 3 & 1 \\ -4 & -1 \end{pmatrix}$$
The characteristic equation is $(\lambda - 1)^2 = 0$, so $\lambda = 1$ is a repeated eigenvalue with the single eigenvector
$$v_1 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$$
To get another vector for our change of basis matrix we need to solve $(A - I)w = v_1$:
$$\begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \end{pmatrix} \quad\Rightarrow\quad w = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
then
$$P = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}, \qquad P^{-1} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}$$
and
$$A^{50} = P\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{50}P^{-1} = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 50 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 101 & 50 \\ -200 & -99 \end{pmatrix}$$
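Two independent sketches confirm this result (helper names are our own): brute-force multiplication, and the nilpotent shortcut $A^{50} = (I + N)^{50} = I + 50N$, valid because $N = A - I$ satisfies $N^2 = 0$:

```python
# Verify A^50 two ways.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[3, 1],
     [-4, -1]]

# Brute force: multiply A together fifty times.
R = [[1, 0], [0, 1]]
for _ in range(50):
    R = matmul(R, A)
assert R == [[101, 50], [-200, -99]]

# Nilpotent shortcut: N = A - I has N^2 = 0, so A^50 = I + 50N.
N = [[2, 1], [-4, -2]]
assert matmul(N, N) == [[0, 0], [0, 0]]
assert R == [[1 + 50 * 2, 50 * 1], [50 * -4, 1 + 50 * -2]]
```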