
Week 5

3.3 Diagonalization & Eigenvalues


Determinants play a crucial role in describing the behaviour of a system that changes with time. In such an analysis it becomes necessary to compute large powers of a square matrix: $A, A^2, A^3, \ldots$. This is time-consuming unless we have a diagonal matrix $D$.

Recall:
$$D^2 = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix} \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix} = \begin{bmatrix} d_1^2 & 0 & \cdots & 0 \\ 0 & d_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n^2 \end{bmatrix}$$
Our goal will be to express a square matrix A in terms of a diagonal matrix because of the simplicity
of computing powers of diagonal matrices.
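As a quick illustration (a NumPy sketch, not part of the original notes; the diagonal entries are chosen arbitrarily), a power of a diagonal matrix can be computed entry-by-entry on the diagonal, which is much cheaper than repeated matrix multiplication:

```python
import numpy as np

# A diagonal matrix D with diagonal entries d1, ..., dn (values chosen arbitrarily).
d = np.array([2.0, -1.0, 0.5])
D = np.diag(d)

# D^3 by repeated matrix multiplication...
D_cubed = np.linalg.matrix_power(D, 3)

# ...equals the diagonal matrix of the entry-wise cubes d_i^3.
D_cubed_fast = np.diag(d ** 3)
```

Both computations agree; the second touches only the $n$ diagonal entries.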
Definition:
Let $A$ be an $n \times n$ matrix. The scalar $\lambda$ is called an eigenvalue of $A$ if there is a non-zero vector $\vec{x} \in \mathbb{R}^n$ such that
$$A\vec{x} = \lambda\vec{x}.$$
The vector $\vec{x} \neq \vec{0}$ is called an eigenvector of $A$ corresponding to $\lambda$.
Example:
The vector $\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ is an eigenvector of $A = \begin{bmatrix} 3 & 0 \\ 8 & -1 \end{bmatrix}$ since
$$A\vec{x} = \begin{bmatrix} 3 & 0 \\ 8 & -1 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix} = 3\begin{bmatrix} 1 \\ 2 \end{bmatrix},$$
and $\lambda = 3$ is an eigenvalue of $A$.
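This check can also be carried out numerically (a NumPy sketch, not part of the original notes; the entries follow the 2×2 example above):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])
x = np.array([1.0, 2.0])

# A x should equal lambda * x with lambda = 3.
Ax = A @ x
lam = 3.0
```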
It turns out it will be easier for us to first find the eigenvalues of a matrix and then find the corresponding eigenvectors. To find the eigenvalues of an $n \times n$ matrix $A$, we rewrite the equation $A\vec{x} = \lambda\vec{x}$ as
$$A\vec{x} = \lambda I\vec{x}$$
$$A\vec{x} - \lambda I\vec{x} = \vec{0}$$
$$(A - \lambda I)\vec{x} = \vec{0}$$
We are interested in the non-trivial solutions to this homogeneous matrix equation. These will be the non-zero eigenvectors. We know that a homogeneous system $A\vec{x} = \vec{0}$ has non-trivial solutions when $\det(A) = 0$. Therefore, $(A - \lambda I)\vec{x} = \vec{0}$ has non-trivial solutions when $\det(A - \lambda I) = 0$. This tells us that the eigenvalues of $A$ must satisfy the characteristic equation $c_A(\lambda) = \det(A - \lambda I) = 0$.
Once we find the eigenvalues from the characteristic equation, we substitute them back into the original equation $(A - \lambda I)\vec{x} = \vec{0}$ and find the non-trivial solutions $\vec{x}$ to this homogeneous system.
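The two-step procedure just described can be sketched numerically (NumPy; the matrix here is an arbitrary illustration, not taken from the notes):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Step 1: the eigenvalues are the roots of the characteristic
# polynomial det(A - lambda*I); np.poly gives its coefficients.
coeffs = np.poly(A)            # here: lambda^2 - 4*lambda + 3
eigenvalues = np.roots(coeffs)

# Step 2: for each eigenvalue, an eigenvector spans the null space of
# A - lambda*I; the last right-singular vector of the SVD is a basis for it.
eigenvectors = []
for lam in eigenvalues:
    _, _, vt = np.linalg.svd(A - lam * np.eye(2))
    eigenvectors.append(vt[-1])
```

Each recovered pair satisfies $A\vec{x} = \lambda\vec{x}$, matching the derivation above.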

Example:
Find the eigenvalues and eigenvectors of $A = \begin{bmatrix} 0 & 0 & 2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{bmatrix}$.
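Whatever eigenvalues the example produces, each must satisfy $\det(A - \lambda I) = 0$; that can be verified numerically (a NumPy sketch, not part of the notes, using the entries as printed above):

```python
import numpy as np

# The 3x3 matrix from the example, entries as printed above.
A = np.array([[0.0, 0.0, 2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])

# Every eigenvalue satisfies the characteristic equation det(A - lambda*I) = 0.
eigenvalues, eigenvectors = np.linalg.eig(A)
residuals = [abs(np.linalg.det(A - lam * np.eye(3))) for lam in eigenvalues]
```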

Definition:
The eigenvalue $\lambda_0$ has multiplicity $m$ if $(\lambda - \lambda_0)$ occurs $m$ times as a factor of the characteristic polynomial.
For example, if $c_A(\lambda) = (\lambda - 5)^3(\lambda + 1)^2$, then the eigenvalue $5$ has multiplicity 3 and the eigenvalue $-1$ has multiplicity 2.
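As a sanity check on reading multiplicities off a factored characteristic polynomial (a NumPy sketch, not part of the notes), one can expand $(\lambda-5)^3(\lambda+1)^2$ and confirm it vanishes at both eigenvalues:

```python
import numpy as np

# Expand (lambda - 5)^3 * (lambda + 1)^2 into polynomial coefficients.
factor_5 = np.array([1.0, -5.0])   # (lambda - 5)
factor_m1 = np.array([1.0, 1.0])   # (lambda + 1)
c = np.polymul(np.polymul(np.polymul(factor_5, factor_5), factor_5),
               np.polymul(factor_m1, factor_m1))

# The expanded polynomial vanishes at the eigenvalues 5 and -1.
value_at_5 = np.polyval(c, 5.0)
value_at_m1 = np.polyval(c, -1.0)
```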
Theorem 1:
If, for each eigenvalue $\lambda$ of $A$, the multiplicity of $\lambda$ is equal to the number of basic eigenvectors corresponding to $\lambda$, then $A$ is diagonalizable.
Definition:
A square matrix $A$ is called diagonalizable if there exists an invertible matrix $P$ such that $P^{-1}AP$ is a diagonal matrix $D$. We say $P$ is a diagonalizing matrix for $A$ and we can write $P^{-1}AP = D$.
To find P :
Place the basic eigenvectors as columns of P .
To find D:
Place the eigenvalues on the main diagonal of D respecting the order you placed the basic eigenvectors
in P .
Why?
Recall that the eigenvalues $\lambda$ and corresponding eigenvectors $\vec{x}$ satisfy the equation
$$A\vec{x} = \lambda\vec{x}$$
and if $A$ is diagonalizable, the matrices $P$, $D$ and $A$ satisfy the equation
$$P^{-1}AP = D$$
We want to see what the connection is between eigenvectors and P , and eigenvalues and D.
We will rewrite this last equation as
AP = P D
Let $P$ have columns $\vec{p}_1, \vec{p}_2, \ldots, \vec{p}_n$ and let the main diagonal of $D$ be $d_1, d_2, \ldots, d_n$.
Then
$$AP = A\begin{bmatrix} \vec{p}_1 & \vec{p}_2 & \cdots & \vec{p}_n \end{bmatrix} = \begin{bmatrix} A\vec{p}_1 & A\vec{p}_2 & \cdots & A\vec{p}_n \end{bmatrix}$$
and
$$PD = \begin{bmatrix} \vec{p}_1 & \vec{p}_2 & \cdots & \vec{p}_n \end{bmatrix}\begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix} = \begin{bmatrix} d_1\vec{p}_1 & d_2\vec{p}_2 & \cdots & d_n\vec{p}_n \end{bmatrix}$$
If $A$ is diagonalizable, then $AP = PD$ so we have
$$\begin{bmatrix} A\vec{p}_1 & A\vec{p}_2 & \cdots & A\vec{p}_n \end{bmatrix} = \begin{bmatrix} d_1\vec{p}_1 & d_2\vec{p}_2 & \cdots & d_n\vec{p}_n \end{bmatrix}$$
Equating columns we have
$$A\vec{p}_1 = d_1\vec{p}_1$$
$$A\vec{p}_2 = d_2\vec{p}_2$$
$$\vdots$$
$$A\vec{p}_n = d_n\vec{p}_n$$
This tells us $\vec{p}_i$ is an eigenvector of $A$ corresponding to the eigenvalue $d_i$, for $1 \le i \le n$.
In other words, the columns of $P$ are eigenvectors of $A$ and the corresponding eigenvalues are the main diagonal entries of $D$ in the same column.
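This relationship between $P$, $D$ and $A$ can be checked numerically (a NumPy sketch, not part of the notes; the matrix reuses the entries of the earlier 2×2 example, taken as [[3, 0], [8, -1]]):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])

# Basic eigenvectors as the columns of P: [1, 2] for lambda = 3
# and [0, 1] for lambda = -1 (found by hand from (A - lambda*I)x = 0).
P = np.array([[1.0, 0.0],
              [2.0, 1.0]])

# Eigenvalues on the diagonal of D, in the same order as the columns of P.
D = np.diag([3.0, -1.0])

# P^{-1} A P should be the diagonal matrix D, equivalently A P = P D.
result = np.linalg.inv(P) @ A @ P
```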
Example:
Is $A = \begin{bmatrix} 1 & 3 & 9 \\ 0 & 5 & 18 \\ 0 & 2 & 7 \end{bmatrix}$ diagonalizable? If so, find the diagonalizing matrix $P$ and diagonal matrix $D$.
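One way to check the answer numerically (a NumPy sketch, not part of the notes; entries taken as printed above):

```python
import numpy as np

A = np.array([[1.0, 3.0, 9.0],
              [0.0, 5.0, 18.0],
              [0.0, 2.0, 7.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# eigenvectors; if that matrix is invertible, A is diagonalizable and
# P^{-1} A P recovers the diagonal matrix of eigenvalues.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)
recovered = np.linalg.inv(P) @ A @ P
```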
