
16 Eigenvalues and eigenvectors

Definition: If a vector $x \neq 0$ satisfies the equation $Ax = \lambda x$ for some real or complex number $\lambda$, then $\lambda$ is said to be an eigenvalue of the matrix $A$, and $x$ is said to be an eigenvector of $A$ corresponding to the eigenvalue $\lambda$.

Example: If
$$A = \begin{pmatrix} 2 & 3 \\ 3 & 2 \end{pmatrix}, \quad\text{and}\quad x = \begin{pmatrix} 1 \\ 1 \end{pmatrix},$$
then
$$Ax = \begin{pmatrix} 5 \\ 5 \end{pmatrix} = 5x.$$
So $\lambda = 5$ is an eigenvalue of $A$, and $x$ an eigenvector corresponding to this eigenvalue.

Remark: Note that the definition of eigenvector requires that $v \neq 0$. The reason for this is that if $v = 0$ were allowed, then any number $\lambda$ would be an eigenvalue, since the statement $A0 = \lambda 0$ holds for any $\lambda$. On the other hand, we can have $\lambda = 0$ and $v \neq 0$. See the exercise below.
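The example above is easy to check by hand, but a short sketch in plain Python (no libraries assumed) makes the computation concrete:

```python
# Multiply a 2x2 matrix by a 2-vector, both given as plain Python lists.
def mat_vec(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

A = [[2, 3], [3, 2]]   # the matrix from the example above
x = [1, 1]             # the claimed eigenvector
print(mat_vec(A, x))   # [5, 5], which is 5*x, so lambda = 5
```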

Those of you familiar with some basic chemistry have already encountered eigenvalues and eigenvectors in your study of the hydrogen atom. The electron in this atom can lie in any one of a countable infinity of orbits, each of which is labelled by a different value of the energy of the electron. These quantum numbers (the possible values of the energy) are in fact the eigenvalues of the Hamiltonian (a differential operator involving the Laplacian $\Delta$). The allowed values of the energy are those numbers $\lambda$ such that $H\psi = \lambda\psi$, where the eigenvector $\psi$ is the wave function of the electron in this orbit. (This is the correct description of the hydrogen atom as of about 1925; things have become a bit more sophisticated since then, but it's still a good picture.)

Exercises:

1. Show that
$$\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
is also an eigenvector of the matrix $A$ above. What's the eigenvalue?

2. Show that
$$v = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
is an eigenvector of the matrix
$$\begin{pmatrix} 1 & 1 \\ 3 & 3 \end{pmatrix}.$$
What is the eigenvalue?

3. Eigenvectors are not unique. Show that if $v$ is an eigenvector for $A$, then so is $cv$, for any real number $c \neq 0$.

Definition: Suppose $\lambda$ is an eigenvalue of $A$. The set
$$E_\lambda = \{ v \in \mathbb{R}^n : Av = \lambda v \}$$
is called the eigenspace of $A$ corresponding to the eigenvalue $\lambda$.

Exercise: Show that $E_\lambda$ is a subspace of $\mathbb{R}^n$. (N.B.: the definition of $E_\lambda$ does not require $v \neq 0$. $E_\lambda$ consists of all the eigenvectors plus the zero vector; otherwise it wouldn't be a subspace.) What is $E_0$?

Example: The matrix
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} \cos(\pi/2) & -\sin(\pi/2) \\ \sin(\pi/2) & \cos(\pi/2) \end{pmatrix}$$
represents a counterclockwise rotation through the angle $\pi/2$. Apart from $0$, there is no vector which is mapped by $A$ to a multiple of itself. So not every matrix has real eigenvectors.

Exercise: What are the eigenvalues of this matrix?

16.1 Computations with eigenvalues and eigenvectors

How do we find the eigenvalues and eigenvectors of a matrix A?

Suppose $v \neq 0$ is an eigenvector. Then for some $\lambda \in \mathbb{R}$, $Av = \lambda v$. Then
$$Av - \lambda v = 0, \quad\text{or, equivalently,}\quad (A - \lambda I)v = 0.$$
So $v$ is a nontrivial solution to the homogeneous system of equations determined by the square matrix $A - \lambda I$. This can only happen if $\det(A - \lambda I) = 0$. On the other hand, if $\lambda$ is a real number such that $\det(A - \lambda I) = 0$, this means exactly that there's a nontrivial solution $v$ to $(A - \lambda I)v = 0$. So $\lambda$ is an eigenvalue, and $v \neq 0$ is an eigenvector. Summarizing, we have the

Theorem: $\lambda$ is an eigenvalue of $A$ if and only if $\det(A - \lambda I) = 0$. If $\lambda$ is real, then there's an eigenvector corresponding to $\lambda$.

How do we find the eigenvalues? For a $2 \times 2$ matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
we compute
$$\det(A - \lambda I) = \det \begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = \lambda^2 - (a + d)\lambda + (ad - bc).$$

Definition: The polynomial $p_A(\lambda) = \det(A - \lambda I)$ is called the characteristic polynomial of the matrix $A$. The eigenvalues of $A$ are just the roots of the characteristic polynomial. The equation for the roots, $p_A(\lambda) = 0$, is called the characteristic equation of the matrix $A$.

Example: If
$$A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix},$$
then
$$A - \lambda I = \begin{pmatrix} 1 - \lambda & 3 \\ 3 & 1 - \lambda \end{pmatrix}, \quad\text{and}\quad p_A(\lambda) = (1 - \lambda)^2 - 9 = \lambda^2 - 2\lambda - 8.$$
This factors as $p_A(\lambda) = (\lambda - 4)(\lambda + 2)$, so there are two eigenvalues: $\lambda_1 = 4$ and $\lambda_2 = -2$.
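Since the characteristic polynomial of a $2 \times 2$ matrix is just a quadratic, the eigenvalues can be computed directly with the quadratic formula. A minimal sketch in plain Python (assuming the discriminant is nonnegative, i.e. real eigenvalues):

```python
import math

# Sketch: eigenvalues of a 2x2 matrix as the roots of its characteristic
# polynomial p(l) = l^2 - (a + d) l + (ad - bc), via the quadratic formula.
# Assumes the discriminant is nonnegative (real eigenvalues).
def eigenvalues_2x2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    r = math.sqrt(tr * tr - 4 * det)
    return ((tr + r) / 2, (tr - r) / 2)

print(eigenvalues_2x2([[1, 3], [3, 1]]))  # (4.0, -2.0)
```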

We should be able to find an eigenvector for each of these eigenvalues. To do so, we must find a nontrivial solution to the corresponding homogeneous equation $(A - \lambda I)v = 0$. For $\lambda_1 = 4$, we have the homogeneous system
$$\begin{pmatrix} 1 - 4 & 3 \\ 3 & 1 - 4 \end{pmatrix} v = \begin{pmatrix} -3 & 3 \\ 3 & -3 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
This leads to the two equations $-3v_1 + 3v_2 = 0$ and $3v_1 - 3v_2 = 0$. Notice that the first equation is a multiple of the second, so there's really only one equation to solve.

Exercise: What property of the matrix $A - \lambda I$ guarantees that one of these equations will be a multiple of the other?

The general solution to the homogeneous system $3v_1 - 3v_2 = 0$ consists of all vectors $v$ such that
$$v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = c \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad\text{where $c$ is arbitrary.}$$
Notice that, as long as $c \neq 0$, this is an eigenvector. The set of all eigenvectors is a line with the origin missing. The one-dimensional subspace of $\mathbb{R}^2$ obtained by allowing $c = 0$ as well is what we called $E_4$ in the last section.

We get an eigenvector by choosing any nonzero element of $E_4$. Taking $c = 1$ gives the eigenvector
$$v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
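The eigenvector computation can also be sketched in code. Since the rows of $A - \lambda I$ are dependent, solving just the first row already produces an eigenvector; the sketch below (plain Python, with hypothetical helper name `eigenvector_2x2`) relies on exactly that fact:

```python
# Sketch: once an eigenvalue l is known, the rows of A - l*I are dependent,
# so solving just the first row, (a - l) v1 + b v2 = 0, yields an eigenvector.
def eigenvector_2x2(A, l):
    (a, b), (c, d) = A
    if b != 0:
        return [b, l - a]      # satisfies the first row; rows are dependent
    if a != l:
        return [0, 1]          # first row forces v1 = 0
    return [1, 0]              # first row is zero; e1 works

A = [[1, 3], [3, 1]]
print(eigenvector_2x2(A, 4))   # [3, 3], a multiple of (1, 1)
```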

Exercises:

1. Find the subspace $E_{-2}$ and show that
$$v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
is an eigenvector corresponding to $\lambda_2 = -2$.

2. Find the eigenvalues and corresponding eigenvectors of the matrix
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 0 \end{pmatrix}.$$

3. Same question for the matrix
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$

16.2 Some observations

What are the possibilities for the characteristic polynomial $p_A$? For a $2 \times 2$ matrix $A$, it's a polynomial of degree 2, so there are 3 cases:

1. The two roots are real and distinct: $\lambda_1 \neq \lambda_2$, $\lambda_1, \lambda_2 \in \mathbb{R}$. We just worked out an example of this.

2. The roots are complex conjugates of one another: $\lambda_1 = a + ib$, $\lambda_2 = a - ib$.

Example:
$$A = \begin{pmatrix} 2 & 3 \\ -3 & 2 \end{pmatrix}.$$
Here, $p_A(\lambda) = \lambda^2 - 4\lambda + 13 = 0$ has the two roots $\lambda = 2 \pm 3i$. Now there's certainly no real vector $v$ with the property that $Av = (2 + 3i)v$, so there are no eigenvectors in the usual sense. But there are complex eigenvectors corresponding to the complex eigenvalues. For example, if
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$
then $p_A(\lambda) = \lambda^2 + 1$ has the complex eigenvalues $\lambda = \pm i$. You can easily check that $Av = iv$, where
$$v = \begin{pmatrix} i \\ 1 \end{pmatrix}.$$
We won't worry about complex eigenvectors in this course.

3. $p_A(\lambda)$ has a repeated root. An example is
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2.$$
Here $p_A(\lambda) = (1 - \lambda)^2$ and $\lambda = 1$ is the only eigenvalue. The matrix $A - \lambda I = A - I$ is the zero matrix. So there are no restrictions on the components of the eigenvectors: any nonzero vector in $\mathbb{R}^2$ is an eigenvector corresponding to this eigenvalue.

But for
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
as you saw in the exercise above, we also have $p_A(\lambda) = (1 - \lambda)^2$. In this case, though, there is just a one-dimensional eigenspace.
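The complex eigenpair from case 2 can be checked directly with Python's built-in complex numbers (`1j` is Python's imaginary unit); a small sketch, no libraries assumed:

```python
# Sketch: checking the complex eigenpair from case 2.
# A = [[0, -1], [1, 0]], v = (i, 1), claimed eigenvalue lambda = i.
A = [[0, -1], [1, 0]]
v = [1j, 1]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
print(Av == [1j * x for x in v])   # True: Av = i*v, so v is an eigenvector
```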

16.3 Diagonalizable matrices

Example: In the preceding lecture, we showed that, for the matrix
$$A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix},$$
if we change the basis using
$$E = (e_1 | e_2) = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},$$
then, in this new basis, we have
$$A_e = E^{-1} A E = \begin{pmatrix} 4 & 0 \\ 0 & -2 \end{pmatrix},$$
which is diagonal.

Definition: Let $A$ be $n \times n$. We say that $A$ is diagonalizable if there exists a basis $\{e_1, \ldots, e_n\}$ of $\mathbb{R}^n$, with corresponding change of basis matrix $E = (e_1 | \cdots | e_n)$, such that
$$A_e = E^{-1} A E$$
is diagonal.

In the example, our matrix $E$ has the form $E = (e_1 | e_2)$, where the two columns are two eigenvectors of $A$ corresponding to the eigenvalues $\lambda = 4$ and $\lambda = -2$. In fact, this is the general recipe:

Theorem: The matrix $A$ is diagonalizable $\iff$ there is a basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$.

Proof: Suppose $\{e_1, \ldots, e_n\}$ is a basis for $\mathbb{R}^n$ with the property that $Ae_j = \lambda_j e_j$, $1 \le j \le n$. Form the matrix $E = (e_1 | e_2 | \cdots | e_n)$. We have
$$AE = (Ae_1 | Ae_2 | \cdots | Ae_n) = (\lambda_1 e_1 | \lambda_2 e_2 | \cdots | \lambda_n e_n) = ED,$$
where $D = \mathrm{Diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$. Evidently, $A_e = E^{-1}AE = D$, and $A$ is diagonalizable. Conversely, if $A$ is diagonalizable, then the columns of the matrix which diagonalizes $A$ are the required basis of eigenvectors.

Definition: To diagonalize a matrix $A$ means to find a matrix $E$ such that $E^{-1}AE$ is diagonal.

So, in $\mathbb{R}^2$, a matrix $A$ can be diagonalized $\iff$ we can find two linearly independent eigenvectors.

Examples:

• Diagonalize the matrix
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 0 \end{pmatrix}.$$

Solution: From the previous exercise set, we have $\lambda_1 = 3$ and $\lambda_2 = -2$, with corresponding eigenvectors
$$v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 2 \\ -3 \end{pmatrix}.$$
We form the matrix
$$E = (v_1 | v_2) = \begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix}, \quad\text{with}\quad E^{-1} = \frac{1}{5}\begin{pmatrix} 3 & 2 \\ 1 & -1 \end{pmatrix},$$
and check that $E^{-1}AE = \mathrm{Diag}(3, -2)$. Of course, we don't really need to check: the result is guaranteed by the theorem above!

• The matrix
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
has only the one-dimensional eigenspace spanned by the eigenvector
$$\begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
There is no basis of $\mathbb{R}^2$ consisting of eigenvectors of $A$, so this matrix cannot be diagonalized.
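The diagonalization in the first example can be verified numerically with a hand-rolled $2 \times 2$ inverse and product; a plain-Python sketch (floating-point, so the off-diagonal entries come out only approximately zero):

```python
# Sketch: verifying E^{-1} A E = Diag(3, -2) for A = [[1, 2], [3, 0]].
def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv_2x2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 0]]
E = [[1, 2], [1, -3]]                  # columns: eigenvectors (1,1) and (2,-3)
D = mat_mul(inv_2x2(E), mat_mul(A, E))
print(D)                               # approximately [[3, 0], [0, -2]]
```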

Theorem: If $\lambda_1$ and $\lambda_2$ are distinct eigenvalues of $A$, with corresponding eigenvectors $v_1$, $v_2$, then $\{v_1, v_2\}$ is linearly independent.

Proof: Suppose $c_1 v_1 + c_2 v_2 = 0$, where one of the coefficients, say $c_1$, is nonzero. Then $v_1 = \mu v_2$ for some $\mu \neq 0$. (If $\mu = 0$, then $v_1 = 0$, and $v_1$ by definition is not an eigenvector.) Multiplying both sides on the left by $A$ gives
$$Av_1 = \lambda_1 v_1 = \mu A v_2 = \mu \lambda_2 v_2.$$
On the other hand, multiplying the same equation $v_1 = \mu v_2$ by $\lambda_1$ and then subtracting the two equations gives
$$0 = \mu(\lambda_2 - \lambda_1) v_2,$$
which is impossible, since neither $\mu$ nor $\lambda_2 - \lambda_1$ is $0$, and $v_2 \neq 0$.

It follows that if a $2 \times 2$ matrix $A$ has two distinct real eigenvalues, then it has two linearly independent eigenvectors and can be diagonalized. In a similar way, if an $n \times n$ matrix $A$ has $n$ distinct real eigenvalues, it is diagonalizable.

Exercises:

1. Find the eigenvalues and eigenvectors of the matrix
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}.$$
Form the matrix $E$ and verify that $E^{-1}AE$ is diagonal.

2. List the two reasons a matrix may fail to be diagonalizable. Give examples of both
cases.

3. (**) An arbitrary $2 \times 2$ symmetric matrix ($A = A^t$) has the form
$$A = \begin{pmatrix} a & b \\ b & c \end{pmatrix},$$
where $a$, $b$, $c$ can be any real numbers. Show that $A$ always has real eigenvalues. When are the two eigenvalues equal?

4. (**) Consider the matrix
$$A = \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}.$$
Show that the eigenvalues of this matrix are $1 + 2i$ and $1 - 2i$. Find a complex eigenvector for each of these eigenvalues. The two eigenvectors are linearly independent and form a basis for $\mathbb{C}^2$.
