
Math 121B, Fall 10

Instructor : Yunho Kim


The Final Exam
Date : Dec. 10th
UID:
Name:
Class: Lec. A, 2:00pm - 2:50pm

Number    Score
1. a)
   b)
2.
3. a)
   b)
4.
5.
6.
Total Score          / 100
Problem 1. We define a matrix A by

A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}.

a) (10pts) Is this matrix diagonalizable? Justify your answer.
Solution. First of all, 0 and 1 are the only eigenvalues of A. Since the characteristic polynomial f(t) is t(1 − t)²,

dim(E_0) = 1,  dim(E_1) ≤ 2.

Let x ∈ E_1. Then

Ax = x ⟺ (A − I)x = 0 ⟺ \begin{pmatrix} 0 & 0 & 0 \\ 1 & -1 & 0 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = O ⟹ x_1 = x_2 = 0.

Therefore, dim(E_1) = 1, so dim(E_0) + dim(E_1) = 2 < 3, which means that A is not diagonalizable.
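As a quick cross-check of a) (a sketch only, not part of the exam solution; it assumes Python with NumPy is available):

import numpy as np

A = np.array([[1., 0., 0.],
              [1., 0., 0.],
              [1., 1., 1.]])

# Characteristic polynomial coefficients of A: t^3 - 2t^2 + t = t(t - 1)^2.
print(np.round(np.poly(A), 6))

# dim(E_1) = 3 - rank(A - I); rank(A - I) = 2, so the eigenvalue 1 contributes
# only one independent eigenvector, confirming that A is not diagonalizable.
print(3 - np.linalg.matrix_rank(A - np.eye(3)))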
b) (10pts) If A is diagonalizable, then find a nonsingular matrix Q and a diagonal matrix D such that A = Q⁻¹DQ. If A is not diagonalizable, then A will have a singular value decomposition A = UΣV*, where U, V are unitary matrices and Σ is a diagonal matrix in this case. Find U, Σ, and V.

Solution. Since it is proved that A is not diagonalizable, A has a singular value decomposition. First of all, we need to find the eigenvalues of

A*A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}.
The characteristic polynomial f(t) is t(t − 1)(t − 4). So λ₁ = 4, λ₂ = 1, λ₃ = 0. Notice that

(A*A) \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = O,  (A*A − I) \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} = O,  (A*A − 4I) \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} = O.
Let β = {v₁, v₂, v₃}, where

v₁ = (1/√6) \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix},  v₂ = (1/√3) \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix},  v₃ = (1/√2) \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
Then β is an orthonormal basis for R³ with

(A*A)v₁ = 4v₁,  (A*A)v₂ = v₂,  (A*A)v₃ = O,

which means that β consists of eigenvectors of A*A. Hence, the singular values are

σ₁ = 2 = √λ₁,  σ₂ = 1 = √λ₂.
Therefore, we have

V = \begin{pmatrix} v₁ & v₂ & v₃ \end{pmatrix} = \begin{pmatrix} 2/√6 & -1/√3 & 0 \\ 1/√6 & 1/√3 & 1/√2 \\ 1/√6 & 1/√3 & -1/√2 \end{pmatrix}

and

Σ = \begin{pmatrix} √λ₁ & 0 & 0 \\ 0 & √λ₂ & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
As for U, since A = UΣV* ⟺ AV = UΣV*V = UΣ, we know that

σ₁u₁ = Av₁,  σ₂u₂ = Av₂.
So

u₁ = (1/σ₁)Av₁ = (1/√6) \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix},  u₂ = (1/σ₂)Av₂ = (1/√3) \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}.
Now all we need to do is to find a unit vector u₃ which is orthogonal to u₁ and u₂, which can be found to be

u₃ = (1/√2) \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}.
Therefore,

U = \begin{pmatrix} u₁ & u₂ & u₃ \end{pmatrix} = \begin{pmatrix} 1/√6 & -1/√3 & 1/√2 \\ 1/√6 & -1/√3 & -1/√2 \\ 2/√6 & 1/√3 & 0 \end{pmatrix}.
We finally obtain a singular value decomposition

A = UΣV* = \begin{pmatrix} 1/√6 & -1/√3 & 1/√2 \\ 1/√6 & -1/√3 & -1/√2 \\ 2/√6 & 1/√3 & 0 \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 2/√6 & 1/√6 & 1/√6 \\ -1/√3 & 1/√3 & 1/√3 \\ 0 & 1/√2 & -1/√2 \end{pmatrix}.
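A quick numerical check of this factorization (a sketch only, assuming Python with NumPy; not part of the exam solution):

import numpy as np

s6, s3, s2 = np.sqrt(6), np.sqrt(3), np.sqrt(2)

A = np.array([[1., 0., 0.], [1., 0., 0.], [1., 1., 1.]])
U = np.array([[1/s6, -1/s3,  1/s2],
              [1/s6, -1/s3, -1/s2],
              [2/s6,  1/s3,  0.  ]])
S = np.diag([2., 1., 0.])
V = np.array([[2/s6, -1/s3,  0.  ],
              [1/s6,  1/s3,  1/s2],
              [1/s6,  1/s3, -1/s2]])

# U and V should be orthogonal, and U S V^T should reproduce A.
print(np.allclose(U.T @ U, np.eye(3)), np.allclose(V.T @ V, np.eye(3)))
print(np.allclose(U @ S @ V.T, A))

# NumPy's own SVD returns the same singular values 2, 1, 0.
print(np.round(np.linalg.svd(A, compute_uv=False), 6))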
Problem 2. (20pts) Let T be a normal operator on a finite dimensional real inner product space V whose characteristic polynomial splits. Prove that V has an orthonormal basis of eigenvectors of T, which is equivalent to T being self-adjoint.
Solution. Since the characteristic polynomial of T splits, if λ₁, ..., λ_k are the distinct eigenvalues of T, then

V = K_{λ₁} ⊕ ⋯ ⊕ K_{λ_k}.

First of all, we would like to show that K_{λ₁}, ..., K_{λ_k} are orthogonal to each other. To see this, it suffices to show that for any nonzero v ∈ K_λ, w ∈ K_μ with distinct λ, μ ∈ {λ₁, ..., λ_k},

⟨v, w⟩ = 0.

There exist m₁ > 0 and m₂ > 0 such that

(T − λI)^{m₁}v = 0 ≠ (T − λI)^{m₁−1}v,  (T − μI)^{m₂}w = 0 ≠ (T − μI)^{m₂−1}w.
Let y_i = (T − λI)^{m₁−i}v and z_j = (T − μI)^{m₂−j}w for 1 ≤ i ≤ m₁, 1 ≤ j ≤ m₂. Notice that

λ⟨y₁, z₁⟩ = ⟨Ty₁, z₁⟩ = ⟨y₁, T*z₁⟩ = μ⟨y₁, z₁⟩
since T is normal and y₁, z₁ are eigenvectors of T. Hence, ⟨y₁, z₁⟩ = 0. Now suppose that ⟨y_i, z₁⟩ = 0 for 1 ≤ i ≤ l − 1. Then since ⟨y_{l−1}, z₁⟩ = ⟨(T − λI)y_l, z₁⟩ = 0,
λ⟨y_l, z₁⟩ = ⟨Ty_l, z₁⟩ = ⟨y_l, T*z₁⟩ = μ⟨y_l, z₁⟩.
Therefore, ⟨y_l, z₁⟩ = 0. In particular, ⟨y_{m₁}, z₁⟩ = ⟨v, z₁⟩ = 0. Likewise, it is easy to show that ⟨v, z_j⟩ = 0 for 1 ≤ j ≤ m₂, and in particular ⟨v, z_{m₂}⟩ = ⟨v, w⟩ = 0.
Since we just showed K_{λ_i} ⊥ K_{λ_j} if i ≠ j, it suffices to consider the case when V = K_λ with dim(V) = n. Let β be an orthonormal basis for V and A = [T]_β. Then A is normal. Now consider A as a normal operator on the finite dimensional complex inner product space C^n. Since A is normal, there exists a unitary complex matrix U and a diagonal matrix D such that

A = UDU*.

Notice that D = λI since λ is the only eigenvalue of T, which means that

A = UDU* = λI.

Hence T = λI on V = K_λ, so any orthonormal basis of K_λ consists of eigenvectors of T; taking the union of such bases over the mutually orthogonal subspaces K_{λ₁}, ..., K_{λ_k} yields an orthonormal basis of V consisting of eigenvectors of T.
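A small numerical illustration of the statement (a sketch only, assuming Python with NumPy; the matrix below is a hypothetical example, not from the exam): a real normal matrix with real eigenvalues is symmetric, and an orthonormal eigenbasis can be computed directly.

import numpy as np

rng = np.random.default_rng(0)

# Build a real normal matrix whose characteristic polynomial splits over R:
# take Q D Q^T with Q orthogonal and D real diagonal.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
D = np.diag(rng.standard_normal(4))
A = Q @ D @ Q.T

# A is normal and, as the problem asserts, self-adjoint (symmetric).
print(np.allclose(A @ A.T, A.T @ A), np.allclose(A, A.T))

# eigh returns an orthonormal basis of eigenvectors of A.
w, P = np.linalg.eigh(A)
print(np.allclose(P.T @ P, np.eye(4)), np.allclose(A @ P, P @ np.diag(w)))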
Problem 3. Let V be a finite dimensional inner product space over F with inner product ⟨·,·⟩_V and dim(V) = n. Let β = {v₁, v₂, ..., v_n} be an orthonormal basis for V. We define V*, the dual space of V, to be the set of all linear transformations from V to F. Then V* is a vector space over F. For i = 1, 2, ..., n, we define f_i : V → F by

f_i( Σ_{j=1}^{n} a_j v_j ) = a_i  for any a₁, ..., a_n ∈ F.
a) (10pts) Show that {f₁, f₂, ..., f_n} is a basis for V*.
Solution. It is easy: all we need to check is that V* = span({f₁, f₂, ..., f_n}) and that f₁, ..., f_n are linearly independent. For f ∈ V* and v ∈ V, writing v = Σ_{j=1}^{n} f_j(v) v_j and applying f gives

f(v) = f(v₁)f₁(v) + f(v₂)f₂(v) + ⋯ + f(v_n)f_n(v),

that is,

f = Σ_{i=1}^{n} f(v_i) f_i.

This shows that {f₁, f₂, ..., f_n} spans V*. That {f₁, f₂, ..., f_n} is linearly independent is even easier to prove: if Σ_{i=1}^{n} c_i f_i = 0, then evaluating at v_j gives c_j = 0 for each j.
b) (10pts) Let T : V → V be a linear operator. Now we define a pairing (or a function)

⟨·, ·⟩ : V × V* → F

by

⟨x, f⟩ = f(x).

Notice that ⟨·, ·⟩ is NOT an inner product. Then, we know

⟨T(x), f⟩ = f(T(x)) for any x ∈ V, f ∈ V*,

which is well-defined. Show that there exists a unique linear operator T* : V* → V* satisfying

f(T(x)) = ⟨T(x), f⟩ = ⟨x, T*(f)⟩ = (T*(f))(x) for any x ∈ V, f ∈ V*.

( T* is called the adjoint of T. This also holds true for infinite dimensional inner product spaces and is more general and more important than the adjoint we saw in class. ) Hint: You need to define T* explicitly. However, if you read the statement of the problem carefully, then you will find how T* has to be defined.
Solution. First of all, ⟨T(x), f⟩ = ⟨x, T*(f)⟩ is not an explicit definition of T*. It just provides an idea of what T* should be. We already saw in part a) that {f₁, f₂, ..., f_n} is a basis for V*. Hence, since T* is linear, it suffices to define T*(f_i), which is an element of V*, i.e., T*(f_i) is a linear transformation from V to F, for i = 1, 2, ..., n. Now given x ∈ V, there exists a unique representation of T(x) in the basis β, i.e.,

T(x) = c₁v₁ + c₂v₂ + ⋯ + c_n v_n.

Since f_i(T(x)) = c_i, we define

T*(f_i)(x) = c_i.

It is easy to check that T*(f_i) is a well-defined linear transformation for i = 1, 2, ..., n. Once this is done, since we already saw that

f = Σ_{i=1}^{n} f(v_i) f_i,

we can now define

T*(f) = Σ_{i=1}^{n} f(v_i) T*(f_i).

It is also easy to check that this defines a linear operator T* on V* which satisfies

⟨T(x), f⟩ = ⟨x, T*(f)⟩

for any f ∈ V* and any x ∈ V. Uniqueness is immediate: if S : V* → V* also satisfies ⟨x, S(f)⟩ = f(T(x)) for all x ∈ V and f ∈ V*, then (S(f))(x) = (T*(f))(x) for every x and f, so S = T*.
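The definition can be checked concretely in coordinates (a sketch only, assuming Python with NumPy; the identification of T* with the transpose of [T]_β acting on dual-basis coefficients follows from the definition above and is not stated in the problem):

import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))   # B = [T]_beta: T sends beta-coordinates x to B x

# Store a functional f by its dual-basis coefficients (f(v_1), ..., f(v_n)),
# so f(y) is the dot product of these coefficients with the coordinates of y.
f = rng.standard_normal(n)
x = rng.standard_normal(n)

lhs = f @ (B @ x)                 # f(T(x)) = <T(x), f>
rhs = (B.T @ f) @ x               # (T*(f))(x) = <x, T*(f)>, with T* acting by B^T
print(np.isclose(lhs, rhs))       # True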
Problem 4. (20pts) Let V be an inner product space of continuous functions with inner product

⟨f, g⟩ = ∫₀^π f(x)g(x) dx.

Let W be a subspace of V spanned by

{sin x, cos x, 1}.

Find the orthogonal projection of h(x) = x on W.
Solution. The Gram–Schmidt process gives

v₁ = √(2/π) sin x,  v₂ = √(2/π) cos x,  v₃ = w₃/‖w₃‖,

where

w₃ = 1 − ⟨1, v₁⟩v₁ − ⟨1, v₂⟩v₂ = 1 − (4/π) sin x.
And

⟨w₃, w₃⟩ = ⟨1 − (4/π) sin x, 1 − (4/π) sin x⟩ = ⟨1, 1⟩ − 2⟨1, (4/π) sin x⟩ + ⟨(4/π) sin x, (4/π) sin x⟩ = π − 16/π + 8/π = π − 8/π = (π² − 8)/π.
Hence,

β = { v₁ = √(2/π) sin x,  v₂ = √(2/π) cos x,  v₃ = √(π/(π² − 8)) (1 − (4/π) sin x) }

is an orthonormal basis for W. Then the orthogonal projection of h(x) = x on W is

⟨x, v₁⟩v₁ + ⟨x, v₂⟩v₂ + ⟨x, v₃⟩v₃ = 2 sin x − (4/π) cos x + (π/2)(1 − (4/π) sin x) = π/2 − (4/π) cos x.
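The integrals above can be verified symbolically (a sketch only, assuming Python with SymPy; not part of the exam solution):

import sympy as sp

x = sp.symbols('x')

def ip(f, g):
    # The inner product <f, g> = integral of f*g over [0, pi].
    return sp.integrate(f * g, (x, 0, sp.pi))

# Orthonormal basis of W produced by the Gram-Schmidt computation above.
v1 = sp.sqrt(2 / sp.pi) * sp.sin(x)
v2 = sp.sqrt(2 / sp.pi) * sp.cos(x)
w3 = 1 - (4 / sp.pi) * sp.sin(x)
v3 = w3 / sp.sqrt(ip(w3, w3))

# Orthogonal projection of h(x) = x onto W.
proj = ip(x, v1) * v1 + ip(x, v2) * v2 + ip(x, v3) * v3
print(sp.simplify(proj - (sp.pi / 2 - 4 * sp.cos(x) / sp.pi)))   # expect 0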
Problem 5. (10pts) Let A be an n × n matrix. Show that

dim(span({I_n, A, A², A³, ...})) ≤ n.
Solution. Simply speaking, the Cayley–Hamilton theorem says that if f(t) is the characteristic polynomial of A, then f(A) = 0. Notice that f(t) is a polynomial of degree n. Therefore, for some scalars a₀, a₁, ..., a_{n−1},

A^n + a_{n−1}A^{n−1} + ⋯ + a₁A + a₀I_n = 0,

which means that A^n is a linear combination of I_n, A, A², ..., A^{n−1}. It is now easy to show, by induction on k (multiplying the relation above by A repeatedly), that

A^k ∈ span({I_n, A, A², ..., A^{n−1}})

for k ≥ n. Hence,

dim(span({I_n, A, A², A³, ...})) ≤ n.
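A quick numerical illustration (a sketch only, assuming Python with NumPy; the random matrix is a hypothetical example): flattening the powers of A into vectors, the dimension of their span never exceeds n.

import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))

# Flatten I, A, A^2, ..., A^(2n) into row vectors and compute the rank of their span.
powers = np.array([np.linalg.matrix_power(A, k).ravel() for k in range(2 * n + 1)])
print(np.linalg.matrix_rank(powers))   # at most n; for a generic A it is exactly n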
Problem 6. (10pts) Let T be a linear operator on a finite dimensional vector space V whose characteristic polynomial splits. Let λ₁, ..., λ_k be the distinct eigenvalues of T. Show that T is diagonalizable if and only if rank(T − λ_iI) = rank((T − λ_iI)²) for 1 ≤ i ≤ k.
Solution. (⟹) If T is diagonalizable, then it is known that E_λ = K_λ, where E_λ is the eigenspace and K_λ is the generalized eigenspace corresponding to the eigenvalue λ. And

N((T − λI)^m) ⊆ K_λ

for all m. In particular,

N(T − λI) ⊆ N((T − λI)²) ⊆ K_λ = E_λ = N(T − λI),

which means that N((T − λI)²) = N(T − λI) and dim(N(T − λI)) = dim(N((T − λI)²)). Therefore, by the rank–nullity theorem,

rank(T − λI) = rank((T − λI)²).

Since λ₁, ..., λ_k are the distinct eigenvalues of T,

rank(T − λ_iI) = rank((T − λ_iI)²)

for 1 ≤ i ≤ k.
(⟸) We need to show that

dim(N(T − λ_iI)) = dim(N((T − λ_iI)^m))

for any m ≥ 1; this gives E_{λ_i} = K_{λ_i} for each i, and since the characteristic polynomial splits, V = K_{λ₁} ⊕ ⋯ ⊕ K_{λ_k} = E_{λ₁} ⊕ ⋯ ⊕ E_{λ_k}, so T is diagonalizable. Now if rank(T − λ_iI) = rank((T − λ_iI)²) for 1 ≤ i ≤ k, then by the rank–nullity theorem we notice that

dim(N(T − λ_iI)) = dim(N((T − λ_iI)²)),

and since N(T − λ_iI) ⊆ N((T − λ_iI)²), the two null spaces are equal. Suppose that

dim(N(T − λ_iI)) = dim(N((T − λ_iI)^p))

for 2 ≤ p ≤ m − 1. It is obvious that

dim(N(T − λ_iI)) ≤ dim(N((T − λ_iI)^m)).

Let x ∈ N((T − λ_iI)^m). Then (T − λ_iI)(x) ∈ N((T − λ_iI)^{m−1}) = N(T − λ_iI), which also means that

x ∈ N((T − λ_iI)²) = N(T − λ_iI).

Therefore,

dim(N(T − λ_iI)) = dim(N((T − λ_iI)^m)).
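The rank criterion is easy to test numerically (a sketch only, assuming Python with NumPy; the helper rank_test and the tolerance are illustrative choices, not part of the exam):

import numpy as np

def rank_test(A, eigenvalues, tol=1e-9):
    # True iff rank(A - lam*I) == rank((A - lam*I)^2) for every listed eigenvalue lam.
    I = np.eye(A.shape[0])
    return all(np.linalg.matrix_rank(A - lam * I, tol=tol)
               == np.linalg.matrix_rank((A - lam * I) @ (A - lam * I), tol=tol)
               for lam in eigenvalues)

# A diagonalizable example passes the test.
D = np.diag([1., 1., 2.])
print(rank_test(D, [1., 2.]))    # True

# The non-diagonalizable matrix A of Problem 1 (eigenvalues 0 and 1) fails it.
A = np.array([[1., 0., 0.], [1., 0., 0.], [1., 1., 1.]])
print(rank_test(A, [0., 1.]))    # False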