
UCLA Basic Exam Problems and Solutions Brent Woodhouse

This document covers nearly all problems on the UCLA Basic Exam from Fall 2001 to Spring 2013.
Problems are listed by category and by exam and linked below. Relevant definitions are listed at the start
of most categories below. The linear algebra section in particular starts with many standard theorems. I
cannot guarantee this material is completely accurate, but it should at least help you along the way. Good
luck!

Problems listed by category:


Analysis
Countability
Metric space topology
Topology on reals
Fixed point
Inverse and Implicit Function Theorems
Infinite sequences and series
Partial derivatives
Differentiation
Riemann integration
Taylor Series
Jacobian
Lagrange Multipliers
Miscellaneous

Linear Algebra
Recurring Problems
Other Problems

Problems listed by exam:


Fall 2001: 1 2 3 4 5 6 7 8 9 10
Winter 2002: 1 2 3 4 5 6 7 8 9 10 11
Spring 2002: 1 2 3 4 5 6 7 8 9 10 11
Fall 2002: 1 2 3 4 5 6 7 8 9 10
Spring 2003: 1 2 3 4 5 6 7 8 9 10
Fall 2003: 1 2 3 4 5 6 7 8 9 10
Spring 2004: 1 2 3 4 5 6 7 8 9 10
Fall 2004: 1 2 3 4 5 6 7 8 9 10
Spring 2005: 1 2 3 4 5 6 7 8 9 10 11 12 13
Fall 2005: 1 2 3 4 5 6 7 8 9 10
Winter 2006: 1 2 3 4 5 6 7 8 9 10
Spring 2006: 1 2 3 4 5 6 7 8 9 10
Spring 2007: 1 2 3 4 5 6 7 8 9 10 11 12
Fall 2007: 1 2 3 4 5 6 7 8 9 10 11 12
Spring 2008: 1 2 3 4 5 6 7 8 9 10 11 12
Fall 2008: 1 2 3 4 5 6 7 8 9 10 11 12
Spring 2009: 1 2 3 4 5 6 7 8 9 10 11 12
Fall 2009: 1 2 3 4 5 6 7 8 9 10 11 12
Spring 2010: 1 2 3 4 5 6 7 8 9 10 11 12
Fall 2010: 1 2 3 4 5 6 7 8 9 10 11 12
Spring 2011: 1 2 3 4 5 6 7 8 9 10 11 12
Fall 2011: 1 2 3 4 5 6 7 8 9 10 11 12
Spring 2012: 1 2 3 4 5 6 7 8 9 10 11 12
Fall 2012: 1 2 3 4 5 6 7 8 9 10 11 12
Spring 2013: 1 2 3 4 5 6 7 8 9 10 11 12


Analysis

Countability

A set $S$ is countable if there exists a one-to-one map $f : S \to \mathbb{N}$.

Fall 2001 # 4. Let $S$ be the set of all sequences $(x_1, x_2, \ldots)$ such that for all $n$, $x_n \in \{0, 1\}$.

Prove that there does not exist a one-to-one mapping from the set $\mathbb{N} = \{1, 2, \ldots\}$ onto the set $S$.

Suppose for the sake of contradiction that there exists $f : \mathbb{N} \to S$ such that $f$ is a bijection (one-to-one and onto). Define the sequence $(x_n)_{n=1}^\infty$ so that for all natural numbers $n$, $x_n = 0$ if $(f(n))_n = 1$ and $x_n = 1$ if $(f(n))_n = 0$. Then $(x_n)_{n=1}^\infty \in S$. Hence there exists some natural number $M$ such that
$$(x_n)_{n=1}^\infty = f(M).$$
But this means
$$x_M = (f(M))_M,$$
which contradicts the construction of $x_M$. Thus no such $f$ exists.
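The diagonal construction above can be made concrete. The following is a minimal sketch of my own (not part of the original solution; the function name and list representation are illustrative): given any finite list of 0-1 sequences, it produces a sequence that differs from the $n$-th listed sequence in its $n$-th entry.

    def diagonal_complement(listed_sequences):
        # listed_sequences[n] plays the role of f(n+1); flip the n-th entry
        # of the n-th sequence, so the result differs from every listed one.
        return [1 - seq[n] for n, seq in enumerate(listed_sequences)]

    # Example: the output disagrees with the n-th sequence in position n.
    f = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]]
    x = diagonal_complement(f)   # [1, 0, 1]
    assert all(x[n] != f[n][n] for n in range(len(f)))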

Fall 2003 # 1. Prove that $\mathbb{R}$ is uncountable. If you like to use the Baire category theorem, you have to prove it.

Suppose for the sake of contradiction that the real numbers are countable, so there exists a sequence $(r_n)_{n=1}^\infty$ such that $\{r_n : n \ge 1\} = \mathbb{R}$. Then we can choose a closed interval $[a_1, b_1]$ such that $r_1 \notin [a_1, b_1]$. Next, choose a subinterval $[a_2, b_2] \subseteq [a_1, b_1]$ such that $r_2 \notin [a_2, b_2]$. Repeating this procedure inductively, we select a decreasing sequence of intervals $([a_n, b_n])_{n=1}^\infty$ such that for all $n$, $r_n \notin [a_n, b_n]$.

Now the $a_n$ form an increasing sequence, the $b_n$ form a decreasing sequence, and for all $n$ we have $a_n \le b_n$. In fact, for any natural numbers $n, m$ with $n \le m$, $a_n \le a_m \le b_m$. Letting $m \to \infty$,
$$a_n \le \lim_{m \to \infty} b_m = \inf_{m \in \mathbb{N}} b_m.$$
Then taking the limit as $n \to \infty$,
$$\sup_{n \in \mathbb{N}} a_n = \lim_{n \to \infty} a_n \le \inf_{m \in \mathbb{N}} b_m.$$
In particular, there exists some $x \in [\sup_{n \in \mathbb{N}} a_n, \inf_{n \in \mathbb{N}} b_n]$. Then for all $n$ we have $x \in [a_n, b_n]$, so $x \ne r_n$ by construction of $a_n, b_n$. But since $x \in \mathbb{R} = \{r_n : n \ge 1\}$, $x = r_n$ for some $n$, a contradiction. Hence $\mathbb{R}$ is uncountable.
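The interval-shrinking step admits a direct computational sketch (my own illustration, not part of the exam solution): given the first few terms of a purported enumeration of $\mathbb{R}$, repeatedly split the current interval into thirds and keep a third that avoids the next term.

    def avoiding_interval(a, b, r):
        # Split [a, b] into three closed thirds and return one not containing r.
        thirds = [(a, a + (b - a) / 3), (a + (b - a) / 3, a + 2 * (b - a) / 3),
                  (a + 2 * (b - a) / 3, b)]
        for lo, hi in thirds:
            if not (lo <= r <= hi):
                return lo, hi

    enumeration = [0.5, 0.1, 0.8, 0.33]   # first terms of a purported list of all reals
    a, b = 0.0, 1.0
    for r in enumeration:
        a, b = avoiding_interval(a, b, r)
        assert not (a <= r <= b)
    # Any point of the final interval [a, b] differs from every listed r so far.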

Fall 2005 #1. A real number $\alpha$ is said to be algebraic if for some finite set of integers $a_0, \ldots, a_n$, not all $0$,
$$a_0 + a_1 \alpha + \cdots + a_n \alpha^n = 0.$$
Prove that the set of algebraic real numbers is countable.

We first show that a countable union of countable sets is countable.

Let $\{S_n\}_{n \in \mathbb{N}}$ be a sequence of countable sets. Define
$$S = \bigcup_{n \in \mathbb{N}} S_n.$$
For all $n \in \mathbb{N}$, let $F_n$ denote the set of all 1-1 maps from $S_n$ to $\mathbb{N}$. Since $S_n$ is countable, $F_n$ is non-empty. Using the axiom of countable choice, there exists a sequence $\{f_n\}_{n \in \mathbb{N}}$ such that $f_n \in F_n$ for all $n \in \mathbb{N}$. Let $\varphi : S \to \mathbb{N} \times \mathbb{N}$ be the mapping defined by
$$\varphi(x) = (n, f_n(x)),$$
where $n$ is the smallest natural number such that $x \in S_n$. (Clearly $\{n : x \in S_n\}$ is non-empty, so the Well-Ordering Principle ensures that such an $n$ exists.) Since each $f_n$ is 1-1, $\varphi$ must be 1-1. By the Fundamental Theorem of Arithmetic, $g(n, m) = 2^n 3^m$ is an injection from $\mathbb{N} \times \mathbb{N}$ to $\mathbb{N}$. Hence $g \circ \varphi$ is an injection from $S$ into $\mathbb{N}$, so $S$ is countable.

Clearly the set of integers is countable. Since each polynomial of degree $n$ with integer coefficients corresponds to a finite selection of integers (its coefficients), the set of such polynomials is countable for each $n$. The set $\mathbb{Z}[x]$ of all polynomials with integer coefficients is the union over all $n$ of the sets of polynomials of degree $n$. As a countable union of countable sets, $\mathbb{Z}[x]$ is thus countable.

Note that the set $A$ of algebraic real numbers is
$$A = \bigcup_{p \in \mathbb{Z}[x]} \{x \in \mathbb{R} : p(x) = 0\}.$$
Now each $p \in \mathbb{Z}[x]$ has some finite degree $n$, and so can have at most $n$ real roots. Thus the set $\{x \in \mathbb{R} : p(x) = 0\}$ is finite for each $p \in \mathbb{Z}[x]$. Hence as a countable union of finite sets, $A$ is countable, as desired.
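The injection $g(n, m) = 2^n 3^m$ used above is easy to check computationally. The following is a small sketch of my own (names are illustrative) verifying injectivity on a finite range and recovering $(n, m)$ from $g(n, m)$ by factoring out 2s and 3s.

    def g(n, m):
        # Injection from N x N into N via unique factorization.
        return 2 ** n * 3 ** m

    def g_inverse(k):
        # Recover (n, m) by stripping factors of 2 and 3.
        n = m = 0
        while k % 2 == 0:
            k //= 2
            n += 1
        while k % 3 == 0:
            k //= 3
            m += 1
        return n, m

    values = {g(n, m) for n in range(20) for m in range(20)}
    assert len(values) == 400                 # no collisions on this range
    assert g_inverse(g(7, 11)) == (7, 11)     # g is invertible on its image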

Fall 2005 #5. Prove carefully that $\mathbb{R}^2$ is not a (countable) union of sets $S_i$, $i = 1, 2, \ldots$ with each $S_i$ being a subset of some straight line $L_i$ in $\mathbb{R}^2$.

Suppose for the sake of contradiction that $\mathbb{R}^2$ is a countable union of sets $S_i$, $i = 1, 2, \ldots$ with each $S_i$ a subset of some straight line $L_i$ in $\mathbb{R}^2$. Since $\mathbb{R}$ is uncountable and there are only countably many lines $L_i$, there exists some $x \in \mathbb{R}$ such that the vertical line $L_x = \{(x, y) : y \in \mathbb{R}\}$ is not equal to any $L_i$. (Otherwise every vertical line would be one of the $L_i$, so $\{(x, 0) : x \in \mathbb{R}\}$ would be countable, implying that $\mathbb{R}$ is countable.) Now each line $L_i$ distinct from $L_x$ intersects $L_x$ in at most one point, so $L_x \cap S_i$ contains at most one point. Thus the set
$$L_x \cap \bigcup_{i \in \mathbb{N}} S_i$$
is countable. Since $\mathbb{R}^2$ is a countable union of the $S_i$ however,
$$L_x \cap \bigcup_{i \in \mathbb{N}} S_i = L_x,$$
so $L_x$ is countable. But this implies that $\mathbb{R}$ is countable, a contradiction.

Spring 2008 #7. Let $a(x)$ be a function on $\mathbb{R}$ such that

(i) $a(x) \ge 0$ for all $x$, and

(ii) There exists $M < \infty$ such that for all finite $F \subseteq \mathbb{R}$,
$$\sum_{x \in F} a(x) \le M.$$

Prove $\{x : a(x) > 0\}$ is countable.

Define
$$S_n = \Big\{x : a(x) > \frac{1}{n}\Big\}.$$
Fix $n$ and suppose for the sake of contradiction that $\#(S_n) > Mn$. Then there exists a finite subset $F$ of $S_n$ with $\#(F) > Mn$. By property (ii),
$$\sum_{x \in F} a(x) \le M.$$
However,
$$\sum_{x \in F} a(x) > \sum_{x \in F} \frac{1}{n} = \frac{\#(F)}{n} > \frac{Mn}{n} = M,$$
a contradiction. Thus $S_n$ is finite for each $n$.

We have
$$\{x : a(x) > 0\} = \bigcup_{n \in \mathbb{N}} \Big\{x : a(x) > \frac{1}{n}\Big\} = \bigcup_{n \in \mathbb{N}} S_n,$$
which is a countable union of finite sets, and thus countable. (Apparently assumption (i) is not needed.)

Fall 2011 #3. Prove that the set of real numbers can be written as the union of uncountably many pairwise disjoint subsets, each of which is uncountable.

Define the map $f : (0, 1) \times (0, 1) \to (0, 1)$ by
$$f(x, y) = 0.x_1 y_1 x_2 y_2 \ldots$$
where $x = 0.x_1 x_2 \ldots$ and $y = 0.y_1 y_2 \ldots$ are decimal expansions. Here we replace any infinite tail of 9s in $x$ or $y$ by incrementing the digit preceding the tail and replacing the 9s by 0s, then evaluate $f$. Then this map is well-defined and in fact an injection. Consider the set $S$ of all vertical lines in $(0, 1) \times (0, 1)$. There are uncountably many, and their union is $(0, 1) \times (0, 1)$. Also, each vertical line is an uncountable set of points. Since $f$ is an injection, $f(L)$ is uncountable for each line $L \in S$, and the images of distinct lines of $S$ under $f$ are disjoint. The image of $f$ need not be all of $(0, 1)$, but the leftover set $(0, 1) \setminus \bigcup_{L \in S} f(L)$ may be absorbed into one of the $f(L)$ without affecting disjointness or uncountability. Thus
$$(0, 1) = \bigcup_{L \in S} f(L) \;\cup\; \Big((0, 1) \setminus \bigcup_{L \in S} f(L)\Big)$$
can be written as a union of uncountably many pairwise disjoint subsets, each of which is uncountable.

Let $g$ be a bijection between $(0, 1)$ and $\mathbb{R}$ ($g(x) = \tan(\pi(x - 1/2))$, for instance). Composing $g$ with $f$ above, we can write the set of real numbers as the desired union.
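A quick sketch of the interleaving map on truncated decimal expansions (purely illustrative, my own code; it works with finitely many digits of numbers in $(0,1)$, while the proof uses full expansions):

    def interleave(x, y, digits=8):
        # Interleave the first `digits` decimal digits of x and y in (0, 1):
        # f(x, y) = 0.x1 y1 x2 y2 ...
        xs = f"{x:.{digits}f}"[2:]
        ys = f"{y:.{digits}f}"[2:]
        merged = "".join(a + b for a, b in zip(xs, ys))
        return float("0." + merged)

    print(interleave(0.123, 0.456))   # 0.142536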


Metric space topology

$X$ is compact if every open cover of $X$ has a finite subcover.

$X$ is complete if every Cauchy sequence of elements in $X$ converges to some element of $X$.

$X$ is connected if for every pair of non-empty open sets $A$ and $B$ with $A \cup B = X$, $A \cap B \ne \emptyset$.

$X$ is sequentially compact if every sequence of elements in $X$ has a convergent subsequence.

$X$ is totally bounded if for all $\epsilon > 0$, there exists a finite open cover of $X$ using balls of radius $\epsilon$.

$X$ is separable if there is a dense subset of $X$ that is countable. (A dense subset $S \subseteq X$ is one which satisfies $\overline{S} = X$.)

A base of open sets for $X$ is a family $\mathcal{B}$ of open subsets of $X$ such that every open subset of $X$ is the union of sets in $\mathcal{B}$.

$X$ is second-countable if there is a base of open sets of $X$ that is at most countable.

An accumulation point of a sequence $(x_n)_{n=1}^\infty$ is a point $x$ such that for each neighborhood $B$ of $x$ there are infinitely many natural numbers $i$ such that $x_i \in B$.

A homeomorphism is a bijection $f$ such that both $f$ and $f^{-1}$ are continuous.

Spring 2009 #4; Spring 2005 #13; Spring 2013 #3. Let $(X, d)$ be an arbitrary metric space.

(a) Give a definition of compactness of $X$ involving open covers.

(b) Define completeness of $X$.

(c) Define connectedness of $X$.

(d) Is the set of rational numbers $\mathbb{Q}$ (with the usual metric) connected? Justify your answer.

(e) Suppose $X$ is complete. Show that $X$ is compact in the sense of part (a) if and only if for every $r > 0$, $X$ can be covered by finitely many balls of radius $r$ ($X$ is totally bounded).

(a) $X$ is compact if every open cover of $X$ has a finite subcover.

(b) $X$ is complete if every Cauchy sequence of elements in $X$ converges to some element of $X$.

(c) $X$ is connected if for every pair of non-empty open sets $A$ and $B$ with $A \cup B = X$, $A \cap B \ne \emptyset$.

(d) No, the set of rational numbers (with the usual metric) is not connected. Let $\alpha$ be an irrational number. Consider the open sets $S = \mathbb{Q} \cap (-\infty, \alpha)$ and $T = \mathbb{Q} \cap (\alpha, \infty)$. Clearly $S$ and $T$ are non-empty, have union $\mathbb{Q}$, and $S \cap T = \emptyset$. For any $s \in S$, $B(s, \alpha - s) \subseteq S$, hence $S$ is open. Likewise, for any $t \in T$, $B(t, t - \alpha) \subseteq T$, hence $T$ is open. (Here the balls are with respect to the usual metric restricted to $\mathbb{Q} \times \mathbb{Q}$.) Thus we have exhibited two non-empty disjoint open subsets of $\mathbb{Q}$ with union $\mathbb{Q}$, so $\mathbb{Q}$ is not connected.

(e) Suppose $X$ is complete. We show that $X$ is compact if and only if $X$ is totally bounded in multiple steps:

Step 1: If $X$ is compact, then $X$ is sequentially compact.

Find a point to converge to: Let $\{y_j\}_{j=1}^\infty$ be a sequence in $X$. Suppose for the sake of contradiction that for each $x \in X$, there exists $\epsilon = \epsilon(x) > 0$ such that only finitely many terms of the sequence $\{y_j\}$ lie in $B(x, \epsilon(x))$. Note that the set of open balls $\{B(x, \epsilon(x)) : x \in X\}$ forms an open cover of $X$. Since $X$ is compact, there is a finite subcover
$$X = B(x_1, \epsilon(x_1)) \cup \cdots \cup B(x_m, \epsilon(x_m)).$$
Since $y_j$ belongs to $B(x_i, \epsilon(x_i))$ for only finitely many indices $j$, we conclude that $y_j$ belongs to $X$ for only finitely many indices $j$, contradicting that $y_i \in X$ for all $i$. Hence there exists $x \in X$ such that for each $\epsilon > 0$, $B(x, \epsilon)$ contains infinitely many terms of the sequence $\{y_j\}$.

Construct a convergent subsequence: Choose $j_1$ so that $y_{j_1} \in B(x, 1)$. Inductively choose $j_{n+1}$ so that $j_{n+1} > j_n$ and $y_{j_{n+1}} \in B(x, 1/(n+1))$. Then $\{y_{j_n}\}_{n=1}^\infty$ is a subsequence of $\{y_j\}$ that converges to $x$. Thus $X$ is sequentially compact.

Step 2: If $X$ is sequentially compact, then $X$ is totally bounded.

Select points which are spread out to form a sequence; at some point you must stop. Let $\epsilon > 0$. Let $y_1 \in X$. If $X = B(y_1, \epsilon)$, we are finished. Otherwise, let $y_2$ be any point in $X \setminus B(y_1, \epsilon)$. As long as $\bigcup_{j=1}^n B(y_j, \epsilon) \ne X$, select
$$y_{n+1} \in X \setminus \Big(\bigcup_{j=1}^n B(y_j, \epsilon)\Big).$$
Suppose for the sake of contradiction that this procedure does not terminate. Then the points $y_1, y_2, \ldots$ satisfy
$$d(y_k, y_j) \ge \epsilon$$
for all $1 \le j < k$. It follows that $\{y_j\}_{j=1}^\infty$ has no convergent subsequence, contradicting the sequential compactness of $X$. Thus the procedure does terminate, so there exists $N$ with
$$X = \bigcup_{j=1}^N B(y_j, \epsilon).$$
Hence $X$ is totally bounded.

Step 3: If $X$ is totally bounded, then $X$ is sequentially compact.

Construct a sequence of subsequences using the pigeonhole principle, then diagonalize. Let $\{x_j\}_{j=1}^\infty$ be a sequence in $X$. Rewrite the sequence in the form $\{x_{1j}\}_{j=1}^\infty$. By induction, we construct sequences $\{x_{kj}\}_{j=1}^\infty$, $k \ge 2$, with the properties

(i) $\{x_{kj}\}_{j=1}^\infty$ is a subsequence of $\{x_{k-1,j}\}_{j=1}^\infty$, $k \ge 2$.

(ii) $\{x_{kj}\}_{j=1}^\infty$ is contained in a ball of radius $1/k$, $k \ge 2$.

Suppose $k \ge 2$ and we already have the sequences $\{x_{ij}\}_{j=1}^\infty$ for $i < k$. Let $B_1, \ldots, B_n$ be a finite number of open balls of radius $1/k$ that cover $X$. Since there are infinitely many indices $j$ and only finitely many balls, there must exist at least one ball, say $B_m$, such that $x_{k-1,j} \in B_m$ for infinitely many $j \ge 1$. Now let $x_{k1}$ be the first of the $x_{k-1,j}$'s that belongs to $B_m$, let $x_{k2}$ be the second, etc. Then $\{x_{kj}\}_{j=1}^\infty$ has properties (i) and (ii).

Now set $y_n = x_{nn}$, so that $\{y_n\}_{n=1}^\infty$ is a subsequence of $\{x_j\}_{j=1}^\infty$. Also note that $\{y_n\}_{n=k}^\infty$ is a subsequence of $\{x_{kj}\}_{j=1}^\infty$ for each $k$. Thus by construction of the $x_{kj}$, for any $n, m \ge k$,
$$d(y_n, y_m) < 2/k.$$
Thus $\{y_n\}_{n=1}^\infty$ is a Cauchy sequence, so $\{y_n\}_{n=1}^\infty$ converges (as $X$ is complete), and $X$ is sequentially compact.

Step 4: If $X$ is totally bounded, then $X$ is separable.

Direct approach. Let $n$ be a positive integer. Then there exist $x_{n1}, \ldots, x_{n m_n}$ such that the open balls with centers at the $x_{nj}$ and radii $1/n$ cover $X$. The family $\{x_{nj} : 1 \le j \le m_n, 1 \le n < \infty\}$ is then a countable subset of $X$. For each $x \in X$ and each integer $n$, there is an $x_{nj}$ such that $d(x_{nj}, x) < 1/n$. Consequently the $x_{nj}$ are dense in $X$.

Step 5: If $X$ is separable, then $X$ is second-countable.

Direct approach, similar to Step 4. Let $\{x_j\}_{j=1}^\infty$ be a dense sequence in $X$. Consider the family of open sets
$$\mathcal{B} = \{B(x_j, 1/n) : j \ge 1, n \ge 1\}.$$
Let $U$ be an open set of $X$ and let $x \in U$. For some $n \ge 1$, we have $B(x, 2/n) \subseteq U$. Choose $j$ so that $d(x_j, x) < 1/n$. Then $x \in B(x_j, 1/n)$, and the triangle inequality shows that $B(x_j, 1/n) \subseteq B(x, 2/n) \subseteq U$. Thus each $x \in U$ has some associated $V_x \in \mathcal{B}$ such that $x \in V_x$ and $V_x \subseteq U$. It follows that
$$U = \bigcup_{x \in U} V_x$$
represents $U$ as a union of sets in $\mathcal{B}$. Thus $\mathcal{B}$ is a base of open sets. Since $\mathcal{B}$ is countable, $X$ is second-countable.

Step 6: If $X$ is second-countable, then every open cover of $X$ has a countable subcover (Lindelöf's Theorem).

Pick elements of the base which are inside sets from the open cover; they cover $X$. Let $\{U_\alpha\}_{\alpha \in A}$ be an open cover of $X$, where $A$ is some index set. Let $\mathcal{B}$ be a countable base of open sets. Let $\mathcal{C}$ be the subset of $\mathcal{B}$ consisting of those sets $V \in \mathcal{B}$ such that $V \subseteq U_\alpha$ for some $\alpha$. We claim that $\mathcal{C}$ is a cover of $X$. Indeed, if $x \in X$, then there is some index $\alpha$ such that $x \in U_\alpha$. Since $\mathcal{B}$ is a base and $U_\alpha$ is open, there exists $V \in \mathcal{B}$ such that $x \in V$ and $V \subseteq U_\alpha$. In particular, $V \in \mathcal{C}$, so $\mathcal{C}$ covers $X$.

For each $V \in \mathcal{C}$, select one index $\alpha = \alpha(V)$ such that $V \subseteq U_{\alpha(V)}$. Then the sets $\{U_{\alpha(V)} : V \in \mathcal{C}\}$ cover $X$. Since $\mathcal{B}$ is countable, so is $\mathcal{C}$, so the $U_{\alpha(V)}$'s form a countable subcover of $X$.

Step 7: If $X$ is sequentially compact and every open cover of $X$ has a countable subcover, then every open cover of $X$ has a finite subcover.

Argue by contradiction: make a sequence, extract a convergent subsequence, and use that a finite union of the $U_j$ has closed complement. Let $\{U_n\}_{n=1}^\infty$ be a sequence of open subsets of $X$ that cover $X$. Suppose for the sake of contradiction that for all positive integers $m$,
$$X \ne U_1 \cup \cdots \cup U_m.$$
For each $m$, let $x_m$ be any point in $X \setminus \big(\bigcup_{j=1}^m U_j\big)$. Since $X$ is sequentially compact, the sequence $(x_m)_{m=1}^\infty$ has a subsequence which converges to some $x \in X$. Now $x_j \in X \setminus \big(\bigcup_{i=1}^m U_i\big)$ for all $j \ge m$. Thus since $X \setminus \big(\bigcup_{i=1}^m U_i\big)$ is closed, $x \in X \setminus \big(\bigcup_{i=1}^m U_i\big)$. But this is true for all $m$, hence
$$x \in X \setminus \Big(\bigcup_{j=1}^\infty U_j\Big) = \emptyset,$$
a contradiction. Thus $\{U_n\}_{n=1}^\infty$ has a finite subcover.

Note: Tao's Analysis II, Ch. 12 also has an argument that sequential compactness implies compactness.

Fall 2004 #4. Suppose that $(M, \rho)$ is a metric space, $x, y \in M$, and that $\{x_n\}$ is a sequence in this metric space such that $x_n \to x$. Prove that $\rho(x_n, y) \to \rho(x, y)$.

Let $\epsilon > 0$. Since $x_n \to x$, there exists $N$ such that for all $n \ge N$, $\rho(x_n, x) \le \epsilon$. By the reverse triangle inequality, for any $n \ge N$,
$$|\rho(x_n, y) - \rho(x, y)| \le \rho(x_n, x) \le \epsilon.$$
Thus $\rho(x_n, y) \to \rho(x, y)$.

Fall 2002 #1. Let $K$ be a compact subset and $F$ be a closed subset in the metric space $X$. Suppose $K \cap F = \emptyset$. Prove that
$$0 < \inf\{d(x, y) : x \in K, y \in F\}.$$

Suppose for the sake of contradiction that $\inf\{d(x, y) : x \in K, y \in F\} = 0$. Then for each $n$, there exist $x_n \in K$ and $y_n \in F$ such that $d(x_n, y_n) < 1/n$. Since $K$ is compact, $K$ is sequentially compact, so there exists a subsequence $\{x_{n_j}\}_{j=1}^\infty$ that approaches some $x \in K$. It follows that $y_{n_j} \to x$ as $j \to \infty$. Since $F$ is closed, $x \in F$. Hence $x \in K \cap F$, a contradiction. Thus
$$0 < \inf\{d(x, y) : x \in K, y \in F\}.$$

Fall 2008 #4. (a) Suppose that $K$ and $F$ are subsets of $\mathbb{R}^2$ with $K$ closed and bounded and $F$ closed. Prove that if $K \cap F = \emptyset$, then $d(K, F) > 0$. Recall that
$$d(K, F) = \inf\{d(x, y) : x \in K, y \in F\}.$$

Fall 2010 #1. Also show the converse, that if $K \subseteq X$ is compact and
$$\inf_{x \in K,\, y \in F} d(x, y) > 0,$$
then $K \cap F = \emptyset$.

(b) Is (a) true if $K$ is just closed? Prove your assertion.

(a) This is the above exercise with $X = \mathbb{R}^2$, since $\mathbb{R}^2$ is complete with respect to the standard metric and closed bounded subsets of $\mathbb{R}^2$ are compact.

For the converse, suppose for the sake of contradiction that $K \cap F \ne \emptyset$. Then there exists $x \in K \cap F$, and $0 = d(x, x) \in \{d(x, y) : x \in K, y \in F\}$, so $d(K, F) = 0$, a contradiction. Thus $K \cap F = \emptyset$.

(b) No, part (a) is no longer true if $K$ is just closed. Let
$$K = \{(n, 0) : n \in \mathbb{N}\} \quad \text{and} \quad F = \Big\{\Big(n + \frac{1}{2n}, 0\Big) : n \in \mathbb{N} \setminus \{0\}\Big\}.$$
Note that $K$ and $F$ contain all their limit points, so they are closed, and $K \cap F = \emptyset$. However, there are points in $F$ and $K$ at distance $\frac{1}{2n}$ for each $n \ge 1$, so $d(K, F) = 0$.

Spring 2002 #3. Suppose that $X$ is a compact metric space (in the covering sense of the word compact). Prove that every sequence $\{x_n : x_n \in X, n = 1, 2, 3, \ldots\}$ has a convergent subsequence. [Prove this directly. Do not just quote a theorem.]

Let $\{x_n\}_{n=1}^\infty$ be a sequence in $X$. Suppose for the sake of contradiction that for each $x \in X$ there exists $\epsilon(x) > 0$ such that $\{i \in \mathbb{N} \setminus \{0\} : x_i \in B(x, \epsilon(x))\}$ is finite. Clearly $\bigcup_{x \in X} B(x, \epsilon(x))$ covers $X$, so since $X$ is compact, there exists a finite subcover
$$X = B(y_1, \epsilon(y_1)) \cup \cdots \cup B(y_N, \epsilon(y_N)).$$
Now
$$\{i \in \mathbb{N} \setminus \{0\} : x_i \in X\} = \bigcup_{j=1}^N \{i \in \mathbb{N} \setminus \{0\} : x_i \in B(y_j, \epsilon(y_j))\}.$$
Clearly the left hand side is an infinite set, but by assumption each set in the union on the right hand side is finite, so the right hand side is finite, a contradiction. Hence there exists some $x \in X$ such that for every $\epsilon > 0$, the set $\{i \in \mathbb{N} \setminus \{0\} : x_i \in B(x, \epsilon)\}$ is infinite.

Select $n_1$ with $x_{n_1} \in B(x, 1)$. Then for each $k \ge 1$, inductively select $n_{k+1} > n_k$ with $x_{n_{k+1}} \in B(x, 1/(k+1))$. It follows that $(x_{n_k})_{k=1}^\infty$ converges to $x$, hence $\{x_n\}_{n=1}^\infty$ has a convergent subsequence.

Spring 2005 #12. Let $(X, d)$ be a metric space. Prove that the following are equivalent:

(a) There is a countable dense set.

(b) There is a countable basis for the topology.

Recall that a collection of open sets $\mathcal{U}$ is called a basis if every open set can be written as a union of elements of $\mathcal{U}$.

Suppose (a) holds and let $\{x_n\}_{n=1}^\infty$ be a countable dense set in $X$. We claim that
$$\mathcal{B} = \{B(x_n, 1/m) : n \ge 1, m \ge 1\}$$
is a countable basis for the topology. To see this, let $U$ be an open set in $X$ and let $x \in U$. For some $m \ge 1$, we have $B(x, 2/m) \subseteq U$. Choose $n$ so that $d(x_n, x) < 1/m$. Then $x \in B(x_n, 1/m)$, and the triangle inequality implies $B(x_n, 1/m) \subseteq B(x, 2/m) \subseteq U$. Thus each $x \in U$ has some associated $V_x \in \mathcal{B}$ such that $x \in V_x$ and $V_x \subseteq U$. It follows that
$$U = \bigcup_{x \in U} V_x$$
represents $U$ as a union of sets in $\mathcal{B}$. Thus $\mathcal{B}$ is a countable basis for the topology.

Conversely, suppose $\{B_n\}_{n=1}^\infty$ is a countable basis for the topology. Choose $x_n \in B_n$ for each $n \ge 1$ with $B_n \ne \emptyset$. If $U$ is any non-empty open set, then $U$ is a union of basis elements, so $B_n \subseteq U$ for some non-empty $B_n$, and hence $x_n \in U$. Thus $\{x_n\}$ is dense in $X$, so there is a countable dense set in $X$.

Spring 2005 #6. Let $X$ be the set of all infinite sequences $\{\xi_n\}_{n=1}^\infty$ of 1's and 0's endowed with the metric
$$\mathrm{dist}(\{\xi_n\}_{n=1}^\infty, \{\xi_n'\}_{n=1}^\infty) = \sum_{n=1}^\infty \frac{1}{2^n} |\xi_n - \xi_n'|.$$
Give a direct proof that every infinite subset of $X$ has an accumulation point.

An accumulation point of a set $S_0$ is a point $x$ such that each neighborhood of $x$ contains infinitely many elements of $S_0$. Let $S_0$ be an infinite subset of $X$. Either infinitely many sequences in $S_0$ have first digit 0 or infinitely many have first digit 1. Let $x_1$ be 0 if infinitely many sequences in $S_0$ have first digit 0, and otherwise let $x_1 = 1$; in either case infinitely many sequences in $S_0$ have first digit $x_1$. Now for each $n \ge 1$, inductively let $S_n$ be the set of sequences in $S_{n-1}$ whose $n$-th digit is $x_n$; by construction $S_n$ is infinite. Define $x_{n+1}$ to be 0 if infinitely many sequences in $S_n$ have $(n+1)$-st digit 0, and $x_{n+1} = 1$ otherwise.

Thus we form a sequence $(x_n)_{n=1}^\infty \in X$ such that for each $N \ge 1$, every $\{\xi_n\}_{n=1}^\infty \in S_N$ (and there are infinitely many) satisfies $\xi_n = x_n$ for all $n \le N$. Hence for such $\{\xi_n\}$,
$$\mathrm{dist}(\{\xi_n\}_{n=1}^\infty, \{x_n\}_{n=1}^\infty) = \sum_{n=1}^\infty \frac{1}{2^n}|\xi_n - x_n| = \sum_{n=N+1}^\infty \frac{1}{2^n}|\xi_n - x_n| \le \sum_{n=N+1}^\infty \frac{1}{2^n} = \frac{1}{2^N}.$$
Since $N$ is arbitrary, every ball around $\{x_n\}_{n=1}^\infty$ contains infinitely many elements of $S_0$, so $\{x_n\}_{n=1}^\infty$ is an accumulation point of $S_0$.

Spring 2005 #7. Let $X, Y$ be two topological spaces. We say that a continuous function $f : X \to Y$ is proper if $f^{-1}(K)$ is compact for any compact set $K \subseteq Y$.

(a) Give an example of a function that is proper but not a homeomorphism.

(b) Give an example of a function that is continuous but not proper.

(c) Suppose $f : \mathbb{R} \to \mathbb{R}$ is $C^1$ (that is, has a continuous derivative) and for all $x \in \mathbb{R}$,
$$|f'(x)| \ge 1.$$
Show that $f$ is proper.

(a) Pick any proper continuous function which is not a bijection. For example, $f : \mathbb{R} \to \mathbb{R}$ given by $f(x) = x^2$ is continuous, and for a compact $K \subseteq \mathbb{R}$ the preimage $f^{-1}(K)$ is closed and bounded, hence compact, so $f$ is proper. But $f$ is clearly not a bijection, so $f$ is not a homeomorphism.

(b) Let $X$ be a non-compact metric space ($\mathbb{R}$ for example) and let $Y = \{0\}$. Then the constant function $f : X \to Y$ is continuous, but $f^{-1}(\{0\}) = X$ is not compact, and $\{0\}$ is compact. Hence $f$ is not proper.

(c) Let $K \subseteq \mathbb{R}$ be a compact set. Since $f'$ is continuous and $|f'(x)| \ge 1$ for all $x$, the intermediate value theorem shows $f'$ has constant sign, so $f$ is strictly monotonic; in particular $f$ maps open sets to open sets, and since $|f(x) - f(0)| \ge |x|$ by the mean value theorem, $f$ is onto $\mathbb{R}$. Let $\{U_\alpha\}_{\alpha \in I}$ be an open cover of $f^{-1}(K)$. It follows that $\{f(U_\alpha)\}_{\alpha \in I}$ is an open cover of $f(f^{-1}(K)) = K$. Since $K$ is compact, there exists a finite subcover $\{f(U_n)\}_{n=1}^N$ of $K$.

Let $x \in f^{-1}(K)$. Then since $\{f(U_n)\}_{n=1}^N$ covers $K$, $f(x) \in f(U_j)$ for some $1 \le j \le N$. Hence there exists $y \in U_j$ such that $f(x) = f(y)$. If $x \ne y$, by the mean value theorem (valid since $f$ is $C^1$), there exists $c$ between $x$ and $y$ such that
$$f(x) - f(y) = f'(c)(x - y).$$
Thus by the given property of $f$,
$$0 = |f(x) - f(y)| = |f'(c)||x - y| \ge |x - y|.$$
Thus $x = y$, so $x \in U_j$. Thus $\{U_n\}_{n=1}^N$ is a finite subcover of $f^{-1}(K)$, so $f^{-1}(K)$ is compact, and $f$ is proper.

Spring 2008 #6. Let $Y$ be a complete countable metric space. Prove there is $y \in Y$ such that $\{y\}$ is open.

Suppose for the sake of contradiction that $\{y\}$ has empty interior for each $y \in Y$. Then each $\{y\}$ is closed and nowhere dense, and $Y = \bigcup_{y \in Y} \{y\}$ is a countable union of closed nowhere-dense sets since $Y$ is countable. But this contradicts the Baire Category Theorem, since $Y$ is a complete metric space. Hence there exists some $y \in Y$ such that $\{y\}$ has non-empty interior. It follows that $y$ is an interior point of $\{y\}$, hence $\{y\}$ is open.

Spring 2010 #8. Let $(X, d)$ be a complete metric space and let $K$ be a closed subset of $X$ such that for any $\epsilon > 0$, $K$ can be covered by a finite number of sets $B_\epsilon(x)$, where
$$B_\epsilon(x) = \{y \in X : d(x, y) < \epsilon\}.$$
Prove that $K$ is compact.

Follow the proof in Spring 2009 #4: $K$ is complete (as a closed subset of a complete space) and totally bounded, hence compact.

Fall 2012 #3; Fall 2011 #6; Spring 2008 #4; Winter 2006 #4. Let $\{f_n(x)\}$ be a sequence of non-negative continuous functions on a compact metric space $X$. Assume $f_n(x) \ge f_{n+1}(x)$ for all $n$ and $x$, so that $\lim_{n \to \infty} f_n(x) = f(x)$ exists for every $x \in X$. Prove $f$ is continuous if and only if $f_n$ converges to $f$ uniformly on $X$.

The forward direction is called Dini's Theorem. Suppose $f$ is continuous and let $\epsilon > 0$. For each $n$, let $g_n = f_n - f$, and define $E_n := \{x \in X : g_n(x) < \epsilon\}$. Each $g_n$ is continuous, so each $E_n$ is open. Since $\{f_n\}$ is monotonically decreasing, $\{g_n\}$ is monotonically decreasing, so $E_n \subseteq E_{n+1}$ for all $n \ge 1$. Since $f_n$ converges pointwise to $f$, it follows that the collection $\{E_n\}$ is an open cover of $X$. Since $X$ is compact, there exist $n_1 < n_2 < \cdots < n_K$ with
$$X = E_{n_1} \cup E_{n_2} \cup \cdots \cup E_{n_K} = E_{n_K}.$$
Thus for any $n \ge n_K$ and $x \in X$, $x \in E_{n_K}$, so
$$|f_n(x) - f(x)| = f_n(x) - f(x) \le f_{n_K}(x) - f(x) = g_{n_K}(x) < \epsilon.$$
Thus $f_n$ converges to $f$ uniformly on $X$.

Conversely, suppose $f_n$ converges to $f$ uniformly on $X$. Let $\epsilon > 0$. Select $N$ such that for all $n \ge N$ and all $x \in X$,
$$|f_n(x) - f(x)| \le \epsilon/3.$$
Since $f_N$ is continuous on the compact space $X$, there exists $\delta > 0$ such that if $x, y \in X$ with $d(x, y) \le \delta$, then
$$|f_N(x) - f_N(y)| \le \epsilon/3.$$
It follows that for any $x, y \in X$ with $d(x, y) \le \delta$,
$$|f(x) - f(y)| \le |f(x) - f_N(x)| + |f_N(x) - f_N(y)| + |f_N(y) - f(y)| \le \epsilon/3 + \epsilon/3 + \epsilon/3 = \epsilon.$$
Thus $f$ is continuous.
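As a concrete check (my own illustration, not part of the solution), take $f_n(x) = x^n$ on the compact interval $[0, 0.9]$: the sequence is non-negative, decreasing in $n$, and converges pointwise to the continuous function $f \equiv 0$, so Dini's theorem predicts uniform convergence, visible in the shrinking suprema below.

    grid = [i / 1000 * 0.9 for i in range(1001)]     # sample points of [0, 0.9]
    for n in [1, 5, 10, 20, 40]:
        sup_gap = max(x ** n for x in grid)          # sup |f_n - f| on the grid, f = 0
        print(n, sup_gap)                            # decreases to 0, as Dini predicts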

Fall 2012 #4. A subset $K$ of a metric space $(X, d)$ is called nowhere dense if $K$ has empty interior (i.e., if $U \subseteq K$, $U$ open in $X$ imply $U = \emptyset$). Prove the Baire theorem that if $(X, d)$ is a complete metric space, then $X$ is not a countable union of closed nowhere-dense sets. Hint: Assume $X = \bigcup_n K_n$ where each $K_n$ is closed and nowhere dense. Show there is $x_1 \in X$ and $0 < \epsilon_1 < 1/2$ such that $B_1 = B(x_1, \epsilon_1) = \{y \in X : d(y, x_1) < \epsilon_1\}$ satisfies $B_1 \cap K_1 = \emptyset$, and there is $x_2 \in X$ and $0 < \epsilon_2 < \epsilon_1/2$ such that $B_2 = B(x_2, \epsilon_2)$ satisfies $B_2 \subseteq B_1$ and $B_2 \cap K_2 = \emptyset$. Then continue by induction to find a sequence $\{x_n\}$ in $X$ that converges to $x \in X \setminus \bigcup_{n=1}^\infty K_n$.

Suppose for the sake of contradiction that $X = \bigcup_{n \ge 1} K_n$ with each $K_n$ closed and nowhere dense. Choose some $x_1 \in X \setminus K_1$. Since $K_1$ is closed, $X \setminus K_1$ is open, so there exists $0 < \epsilon_1 < 1/2$ such that $B_1 := B(x_1, \epsilon_1) \subseteq X \setminus K_1$; in particular $B_1 \cap K_1 = \emptyset$. Suppose inductively that $n \ge 1$ and we have selected $B_n = B(x_n, \epsilon_n)$ with $B_n \cap K_n = \emptyset$. Select $x_{n+1} \in B_n \setminus K_{n+1}$. (Since $K_{n+1}$ is nowhere dense, it cannot contain $B_n$, so such an $x_{n+1}$ exists.) Now $K_{n+1}$ is closed, so $B_n \setminus K_{n+1}$ is open, hence there exists $0 < \epsilon_{n+1} < \epsilon_n/2$ such that $B(x_{n+1}, 2\epsilon_{n+1}) \subseteq B_n \setminus K_{n+1}$. Setting $B_{n+1} = B(x_{n+1}, \epsilon_{n+1})$, the closure $\overline{B_{n+1}}$ is contained in $B_n$ and is disjoint from $K_{n+1}$, completing the induction. (Choosing countably many $x_n$ and $\epsilon_n$ in this way uses the axiom of countable choice.)

Since $\epsilon_n \le \epsilon_1/2^{n-1}$ and $x_m \in B_n$ for all $m \ge n$, the sequence $(x_n)_{n=1}^\infty$ is a Cauchy sequence. Since $(X, d)$ is complete, $(x_n)_{n=1}^\infty$ converges to some $x \in X$. Because $x_m \in B_{n+1}$ for all $m \ge n+1$, it follows that $x \in \overline{B_{n+1}} \subseteq B_n$ for each $n \ge 1$. Since $\overline{B_{n+1}} \cap K_{n+1} = \emptyset$ and $B_1 \cap K_1 = \emptyset$, we must have $x \notin K_n$ for each $n \ge 1$, hence
$$x \in X \setminus \bigcup_{n=1}^\infty K_n = \emptyset,$$
a contradiction. Hence $X$ is not a countable union of closed nowhere-dense sets.

Winter 2002 #1; Spring 2012 #1. Let $\Omega$ denote the set of all closed subsets of $[0, 1]$ and let $\rho : \Omega \times \Omega \to [0, 1]$ be defined by
$$\rho(A, B) := \max\Big\{\sup_{x \in A} \inf_{y \in B} |x - y|, \; \sup_{y \in B} \inf_{x \in A} |x - y|\Big\}.$$
Show that $(\Omega, \rho)$ is a metric space.

Clearly $\rho$ is non-negative and symmetric.

Suppose $A$ and $B$ are closed subsets of $[0, 1]$ with $\rho(A, B) = 0$. Then
$$\sup_{x \in A} \inf_{y \in B} |x - y| = 0,$$
which implies that for any $x \in A$,
$$\inf_{y \in B} |x - y| = 0.$$
Thus $x$ is a point of $B$ or a limit point of $B$, and since $B$ is closed, $x \in B$. Hence $A \subseteq B$. Likewise, $\rho(A, B) = 0$ implies $\sup_{y \in B} \inf_{x \in A} |x - y| = 0$; reasoning as above implies $B \subseteq A$. Thus $A = B$.

Finally, we verify that $\rho$ satisfies the triangle inequality on $\Omega$. Let $A, B, C$ be closed subsets of $[0, 1]$. For any $a \in A$, $b \in B$, $c \in C$,
$$|a - b| \le |a - c| + |c - b|.$$
Taking the infimum of both sides over all $b \in B$,
$$\inf_{b \in B} |a - b| \le |a - c| + \inf_{b \in B} |c - b|.$$
It follows that
$$\inf_{b \in B} |a - b| \le |a - c| + \sup_{c \in C} \inf_{b \in B} |c - b|.$$
Then taking the infimum over all $c \in C$,
$$\inf_{b \in B} |a - b| \le \inf_{c \in C} |a - c| + \sup_{c \in C} \inf_{b \in B} |c - b|.$$
Finally, taking the supremum over all $a \in A$,
$$\sup_{a \in A} \inf_{b \in B} |a - b| \le \sup_{a \in A} \inf_{c \in C} |a - c| + \sup_{c \in C} \inf_{b \in B} |c - b|.$$
Thus
$$\sup_{a \in A} \inf_{b \in B} |a - b| \le \rho(A, C) + \rho(C, B).$$
By symmetry of $A$ and $B$,
$$\sup_{b \in B} \inf_{a \in A} |a - b| \le \rho(A, C) + \rho(C, B).$$
Thus
$$\rho(A, B) \le \rho(A, C) + \rho(C, B).$$
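For finite closed sets the two suprema in $\rho$ are maxima, so $\rho$ is easy to compute. The sketch below (my own illustration, not part of the solution) evaluates $\rho$ on finite subsets of $[0, 1]$ and spot-checks the metric axioms.

    def rho(A, B):
        # Hausdorff distance between finite non-empty subsets of [0, 1].
        d_ab = max(min(abs(x - y) for y in B) for x in A)
        d_ba = max(min(abs(x - y) for x in A) for y in B)
        return max(d_ab, d_ba)

    A, B, C = [0.0, 0.5], [0.1, 0.9], [0.4, 0.6, 1.0]
    assert rho(A, A) == 0.0
    assert abs(rho(A, B) - rho(B, A)) < 1e-12
    assert rho(A, B) <= rho(A, C) + rho(C, B) + 1e-12   # triangle inequality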


Topology on reals

A family of functions $\mathcal{F}$ is equicontinuous if for every $\epsilon > 0$, there exists a $\delta > 0$ such that if $x_1, x_2 \in X$ with $d(x_1, x_2) < \delta$, then for any $f \in \mathcal{F}$, $d(f(x_1), f(x_2)) < \epsilon$.

Winter 2006 #6. Let $-\infty < a < b < \infty$. Prove that a continuous function $f : [a, b] \to \mathbb{R}$ attains all values in $[f(a), f(b)]$.

Let $y \in [f(a), f(b)]$. If $y = f(a)$ or $y = f(b)$, we are done. Otherwise, $f(a) < y < f(b)$. Define
$$E := \{x \in [a, b] : f(x) < y\}.$$
Clearly $E$ is a subset of $[a, b]$ and is hence bounded. Also, $a \in E$, so $E$ is non-empty. By the least upper bound principle,
$$c := \sup(E)$$
is finite. Clearly $c \in [a, b]$.

By definition of the supremum, for each $n$ there exists $x_n \in E$ with
$$c - \frac{1}{n} \le x_n \le c.$$
Letting $n \to \infty$, it follows that $\lim_{n \to \infty} x_n = c$. Since $f$ is continuous at $c$, this implies
$$\lim_{n \to \infty} f(x_n) = f(c).$$
But since $x_n \in E$, $f(x_n) < y$ for every $n$. This implies $f(c) \le y$.

Since $f(c) \le y < f(b)$, $c < b$. Choose $N$ such that $c + \frac{1}{n} \le b$ for all $n \ge N$. Then $c + \frac{1}{n} \notin E$ for all $n \ge N$, so
$$f\Big(c + \frac{1}{n}\Big) \ge y$$
for all $n \ge N$. Taking the limit as $n \to \infty$ and using the continuity of $f$,
$$f(c) \ge y.$$
Thus $f(c) = y$, hence $f$ attains all values in $[f(a), f(b)]$.

Fall 2004 #2. State and prove Rolle's Theorem. (You can use without proof theorems about the maxima and minima of continuous or differentiable functions.)

Rolle's Theorem: Let $a < b$, let $f$ be a continuous function on $[a, b]$ which is differentiable on $(a, b)$, and suppose $f(a) = f(b)$. Then there exists $c \in (a, b)$ such that $f'(c) = 0$.

Proof: Since $f$ is continuous on the compact set $[a, b]$, it attains its maximum and minimum on $[a, b]$. If both the maximum and minimum occur at the endpoints $a$ and $b$, then $f(a) = f(b)$ implies that $f$ is constant, so taking $c = (a + b)/2 \in (a, b)$, $f'(c) = 0$. Otherwise, there exists $c \in (a, b)$ such that $f(c)$ is either the maximum or minimum of $f$ on $[a, b]$.

Suppose that $f(c)$ is the maximum of $f$ on $[a, b]$ (the other case is similar). For every $h > 0$,
$$\frac{f(c + h) - f(c)}{h} \le 0,$$
thus letting $h \to 0$ from the right,
$$f'(c+) \le 0.$$
Likewise, for every $h < 0$,
$$\frac{f(c + h) - f(c)}{h} \ge 0,$$
thus letting $h \to 0$ from the left,
$$f'(c-) \ge 0.$$
Since $f$ is differentiable at $c$, $f'(c) = f'(c+) = f'(c-)$, so $f'(c) = 0$.

Spring 2011 #7. Prove that there is a real number $x$ such that
$$x^5 - 3x + 1 = 0.$$

Let $f(x) = x^5 - 3x + 1$. Clearly $f$ is continuous. Also, $f(-2) = -25$ and $f(0) = 1$. Thus by the intermediate value theorem, there exists $x \in [-2, 0]$ such that $f(x) = 0$.
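The intermediate value argument is constructive; a short bisection sketch (illustrative code of my own) locates such a root numerically on $[-2, 0]$:

    def f(x):
        return x ** 5 - 3 * x + 1

    a, b = -2.0, 0.0          # f(a) < 0 < f(b), as in the proof
    for _ in range(60):       # halve the bracketing interval 60 times
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    print(a, f(a))            # a root near x = -1.389, f(a) close to 0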

Fall 2001 #1. Let $K$ be a compact set of real numbers and let $f(x)$ be a continuous real-valued function on $K$. Prove there exists $x_0 \in K$ such that $f(x) \le f(x_0)$ for all $x \in K$.

Steps: Show $f$ is bounded, so the supremum of $f$ on $K$ exists. Find a sequence along which $f$ converges to the supremum. Use sequential compactness to get a subsequence converging to some $x_0 \in K$. Use continuity.

Because $K$ is compact, the Heine-Borel Theorem implies it is closed and bounded. By the Bolzano-Weierstrass Theorem, $K$ is also sequentially compact.

Suppose for the sake of contradiction that $f$ is unbounded. Then for each natural number $n$, there exists $x_n \in K$ such that $f(x_n) > n$. Since $K$ is sequentially compact, there exists a subsequence $(x_{n_k})_{k=1}^\infty$ converging to some $x \in \mathbb{R}$. Since $K$ is closed, $x \in K$. Since $f$ is continuous, $f(x_{n_k}) \to f(x)$. But $f(x_{n_k}) > n_k$ is unbounded as $k \to \infty$, a contradiction. Hence $f$ is bounded.

Thus by the least upper bound principle, $M := \sup(f(K))$ exists. Select a sequence $(x_n)_{n=1}^\infty$ in $K$ such that
$$M - \frac{1}{n} \le f(x_n) \le M$$
for each $n$. Thus $(f(x_n))_{n=1}^\infty$ converges to $M$. Since $K$ is sequentially compact, there exists a subsequence $(x_{n_k})_{k=1}^\infty$ which converges to some $x_0 \in \mathbb{R}$. Since $K$ is closed, $x_0 \in K$. Now $(f(x_{n_k}))_{k=1}^\infty$ must converge to $M$. By continuity of $f$, this implies $f(x_0) = M$. Thus $f$ attains its maximum on $K$.

Fall 2002 #2. Show why the Least Upper Bound Property (every set bounded above has a least upper bound) implies the Cauchy Completeness Property (every Cauchy sequence has a limit) of the real numbers.

Let $(x_n)_{n=1}^\infty$ be a Cauchy sequence of real numbers. We first show that $(x_n)_{n=1}^\infty$ is bounded. Fix $\epsilon > 0$ and let $N$ be such that $|x_n - x_m| < \epsilon$ for $n, m \ge N$. Let $R = \max(|x_N - x_1|, \ldots, |x_N - x_{N-1}|, \epsilon)$. Then the entire sequence $(x_n)_{n=1}^\infty$ is contained in $B(x_N, 2R)$. Thus $(x_n)_{n=1}^\infty$ is bounded.

By the least upper bound property, we can define
$$z_n := \sup\{x_k : k \ge n\}$$
for each $n \ge 1$. Clearly $(z_n)_{n=1}^\infty$ is decreasing and bounded since $(x_n)_{n=1}^\infty$ is bounded. The least upper bound property implies the greatest lower bound property, thus we can define
$$x = \inf\{z_n : n \ge 1\} = \lim_{n \to \infty} z_n.$$
We show that $(x_n)_{n=1}^\infty \to x$. First, we exhibit a subsequence of $(x_n)_{n=1}^\infty$ which converges to $x$. Let $j$ be a positive integer. Since $x$ is the limit of the $z_n$, there exists $N_j$ such that for all $n \ge N_j$,
$$|z_n - x| \le \frac{1}{2j}.$$
Since $z_{N_j} = \sup\{x_k : k \ge N_j\}$, there exists $n_j \ge N_j$ such that
$$|z_{N_j} - x_{n_j}| \le \frac{1}{2j}.$$
Thus we obtain infinitely many distinct $n_j$; re-index the distinct $n_j$ to obtain the subsequence $\{x_{n_\ell}\}_{\ell=1}^\infty$. By construction,
$$|x_{n_\ell} - x| \le |x_{n_\ell} - z_{N_\ell}| + |z_{N_\ell} - x| \le \frac{1}{2\ell} + \frac{1}{2\ell} = \frac{1}{\ell}.$$
Thus $(x_{n_\ell})_{\ell=1}^\infty$ converges to $x$.

Now since $(x_n)_{n=1}^\infty$ is a Cauchy sequence, it follows that the whole sequence converges to $x$. Let $\epsilon > 0$. There exists $N$ such that for all $\ell \ge N$,
$$|x_{n_\ell} - x| \le \epsilon/2.$$
There also exists $N'$ such that if $j, k \ge N'$,
$$|x_j - x_k| \le \epsilon/2.$$
Choose $\ell \ge N$ with $n_\ell \ge N'$. Hence for all $n \ge n_\ell$,
$$|x_n - x| \le |x_n - x_{n_\ell}| + |x_{n_\ell} - x| \le \epsilon/2 + \epsilon/2 = \epsilon.$$
Thus $(x_n)_{n=1}^\infty$ converges to $x$.

Winter 2002 #4. Prove that the set of irrational numbers in $\mathbb{R}$ is not a countable union of closed sets.

Suppose for the sake of contradiction that the set of irrational numbers $I$ can be represented as
$$I = \bigcup_{n \in \mathbb{N}} F_n$$
where the $F_n$ are closed. Then
$$\mathbb{R} = \Big(\bigcup_{n \in \mathbb{N}} F_n\Big) \cup \Big(\bigcup_{r \in \mathbb{Q}} \{r\}\Big).$$
Since $\mathbb{R}$ is a complete metric space, the Baire Category Theorem implies that one of the closed sets in the union on the right hand side has non-empty interior. Clearly it is not $\{r\}$ for some rational $r$, so some $F_n$ must have non-empty interior. Thus there exist $x \in F_n$ and $\epsilon > 0$ such that $B(x, \epsilon) \subseteq F_n \subseteq I$. But the rationals are dense in $\mathbb{R}$, so some rational number is an element of $B(x, \epsilon)$ and thus an element of $I$, a contradiction.

Fall 2002 # 3; Spring 2002 #2. Show that the set $\mathbb{Q}$ of rational numbers in $\mathbb{R}$ is not expressible as the intersection of a countable collection of open subsets of $\mathbb{R}$.
Fall 2012 #5. Use the Baire Category Theorem to prove this.

Suppose for the sake of contradiction that $\mathbb{Q} = \bigcap_{n \in \mathbb{N}} U_n$, where $U_n$ is open for each $n$. Clearly $\mathbb{Q} \subseteq U_n$ for each $n$, and since the rational numbers are dense in $\mathbb{R}$, each $U_n$ is dense in $\mathbb{R}$. For each rational number $r$, $\mathbb{R} \setminus \{r\}$ is open and dense in $\mathbb{R}$. Thus
$$\emptyset = I \cap \mathbb{Q} = \Big(\bigcap_{r \in \mathbb{Q}} \mathbb{R} \setminus \{r\}\Big) \cap \Big(\bigcap_{n \in \mathbb{N}} U_n\Big)$$
is a countable intersection of dense open sets. By the Baire Category Theorem, however, such an intersection must be dense in $\mathbb{R}$, a contradiction.

Spring 2003 #3. Find a subset $S$ of the real numbers $\mathbb{R}$ such that both (i) and (ii) hold for $S$:

(i) $S$ is not the countable union of closed sets.

(ii) $S$ is not the countable intersection of open sets.

Let $A$ be a subset of $[0, 1]$ that is not a countable union of closed sets, and let $B$ be a subset of $[2, 3]$ that is not a countable intersection of open sets. (The irrationals in $[0, 1]$ and the rationals in $[2, 3]$, for instance, by the two preceding problems.) We show that $S := A \cup B$ satisfies (i) and (ii).

Suppose for the sake of contradiction that $S$ is the countable union of closed sets $\{F_n\}_{n=1}^\infty$. Then
$$A = S \cap [0, 1] = \Big(\bigcup_{n=1}^\infty F_n\Big) \cap [0, 1] = \bigcup_{n=1}^\infty (F_n \cap [0, 1]).$$
Note that $F_n \cap [0, 1]$ is closed for each $n$, so $A$ is a countable union of closed sets, a contradiction.

Likewise, if $S$ is the countable intersection of open sets, it follows that $B$ is the countable intersection of open sets, a contradiction. Hence $S$ satisfies (i) and (ii).

Spring 2002 #1. Prove that the closed interval $[0, 1]$ is connected.

Suppose there exist disjoint non-empty open sets $A, B$ (open in $[0, 1]$) such that $A \cup B = [0, 1]$. Suppose without loss of generality that $1 \in B$. Clearly $[0, 1]$ is bounded, thus $A$ is bounded, so by the least upper bound principle we can define
$$c = \sup(A).$$
Since $[0, 1]$ is closed, $c \in [0, 1]$. We show that $c$ cannot be in either $A$ or $B$. Suppose for the sake of contradiction that $c \in A$. Note that $c < 1$ since $1 \in B$ and $A$ and $B$ are disjoint. Since $A$ is open, there exists some ball of radius $\epsilon > 0$ at $c$ such that $B(c, \epsilon) \cap [0, 1] \subseteq A$. But then $c + \min(\epsilon, 1 - c)/2 \in A$, contradicting that $c = \sup(A)$.

Suppose for the sake of contradiction that $c \in B$. If $c = 0$, then $A = \{0\}$, so $0 \in A \cap B$, contradicting disjointness. Hence $c > 0$. Since $B$ is open, there exists some ball of radius $\epsilon > 0$ at $c$ such that $B(c, \epsilon) \cap [0, 1] \subseteq B$. But then $c - \min(\epsilon, c)$ is an upper bound for $A$, contradicting that $c = \sup(A)$.

Thus $c \in [0, 1] \setminus (A \cup B)$, contradicting $A \cup B = [0, 1]$. Hence $[0, 1]$ is connected.

Winter 2002 #3. Prove that the open ball in $\mathbb{R}^2$
$$\{(x, y) \in \mathbb{R}^2 : x^2 + y^2 < 1\}$$
is connected. [You may assume that intervals in $\mathbb{R}$ are connected. You should not just quote other general results, but give a direct proof.]

Lemma: The image of a connected set under a continuous function is connected.

Proof: Let $S$ be connected and $f$ be continuous. Suppose for the sake of contradiction that $f(S)$ is disconnected, so $f(S) = A \cup B$, with $A$ and $B$ disjoint non-empty open sets. It follows that $f^{-1}(A)$ and $f^{-1}(B)$ are disjoint and non-empty. Since $f$ is continuous, $f^{-1}(A)$ and $f^{-1}(B)$ are also open. But $S = f^{-1}(A) \cup f^{-1}(B)$, so $S$ is not connected, a contradiction.

Lemma: Let $\{S_\alpha\}_{\alpha \in M}$ be connected subsets of $X$. Suppose $\bigcap_\alpha S_\alpha \ne \emptyset$. Then $\bigcup_\alpha S_\alpha$ is connected.

Proof: Suppose $S = \bigcup_\alpha S_\alpha = G \cup H$, where $G, H$ are non-empty disjoint open sets. Choose $x_0 \in \bigcap_\alpha S_\alpha$; without loss of generality $x_0 \in G$. Fix $\alpha$. Note $S_\alpha = (S_\alpha \cap G) \cup (S_\alpha \cap H)$ and $x_0 \in S_\alpha \cap G$. Since $S_\alpha$ is connected, we get $S_\alpha \cap H = \emptyset$. Since this holds for all $\alpha$, $S \cap H = \emptyset$. Since $H \subseteq S$, $H = \emptyset$, a contradiction.

Let $\theta \in [0, 2\pi)$ and define $f_\theta : [0, 1) \to \mathbb{R}^2$ by $f_\theta(t) = (t\cos\theta, t\sin\theta)$. Then $f_\theta$ is continuous, so by the first lemma $f_\theta([0, 1))$ is connected for each $\theta$. We may write the open unit ball as $\bigcup_{\theta \in [0, 2\pi)} f_\theta([0, 1))$. Note also that $(0, 0) \in f_\theta([0, 1))$ for each $\theta$. Hence by the second lemma, $\bigcup_{\theta \in [0, 2\pi)} f_\theta([0, 1))$ is connected, so the open unit ball is connected.

Winter 2002 #2. Prove that the unit interval $[0, 1]$ is sequentially compact, i.e., that every infinite sequence has a convergent subsequence. [Prove this directly. Do not just quote general theorems like Heine-Borel.]

Let $(x_n)_{n=1}^\infty$ be an infinite sequence in $[0, 1]$. Clearly this sequence is bounded. Let $I_0 = [0, 1]$ and $n_0 = 0$. If the left half of $I_0$ contains infinitely many terms of $(x_n)_{n=1}^\infty$, set $I_1 = [0, 1/2]$. Otherwise, the right half of $I_0$ must contain infinitely many terms of the sequence; set $I_1 = [1/2, 1]$. Now assume inductively that $k \ge 1$, that we have chosen $n_j$ with $x_{n_j} \in I_j$ for all $j < k$, and that we have constructed $I_k$ of length $2^{-k}$ such that infinitely many terms of the sequence $(x_n)_{n=1}^\infty$ lie in $I_k$. Select $n_k > n_{k-1}$ such that $x_{n_k} \in I_k$. If the left half of $I_k$ contains infinitely many terms of $(x_n)_{n=1}^\infty$, set $I_{k+1}$ to be the left half of $I_k$. Otherwise, the right half of $I_k$ contains infinitely many terms of $(x_n)_{n=1}^\infty$; set $I_{k+1}$ to be the right half of $I_k$. This completes the induction, so we have a subsequence $(x_{n_k})_{k=1}^\infty$ such that $x_{n_k} \in I_k$ for each $k$. Since $I_{k+1} \subseteq I_k$ for each $k$, with the length of $I_k$ equal to $2^{-k}$, $(x_{n_k})_{k=1}^\infty$ is a Cauchy sequence. Thus this subsequence converges to some $x \in [0, 1]$, since $\mathbb{R}$ is complete and $[0, 1]$ is closed.
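The halving construction can be carried out mechanically. Here is a small sketch of my own (illustrative only; with a finite prefix "infinitely many" is replaced by "the most remaining terms") that tracks the nested halves and extracts increasing indices from them.

    def bisection_subsequence(xs, steps=5):
        # Follow the halving argument on a finite prefix: at each step keep the
        # half of the current interval containing the most remaining terms, and
        # record one term (with increasing index) that lies in it.
        lo, hi, last_idx, picks = 0.0, 1.0, -1, []
        for _ in range(steps):
            mid = (lo + hi) / 2
            left = [i for i, x in enumerate(xs) if i > last_idx and lo <= x <= mid]
            right = [i for i, x in enumerate(xs) if i > last_idx and mid <= x <= hi]
            candidates = left if len(left) >= len(right) else right
            lo, hi = (lo, mid) if len(left) >= len(right) else (mid, hi)
            if not candidates:
                break
            last_idx = candidates[0]
            picks.append((last_idx, xs[last_idx]))
        return picks

    import random
    random.seed(0)
    print(bisection_subsequence([random.random() for _ in range(200)]))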

Fall 2009 #1. (i) For each $n \in \mathbb{N}$ let $f_n : \mathbb{N} \to \mathbb{R}$ be a function with $|f_n(m)| \le 1$ for all $m, n \in \mathbb{N}$. Prove that there is an infinite subsequence of distinct positive integers $n_i$ such that for each $m \in \mathbb{N}$, $f_{n_i}(m)$ converges.

(ii) For $n_i$ as in (i), assume that in addition $\lim_{m \to \infty} \lim_{i \to \infty} f_{n_i}(m)$ exists and equals 0. Prove or disprove: The same holds for the reverse double limit $\lim_{i \to \infty} \lim_{m \to \infty} f_{n_i}(m)$.

(i) We use a diagonal argument. For each fixed $m$, the real sequence $(f_n(m))_{n=1}^\infty$ lies in $[-1, 1]$, so by the Bolzano-Weierstrass theorem it has a convergent subsequence. Choose an infinite set of indices $N_1 \subseteq \mathbb{N}$ along which $f_n(1)$ converges; inductively, choose an infinite subset $N_{k+1} \subseteq N_k$ along which $f_n(k+1)$ converges. Let $n_i$ be the $i$-th smallest element of $N_i$. Then $n_1 < n_2 < \cdots$, and for each $m$ all but finitely many of the $n_i$ lie in $N_m$, so $(f_{n_i}(m))_{i=1}^\infty$ converges for every $m \in \mathbb{N}$.

(ii) Consider $f_n(m) = 1$ for $n < m$ and $f_n(m) = 0$ for $n \ge m$ (so that, for instance, $n_i = i$ works in part (i)). Then
$$\lim_{m \to \infty} \lim_{i \to \infty} f_{n_i}(m) = \lim_{m \to \infty} 0 = 0.$$
However,
$$\lim_{i \to \infty} \lim_{m \to \infty} f_{n_i}(m) = \lim_{i \to \infty} 1 = 1.$$
This serves as a counterexample to the given statement.

Spring 2002 #4; Spring 2003 #1; Spring 2009 #6. (a) Define uniform continuity of a function $F : X \to \mathbb{R}$, $X$ a metric space.

(b) Prove that a function $f : (0, 1) \to \mathbb{R}$ is the restriction to $(0, 1)$ of a continuous function $F : [0, 1] \to \mathbb{R}$ if and only if $f$ is uniformly continuous on $(0, 1)$.

(a) $F : X \to \mathbb{R}$ is uniformly continuous if for all $\epsilon > 0$, there exists $\delta > 0$ such that whenever $d(x, y) < \delta$, $|F(x) - F(y)| \le \epsilon$.

(b) Suppose $F : [0, 1] \to \mathbb{R}$ is continuous and $f = F|_{(0,1)}$. We show $f$ is uniformly continuous. Let $\epsilon > 0$. For each $x \in [0, 1]$, let $\delta_x > 0$ be such that if $|x - y| \le \delta_x$, then $|F(x) - F(y)| \le \epsilon/2$. The balls $B(x, \delta_x/2)$ cover $[0, 1]$; since $[0, 1]$ is compact, finitely many of them, say $B(x_1, \delta_{x_1}/2), \ldots, B(x_N, \delta_{x_N}/2)$, cover $[0, 1]$. Let $\delta = \min_i \delta_{x_i}/2$. If $|x - y| < \delta$, then $x \in B(x_i, \delta_{x_i}/2)$ for some $i$, and $|y - x_i| \le |y - x| + |x - x_i| \le \delta_{x_i}$, so
$$|F(x) - F(y)| \le |F(x) - F(x_i)| + |F(x_i) - F(y)| \le \epsilon/2 + \epsilon/2 = \epsilon.$$
Thus $F$, and hence $f$, is uniformly continuous. (Faster: $F$ is continuous on a compact set, so $F$ is uniformly continuous; thus $f$ is uniformly continuous.)

Conversely, suppose $f$ is uniformly continuous on $(0, 1)$. We show there exists a continuous function $F : [0, 1] \to \mathbb{R}$ such that $F|_{(0,1)} = f$. Uniform continuity implies that for any sequence $x_n \to 0$ in $(0, 1)$, $(f(x_n))$ is a Cauchy sequence, and any two such sequences yield the same limit; hence the one-sided limits $f(0+)$ and $f(1-)$ exist. Define $F : [0, 1] \to \mathbb{R}$ by $F(x) = f(x)$ for $x \in (0, 1)$, $F(0) = f(0+)$, and $F(1) = f(1-)$. By construction, $F$ is continuous at 0 and 1. Since $f$ is uniformly continuous on $(0, 1)$, $F$ is continuous on $(0, 1)$. Thus $F$ is continuous on $[0, 1]$, and $F|_{(0,1)} = f$ by construction.



Spring 2004 #2. Is $f(x) = \sqrt{x}$ uniformly continuous on $[0, \infty)$? Prove your assertion.
Spring 2006 #5. Prove that if $0 < \alpha < 1$, then $F(x) = x^\alpha$ is uniformly continuous on $[0, \infty)$.
Fall 2008 #1. For which of the values $a = 0, 1, 2$ is the function $f(t) = t^a$ uniformly continuous on $[0, \infty)$? Prove your assertions.

Consider $f(t) = t^\alpha$ for $0 < \alpha < 1$. Let $\epsilon > 0$ and take $\delta = \epsilon/\alpha$. If $x, y \in [1, \infty)$ and $|x - y| \le \delta$, then by the mean value theorem, there exists $c$ between $x$ and $y$ such that
$$|f(x) - f(y)| = |f'(c)||x - y| = \alpha c^{\alpha - 1}|x - y| \le \alpha|x - y| \le \epsilon.$$
Thus $f$ is uniformly continuous on $[1, \infty)$. Now $x^\alpha$ is continuous on $[0, 1]$, so since $[0, 1]$ is compact, $x^\alpha$ is uniformly continuous on $[0, 1]$.

For each $\epsilon > 0$, there exists $\delta_1 > 0$ such that if $x, y \in [0, 1]$ and $|x - y| \le \delta_1$, then $|f(x) - f(y)| \le \epsilon/2$. There also exists $\delta_2 > 0$ such that if $x, y \in [1, \infty)$ and $|x - y| \le \delta_2$, then $|f(x) - f(y)| \le \epsilon/2$. Let $\delta = \min(\delta_1, \delta_2)$. Suppose $x, y \in [0, \infty)$ with $|x - y| \le \delta$. If $x, y \in [0, 1]$ or $x, y \in [1, \infty)$, clearly $|f(x) - f(y)| \le \epsilon$. Otherwise, suppose without loss of generality that $x \in [0, 1]$ and $y \in [1, \infty)$. Then $|x - 1| \le \delta$ and $|y - 1| \le \delta$, so
$$|f(x) - f(y)| \le |f(x) - f(1)| + |f(1) - f(y)| \le \epsilon/2 + \epsilon/2 = \epsilon.$$
Thus $f$ is uniformly continuous on $[0, \infty)$. In particular $f(x) = \sqrt{x}$ (the case $\alpha = 1/2$) is uniformly continuous on $[0, \infty)$.

Clearly $f(t) = t^a$ is uniformly continuous on $[0, \infty)$ for $a = 0, 1$. But $f(t) = t^2$ is not uniformly continuous on $[0, \infty)$. Let $\delta > 0$. By selecting $x = n$, $y = n + \delta$ for $n \ge 1/(2\delta)$, we have $|x - y| \le \delta$, but
$$|f(y) - f(x)| = 2n\delta + \delta^2 \ge 2n\delta \ge 1.$$
Thus $f(t) = t^2$ is not uniformly continuous on $[0, \infty)$.
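A quick numeric illustration (my own, not part of the solution) of the contrast: for a fixed increment $\delta$, $|\sqrt{x+\delta} - \sqrt{x}|$ shrinks as $x$ grows, while $|(x+\delta)^2 - x^2|$ grows without bound.

    import math

    delta = 0.01
    for x in [1, 100, 10_000, 1_000_000]:
        sqrt_gap = math.sqrt(x + delta) - math.sqrt(x)     # -> 0 as x grows
        square_gap = (x + delta) ** 2 - x ** 2             # = 2*x*delta + delta**2, unbounded
        print(x, sqrt_gap, square_gap)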

Fall 2009 #6. Consider the function $f(x, y) = \sin^3(xy) + y^2|x|$ defined on the region $S \subseteq \mathbb{R}^2$ given by
$$S = \{(x, y) \in \mathbb{R}^2 : x^{2010} + y^{2010} \le 1\}.$$
Define what it means for $f$ to be uniformly continuous on $S$ and prove that $f$ is indeed uniformly continuous. (You can use any theorem you wish in the proof, as long as it is stated correctly and you justify properly why it can be applied, e.g., if you are using a general theorem on continuous functions, show that the function in question is indeed continuous, and if you are using a metric property of a set explain why it has it.)

We say $f$ is uniformly continuous on $S$ if given $\epsilon > 0$, there exists $\delta > 0$ such that for any $(x, y), (x', y') \in S$ with $|(x, y) - (x', y')|_2 \le \delta$, we have
$$|f(x, y) - f(x', y')| \le \epsilon.$$
Clearly the projections from $\mathbb{R}^2$ onto the two coordinates are continuous. Also, $|x|$ and $\sin(x)$ are continuous. Then $f(x, y)$ is built from compositions, sums, and products of continuous functions, hence it is continuous.

We show that $S$ is a compact set, so since $f$ is continuous on the compact set $S$, $f$ is uniformly continuous on $S$. If $|x| > 1$ or $|y| > 1$, then $x^{2010} + y^{2010} > 1$, so $(x, y) \notin S$. Thus $S$ is bounded. Let $((x_n, y_n))_{n=1}^\infty$ be a sequence of points in $S$ which converges to some $(x, y)$. By definition of $S$,
$$x_n^{2010} + y_n^{2010} \le 1$$
for all $n$. Now $g(x, y) = x^{2010} + y^{2010}$ is a continuous function on $\mathbb{R}^2$ (iterating that a product or sum of continuous functions is continuous). Thus $(g(x_n, y_n))_{n=1}^\infty$ approaches $g(x, y)$. The terms $g(x_n, y_n)$ lie in the closed set $[0, 1]$, hence $g(x, y) \in [0, 1]$. Hence $x^{2010} + y^{2010} \le 1$, so $S$ is closed. As a closed and bounded set in $\mathbb{R}^2$, $S$ is compact (Heine-Borel).

Spring 2012 #3. Prove the Bolzano-Weierstrass theorem in the following form: Each sequence $(a_n)_{n \in \mathbb{N}}$ of numbers $a_n$ in the closed interval $[0, 1]$ has a convergent subsequence.

Keep bisecting the interval, continuing in a half that contains infinitely many terms of the sequence; see Winter 2002 #2 above.

Fall 2004 #6. The Bolzano-Weierstrass Theorem in $\mathbb{R}^n$ states that if $S$ is a bounded closed subset of $\mathbb{R}^n$ and $(x_n)$ is a sequence which takes values in $S$, then $(x_n)$ has a subsequence which converges to a point in $S$. Assume this statement known in case $n = 1$, and use it to prove the statement in case $n = 2$.

Suppose $S$ is a bounded closed subset of $\mathbb{R}^2$ and $((x_n, y_n))_{n=1}^\infty$ is a sequence which takes values in $S$. Define $S_x$ and $S_y$ to be the projections of $S$ onto the first and second coordinates; these are bounded, so their closures $\overline{S_x}$ and $\overline{S_y}$ are closed bounded subsets of $\mathbb{R}$. Then $(x_n)_{n=1}^\infty$ is a sequence in $\overline{S_x}$. By the Bolzano-Weierstrass Theorem on $\mathbb{R}$, $(x_n)_{n=1}^\infty$ has a subsequence $(x_{n_k})_{k=1}^\infty$ which converges to some $x \in \overline{S_x}$.

By the Bolzano-Weierstrass Theorem on $\mathbb{R}$ again, the sequence $(y_{n_k})_{k=1}^\infty$ in $\overline{S_y}$ has a further subsequence $(y_{n_{k_j}})_{j=1}^\infty$ which converges to some $y \in \overline{S_y}$. The corresponding subsequence $(x_{n_{k_j}})_{j=1}^\infty$ still converges to $x$, and
$$|(x_{n_{k_j}}, y_{n_{k_j}}) - (x, y)|_2 = \big(|x_{n_{k_j}} - x|^2 + |y_{n_{k_j}} - y|^2\big)^{1/2}.$$
Letting $j \to \infty$, the expression on the right approaches 0, hence
$$((x_{n_{k_j}}, y_{n_{k_j}}))_{j=1}^\infty$$
is a subsequence of $((x_n, y_n))_{n=1}^\infty$ which converges to $(x, y)$. Since $S$ is closed, $(x, y) \in S$, so we have shown the Bolzano-Weierstrass Theorem holds in $\mathbb{R}^2$.

Spring 2007 #10. Suppose the functions $f_n(x)$ on $\mathbb{R}$ satisfy:

(i) $0 \le f_n(x) \le 1$ for all $x \in \mathbb{R}$ and $n \ge 1$.

(ii) $f_n(x)$ is increasing in $x$ for every $n \ge 1$.

(iii) $\lim_{n \to \infty} f_n(x) = f(x)$ for each $x \in \mathbb{R}$, where $f$ is continuous on $\mathbb{R}$.

(iv) $\lim_{x \to -\infty} f(x) = 0$ and $\lim_{x \to \infty} f(x) = 1$.

Show that $f_n(x) \to f(x)$ uniformly on $\mathbb{R}$.

For any $x, y \in \mathbb{R}$ with $x \le y$, by condition (ii),
$$f_n(x) - f_n(y) \le 0$$
for all $n \ge 1$. Taking the limit as $n \to \infty$ and using condition (iii),
$$f(x) - f(y) \le 0.$$
Thus $f$ is increasing. Taking the limit as $n \to \infty$ in condition (i) and using condition (iii), we find
$$0 \le f(x) \le 1$$
for all $x \in \mathbb{R}$.

Let $\epsilon > 0$. Using condition (iv), select $a$ and $b$ such that $f(a) < \epsilon/2$ and $f(b) > 1 - \epsilon/2$. Then for any $x, y < a$, since $f$ is increasing and non-negative,
$$|f(x) - f(y)| \le f(a) - 0 \le \epsilon/2 < \epsilon.$$
Likewise, if $x, y > b$, then
$$|f(x) - f(y)| \le 1 - f(b) \le 1 - (1 - \epsilon/2) = \epsilon/2 < \epsilon.$$
Since $[a, b]$ is compact and $f$ is continuous, $f$ is uniformly continuous on $[a, b]$. Thus there exists $\delta > 0$ such that if $x, y \in [a, b]$ and $|x - y| \le \delta$, then $|f(x) - f(y)| \le \epsilon/2$. Choose $\delta < b - a$ as well, so that $|x - y| \le \delta$ prohibits having $x < a$ and $y > b$ simultaneously. If $x < a$ and $a \le y \le b$ with $|x - y| \le \delta$, then $|y - a| \le \delta$, so
$$|f(x) - f(y)| \le |f(x) - f(a)| + |f(a) - f(y)| \le \epsilon/2 + \epsilon/2 = \epsilon,$$
and similarly if $a \le x \le b$ and $y > b$. Thus $f$ is uniformly continuous on $\mathbb{R}$.

Now let $\epsilon > 0$. Select $\delta$ such that if $x, y \in [a, b]$ and $|x - y| \le \delta$, then $|f(x) - f(y)| \le \epsilon/2$. Partition $[a, b]$ into a finite number of intervals of length less than $\delta$: choose $a = x_0 \le x_1 \le \cdots \le x_n = b$ such that $x_{i+1} - x_i \le \delta$ for all $i$. Since $f_n$ converges to $f$ pointwise, there exists $N$ such that for all $n \ge N$, $|f(x_i) - f_n(x_i)| \le \epsilon/4$ for all $i$. For any $y \in [x_i, x_{i+1}]$ and any $n \ge N$, using that $f_n$ and $f$ are increasing,
$$|f_n(y) - f(y)| \le |f_n(y) - f_n(x_i)| + |f_n(x_i) - f(x_i)| + |f(x_i) - f(y)|$$
$$\le \big(f(x_{i+1}) + \epsilon/4\big) - \big(f(x_i) - \epsilon/4\big) + \epsilon/4 + \epsilon/2 \le \epsilon/2 + \epsilon/2 + \epsilon/4 + \epsilon/2 < 2\epsilon.$$
Thus the $f_n$ converge to $f$ uniformly on $[a, b]$.

By the choice of $a$, for any $y < a$ and any $n \ge N$, both $f_n(y) \le f_n(a) \le f(a) + \epsilon/4 \le \epsilon/2 + \epsilon/4$ and $0 \le f(y) \le f(a) \le \epsilon/2$, so
$$|f_n(y) - f(y)| \le \epsilon/2 + \epsilon/4 < \epsilon.$$
Likewise, for any $y > b$ and any $n \ge N$,
$$|f_n(y) - f(y)| \le \epsilon.$$
Thus $(f_n)_{n=1}^\infty$ converges uniformly to $f$ on $\mathbb{R}$.

Spring 2010 #7. Let $\{f_n\}$ be a sequence of real-valued functions on the line, and assume that there is a $B < \infty$ such that $|f_n(x)| \le B$ for all $n$ and $x$. Prove that there is a subsequence $\{f_{n_k}\}$ such that $\lim_{k \to \infty} f_{n_k}(r)$ exists for all rational numbers $r$.

Enumerate the rationals as $\mathbb{Q} = \{r_1, r_2, \ldots\}$ and argue as in Fall 2009 #1(i). For each fixed $j$, the sequence $(f_n(r_j))_{n=1}^\infty$ lies in $[-B, B]$, so by the Bolzano-Weierstrass theorem it has a convergent subsequence. Choose an infinite index set $N_1 \subseteq \mathbb{N}$ along which $f_n(r_1)$ converges; inductively choose an infinite $N_{j+1} \subseteq N_j$ along which $f_n(r_{j+1})$ converges. Let $n_k$ be the $k$-th smallest element of $N_k$. Then $n_1 < n_2 < \cdots$, and for each $j$ all but finitely many of the $n_k$ lie in $N_j$, so $\lim_{k \to \infty} f_{n_k}(r_j)$ exists for every $j$. It follows that we have pointwise convergence on all rationals, as desired.

Spring 2004 #4. Are there infinite compact subsets of $\mathbb{Q}$? Prove your assertion.

Yes, $\{1/n : n \in \mathbb{N}, n \ge 1\} \cup \{0\}$ is an infinite compact subset of $\mathbb{Q}$. Clearly this set is bounded by 1, and it contains its only limit point, 0, hence it is closed. Thus it is compact by Heine-Borel.

Fall 2008 #2. Suppose that $A$ is a non-empty connected subset of $\mathbb{R}^2$.

(a) Prove that if $A$ is open, then it is path connected.

(b) Is part (a) true if $A$ is closed? Prove your assertion.

(a) Let $a \in A$. Define $H$ to be the subset of points in $A$ which can be joined to $a$ by a path in $A$. Let $K = A \setminus H$.

Let $x \in H$. Since $A$ is open, there exists $\epsilon > 0$ such that $B_\epsilon(x) \subseteq A$. Given any $y \in B_\epsilon(x)$, there is a straight line path $g$ in $B_\epsilon(x) \subseteq A$ connecting $x$ and $y$. Since $x \in H$, there is a path $f$ in $A$ joining $a$ to $x$. Thus traversing $f$ and then $g$ forms a path from $a$ to $y$. It follows that $y \in H$, hence $B_\epsilon(x) \subseteq H$. Thus $H$ is open.

Let $x \in K$. Since $A$ is open, there exists $\epsilon > 0$ such that $B_\epsilon(x) \subseteq A$. If any point in $B_\epsilon(x)$ could be joined to $a$ by a path in $A$, then so could $x$, a contradiction. Hence $B_\epsilon(x) \subseteq K$, so $K$ is open.

Clearly $H \cap K = \emptyset$, $H \cup K = A$, and $a \in H$, so $H$ is non-empty. Since $A$ is connected, we must have $K = \emptyset$, so $H = A$. Thus $A$ is path connected.

(b) No. Consider the set
$$A = \Big\{\Big(x, \sin\frac{1}{x}\Big) : x \in (0, 1]\Big\} \cup \{(0, x) : x \in [-1, 1]\}.$$
We show that $A$ is closed and connected, but not path connected. For convenience, let $B = \{(x, \sin\frac{1}{x}) : x \in (0, 1]\}$ and $C = \{(0, x) : x \in [-1, 1]\}$. Then $C$ is the set of limit points of $B$ which are not already in $B$, so $A = B \cup C$ is the closure of $B$ and hence is closed. Defining $f : (0, 1] \to \mathbb{R}^2$ by $f(x) = (x, \sin\frac{1}{x})$, we see that $f$ is continuous, thus since $(0, 1]$ is connected, $B = f((0, 1])$ is connected as well. Clearly $C$ is connected.

Suppose for the sake of contradiction that $A = B \cup C$ is not connected. Then there exist disjoint non-empty open sets $U, V$ with $A \subseteq U \cup V$, each meeting $A$. Since $B$ and $C$ are connected, we can assume without loss of generality that $B \subseteq U$ and $C \subseteq V$. Now each element of $C$ is a limit point of $B$, so every ball centered at some $(0, x) \in C$ must contain infinitely many elements of $B$. Since $V$ is open and contains $C$, such a ball lies in $V$, so $U$ and $V$ are not disjoint, a contradiction. Hence $A$ is connected.

Now any path in $A$ connecting some point of $C$ with some point of $B$ is a continuous map $f : [0, 1] \to A$ with $f(0) \in C$ and $f(1) \in B$. One can show (using the oscillation of $\sin(1/x)$ near $x = 0$) that the set of $t$ with $f(t) \in C$ is non-empty, closed, and also open in $[0, 1]$, which forces $f([0, 1]) \subseteq C$, contradicting $f(1) \in B$. Thus there does not exist a path between an element of $B$ and an element of $C$, so $A$ is not path connected.

Spring 2011 #11. Show that a connected subset $A \subseteq \mathbb{R}$ is arcwise connected (= path-connected).

Let $A$ be a connected subset of $\mathbb{R}$. It follows that $A$ is an interval. Let $x, y \in A$. Then necessarily $[x, y] \subseteq A$. Define $f : [0, 1] \to [x, y]$ by $f(t) = x + t(y - x)$. Clearly $f$ is continuous, $f(0) = x$, and $f(1) = y$, so $f$ is a path in $A$ connecting $x$ and $y$. Hence $A$ is path-connected.

Spring 2004 #6. Let $\|\cdot\|$ be any norm on $\mathbb{R}^n$.

(a) Prove that there exists a constant $d$ with $\|x\| \le d\|x\|_2$ for all $x \in \mathbb{R}^n$, and use this to show that $N(x) = \|x\|$ is continuous in the usual topology on $\mathbb{R}^n$.

(b) Prove that there exists a constant $c > 0$ with $\|x\| \ge c\|x\|_2$. (Hint: use the fact that $N$ is continuous on the sphere $\{x : \|x\|_2 = 1\}$.)

(c) Show that if $L$ is an $n$-dimensional subspace of an arbitrary normed vector space $V$, then $L$ is closed.

(a) Define $d = \sqrt{\sum_i \|e_i\|^2}$, where $e_1, \ldots, e_n$ is the standard basis. Write $x = \sum_i x_i e_i$. By the triangle inequality and the Cauchy-Schwarz inequality,
$$\|x\| = \Big\|\sum_i x_i e_i\Big\| \le \sum_i \|x_i e_i\| = \sum_i |x_i|\,\|e_i\| = (|x_1|, \ldots, |x_n|) \cdot (\|e_1\|, \ldots, \|e_n\|) \le \sqrt{\sum_i x_i^2}\,\sqrt{\sum_i \|e_i\|^2} = d\|x\|_2.$$
Let $\epsilon > 0$. For any $x, y \in \mathbb{R}^n$ with $\|x - y\|_2 \le \epsilon/(d + 1)$, by the reverse triangle inequality,
$$\|x\| - \|y\| \le \|x - y\| \le d\|x - y\|_2 \le d \cdot \epsilon/(d + 1) \le \epsilon.$$
By symmetry, we conclude
$$\big|\|x\| - \|y\|\big| \le \epsilon.$$
Thus $N$ is continuous with respect to the usual topology on $\mathbb{R}^n$.

(b) It follows from (a) that $N$ is continuous on the sphere $T := \{x : \|x\|_2 = 1\}$. Since $T$ is a compact set, $N$ achieves its minimum $c$ on $T$; since $\|\cdot\|$ is a norm and $0 \notin T$, $c > 0$. For any $x \in \mathbb{R}^n$ with $x \ne 0$, since $\frac{x}{\|x\|_2} \in T$,
$$\|x\| = \|x\|_2\,\Big\|\frac{x}{\|x\|_2}\Big\| \ge c\|x\|_2.$$

(c) Since $L$ is $n$-dimensional, there exists a linear isomorphism $T : \mathbb{R}^n \to L$. Define $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ by
$$\|x\| = \|Tx\|_V;$$
this is a norm on $\mathbb{R}^n$ since $T$ is injective. From parts (a) and (b), there exist constants $c > 0$ and $d$ such that for any $x \in \mathbb{R}^n$,
$$c\|x\|_2 \le \|x\| \le d\|x\|_2.$$
Let $(y_k)_{k=1}^\infty$ be a sequence of points in $L$ that converges to some $v \in V$ with respect to the norm $\|\cdot\|_V$ of $V$. In particular, $(y_k)_{k=1}^\infty$ is a Cauchy sequence with respect to $\|\cdot\|_V$. It follows that
$$c\|T^{-1}y_i - T^{-1}y_j\|_2 \le \|T^{-1}y_i - T^{-1}y_j\| = \|y_i - y_j\|_V \le d\|T^{-1}y_i - T^{-1}y_j\|_2.$$
Let $\epsilon > 0$. There exists $N$ such that for all $i, j \ge N$, $\|y_i - y_j\|_V \le c\epsilon$. It follows that
$$\|T^{-1}y_i - T^{-1}y_j\|_2 \le \epsilon.$$
Hence $(T^{-1}y_k)_{k=1}^\infty$ is a Cauchy sequence with respect to $\|\cdot\|_2$. Since this norm is complete, there exists some $z \in \mathbb{R}^n$ such that $(T^{-1}y_k)_{k=1}^\infty$ converges to $z$. Using the other side of the inequality, $\|y_k - Tz\|_V = \|T^{-1}y_k - z\| \le d\|T^{-1}y_k - z\|_2 \to 0$, so $(y_k)_{k=1}^\infty$ converges to $Tz \in L$ with respect to $\|\cdot\|_V$. By uniqueness of limits, $v = Tz \in L$. Hence $L$ is closed.

Fall 2005 #8. For a real $n \times n$ matrix $A$, let $T_A : \mathbb{R}^n \to \mathbb{R}^n$ be the associated linear mapping. Set $\|A\| = \sup_{x \in \mathbb{R}^n}\{\|T_A x\| : \|x\| = 1\}$ (here $\|x\|$ = usual euclidean norm, i.e.,
$$\|(x_1, \ldots, x_n)\| = (x_1^2 + \cdots + x_n^2)^{1/2}).$$

(a) Prove that $\|A + B\| \le \|A\| + \|B\|$.

(b) Use part (a) to check that the set $M$ of all $n \times n$ matrices is a metric space if the distance function $d$ is defined by
$$d(A, B) = \|B - A\|.$$

(c) Prove that $M$ is a complete metric space with this distance function. (Suggestion: The $ij$-th element of $A$ is $\langle T_A e_j, e_i \rangle$, where $e_i = (0, \ldots, 1, \ldots, 0)$, with a 1 in the $i$-th position.)

(a) By definition of the matrix norm, for all $x \in \mathbb{R}^n$, it follows that
$$\|T_A x\| \le \|A\|\,\|x\|.$$
Now let $x \in \mathbb{R}^n$ with $\|x\| = 1$. Then
$$\|T_{A+B} x\| = \|T_A x + T_B x\| \le \|T_A x\| + \|T_B x\| \le \|A\|\,\|x\| + \|B\|\,\|x\| = (\|A\| + \|B\|)\,\|x\| = \|A\| + \|B\|.$$
Thus taking the supremum over all such $x$,
$$\|A + B\| \le \|A\| + \|B\|.$$

(b) Clearly this distance function is non-negative and symmetric. If $d(A, B) = 0$, then $\|B - A\| = 0$. In particular, this implies that $\|(T_B - T_A)(e_i)\| = 0$ for all $i$, so $T_B - T_A = 0$. This implies that $A = B$.

Finally, for matrices $A, B, C$, by part (a),
$$d(A, B) = \|B - A\| = \|(B - C) + (C - A)\| \le \|B - C\| + \|C - A\| = d(B, C) + d(C, A).$$
Thus $d$ is a metric on real $n \times n$ matrices.

(c) Suppose $(A_n)_{n=1}^\infty$ is a Cauchy sequence with respect to the matrix norm. Let $\epsilon > 0$; there exists $N$ such that for all $j, k \ge N$,
$$\|A_j - A_k\| \le \epsilon.$$
This implies that for all $j, k \ge N$ and each $i$,
$$\Big(\sum_{\ell=1}^n (A_j - A_k)_{\ell i}^2\Big)^{1/2} = \|(T_{A_j} - T_{A_k}) e_i\| \le \epsilon,$$
so each entry of $A_j - A_k$ has absolute value at most $\epsilon$. Thus $((A_n)_{ij})_{n=1}^\infty$ forms a Cauchy sequence of real numbers for each $i, j$. Define a matrix $A$ by
$$A_{ij} = \lim_{n \to \infty} (A_n)_{ij}.$$
Since for any matrix $B$ and any unit vector $x$, the Cauchy-Schwarz inequality gives $\|T_B x\| \le \big(\sum_{i,j} B_{ij}^2\big)^{1/2}$, entrywise convergence implies $\|A_n - A\| \to 0$. Thus $(A_n)_{n=1}^\infty$ converges to $A$ with respect to the matrix norm, so the set $M$ of $n \times n$ matrices is a complete metric space with this distance function.
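A small numerical sketch (my own illustration using NumPy, not part of the solution): the operator norm above is the largest singular value, which numpy.linalg.norm computes with ord=2, and the triangle inequality from part (a) can be spot-checked.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    op = lambda M: np.linalg.norm(M, ord=2)    # sup of ||Mx|| over unit vectors x
    assert op(A + B) <= op(A) + op(B) + 1e-12  # part (a)
    assert abs(op(A) - np.linalg.svd(A, compute_uv=False)[0]) < 1e-12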

Spring 2005 #10. Consider the set of $f : [0, 1] \to \mathbb{R}$ that obey
$$|f(x) - f(y)| \le |x - y| \quad \text{and} \quad \int_0^1 f(x)\,dx = 1.$$
Show that this is a compact subset of $C([0, 1])$.

Call this set of functions $S$. The first condition clearly implies that $S \subseteq C([0, 1])$. We will show that $S$ is closed, uniformly bounded, and equicontinuous; then by the Arzelà-Ascoli theorem, $S$ is compact.

Let $(f_n)_{n=1}^\infty$ be a sequence of functions in $S$ converging to $f$ in $C([0, 1])$. We want to show $f \in S$. Let $\epsilon > 0$. Select $N$ such that for all $n \ge N$,
$$\sup_{x \in [0, 1]} |f(x) - f_n(x)| \le \epsilon/2.$$
Then for any $x, y \in [0, 1]$ and $n \ge N$,
$$|f(x) - f(y)| \le |f(x) - f_n(x)| + |f_n(x) - f_n(y)| + |f_n(y) - f(y)| \le \epsilon + |x - y|.$$
Letting $\epsilon \to 0$, we have
$$|f(x) - f(y)| \le |x - y|.$$
Since $(f_n)_{n=1}^\infty$ converges to $f$ in the supremum norm, the $f_n$ converge uniformly to $f$. Since the $f_n$ are continuous, it follows that $f$ is continuous. Also, because we have uniform convergence,
$$\int_0^1 f(x)\,dx = \int_0^1 \lim_{n \to \infty} f_n(x)\,dx = \lim_{n \to \infty} \int_0^1 f_n(x)\,dx = \lim_{n \to \infty} 1 = 1.$$
Hence $f \in S$, so $S$ is closed.

Let $f \in S$. Suppose for the sake of contradiction that there exists $y \in [0, 1]$ such that $f(y) \ge 3$. Then for any $x \in [0, 1]$,
$$|f(x) - f(y)| \le |x - y| \le 1,$$
hence $f(x) \ge 2$. Then
$$\int_0^1 f(x)\,dx \ge \int_0^1 2\,dx = 2 > 1,$$
a contradiction. Thus $f(y) \le 3$ for all $y \in [0, 1]$. Likewise, $f(y) \ge -3$ for all $y \in [0, 1]$, so $S$ is uniformly bounded.

Finally, uniform equicontinuity of $S$ follows immediately from the first condition on functions in $S$ (given $\epsilon > 0$, take $\delta = \epsilon$).

Spring 2006 #6. Let $W$ be the subset of the space $C[0, 1]$ of real-valued continuous functions on $[0, 1]$ satisfying the conditions:
$$|f(x) - f(y)| < |x - y| \quad \text{and} \quad \int_0^1 f(x)^2\,dx = 1.$$
Edit: The first condition must be strict.

(a) Prove that $W$ is uniformly bounded, i.e., there exists $M > 0$ such that $|f(x)| \le M$ for all $x \in [0, 1]$ and $f \in W$.
Hint: Show first that $|f(0)| \le 2$ for all $f \in W$.

(b) Prove that $W$ is a compact subset of $C[0, 1]$ under the sup norm $\|f\| = \sup_{x \in [0, 1]} |f(x)|$.

Essentially the same proof as the previous exercise. Note $f_n^2$ converges uniformly to $f^2$ when $f_n \to f$ uniformly and the $f_n$ are uniformly bounded.

Fall 2007 #1. Let S be a subset of Rn with the distance function d(x, y) = ((x1 y1 )2 + + (xn yn )2 )1/2
so that (S, d|SS ) is a metric space.

(a) Given y S, is E = {x S : d(x, y) r} a closed set in S?

(b) Is the set E in part (a) contained in the closure of {x S : d(x, y) > r} in S?

(a) Yes. Let x S be a limit point of E. Then there exists a sequence (xn )
n=1 in E converging to x.
We have
d|SS (xn , y) = d(xn , y) r
for each n 1. Let > 0. Select N such that for all n N , d(xn , x) < . It follows that

d(xn , y) d(xn , x) + d(x, y)

thus
d(x, y) d(xn , y) d(xn , x) r .
Since was arbitrary, d(x, y) r. Thus because we assumed x S, it follows that x E. Thus E is a
closed set in S.

(b) No. Let S = {0} {1} {2} R, y = 0, and r = 1. Then

{x S : d(x, y) > r} = {x S : d(x, 0) > 1} = {2}.

The closure of this set in S is {2}, but

E := {x S : d(x, y) r} = {x S : d(x, 0) 1} = {1} {2},

which is not a subset of {2}.

Spring 2007 #12. Let c0 be the normed space of real sequences x = (x1 , x2 , . . .) such that limk xk = 0
with the supremum norm ||x|| = supk |xk |.

(a) Show that c0 is complete.

(b) Is the unit ball {x c0 : ||x|| 1} compact? Prove your answer.


P
(c) Is the set E = {x c0 : k k|xk | 1} compact? Prove your answer.

(a) Let (x(m) )


m=0 be a Cauchy sequence of elements of c0 . Then for any n, m, k N, we have

(m) (n)
|xk xk | ||x(m) x(n) ||
(m)
so that xk is a Cauchy sequence of real numbers for each k and thus converges. Define x so that
(m)
xk = lim xk .
m

We show that x is in c0 . Let > 0. Since x(m) is a Cauchy sequence, let N be an integer such that
m, n N implies
||x(m) x(n) || < /2.
Since x(N ) c0 , choose M such that k M implies
(N )
|xk | < /2.

Then for k M , we have


(N ) (N )
|xk | |xk xk | + |xk | /2 + /2 = .
Hence x c0 .
For m N and k N, we also have
(m) (n) (m) (n)
|xk xk | |xk xk | + |xk xk |.

For large enough n, by the definition of xk , this is small. Since k was arbitrary,

||x x(m) ||

for m N , so x(m) x. Hence c0 is complete.

(b) No. Suppose for the sake of contradiction that {x c0 : ||x|| 1} is compact. For each n N,
(n)
define the sequence x(n) by xk = 1 if k n and 0 if k > n. Then each x(n) c0 , and ||x(n) || = 1,
(n)
so x {x c0 : ||x|| 1}. By sequential compactness and completeness, there exists a subsequence
(x(nk ) )
k=1 which converges to some x {x c0 : ||x|| 1}. Thus as k ,

||x x(nk ) || 0.

In particular, for each j,


(nk )
|xj 1| = lim |xj xj | = 0.
k

Hence xj = 1. But then limj xj = 1, so x


/ c0 , a contradiction.

(n) (n)
(c) Yes. Let x(n) be a sequence in E. Note that |xk | 1 for all k N, so for each k, {xk } has a
convergent subsequence. Thus we may choose a subsequence {y (n,1) }
n=1 of x
(n)
and y1 R such that
(n,1)
lim y = y1 .
n 1

Continuing, as in the proof of the Arzela-Ascoli Theorem, we may construct subsequences y (n) of x(n) and
numbers y1 , y2 , . . . such that
(n)
lim yk = yk
k

for all k. We claim y = (y1 , y2 , . . .) E and that limn y (n) = y.


Fix an integer N . Then


N N
(n)
X X
k|yk | = lim k|yk | 1,
n
k=1 k=1
(n)
since yk E. Letting N , it follows that y E.
Let > 0. Select N large enough so that N1 < . Then choose N1 N such that
(n)
||yk yk || <
(n)
for k = 1, . . . , N and all n N1 . This is valid since limn yk = yk and {1, . . . , N } is finite. Then for
n N1 and k N , we have that
(n) (n)
|yk yk | |yk | + |yk | 1/N + 1/N = 2/N < 2,
(n)
whereas for k < N we have that |yk yk | < by construction. Thus for n N we have that

||y (n) y|| < 2

so that y (n) is a subsequence of x(n) which converges to y. Hence E is sequentially compact and thus
compact.

Spring 2012 #2. Recall that f : [a, b] R is convex if for all x, y [a, b] and [0, 1], f (x + (1 )y)
f (x) + (1 )f (y). Let fn : [a, b] R be convex functions and suppose that f (x) := limn fn (x) exists
at all x [a, b] and is continuous on [a, b]. Prove that fn f uniformly.

Assume for the sake of contradiction that fn does not converge uniformly to f . Let > 0. Pass to a
subsequence where |fn f |sup . Then there exists a sequence (xn ) n=1 with elements in [a, b] such that
|fn (xn ) f (xn )| /2. By compactness of [a, b], there exists a subsequence (also denoted by (xn ) n=1 ) of
(xn )
n=1 which converges to some x [a, b]. We may choose x n to be monotone and without loss of generality,
assume they are non-decreasing.
Now we localize the problem to [a1 , b1 ], where b1 = x. Since f is continuous, there exists c such that

|f (z) c| < /20

for all z [a1 , b1 ] (choose a1 such that this holds).


We will use convexity of the fn to show pointwise convergence fails somewhere. We are given that
fn (a1 ) f (a1 ) and fn (b1 ) f (b1 ) as n . Choose N large enough that

|fn (a1 ) c| /10 and |fn (b1 ) c| /10.

From the construction of xn , for all n,

fn (xn ) f (xn ) /2.

Consider the line from (a1 , c + /10) to (b1 , c /4). This line intersects the horizontal line at c /10
at some point z (a1 , b1 ). By convexity of fn , fn (z) must lie below this line on [a1 , xn ], so choosing n large
enough that z < xn ,
fn (z) < c /10.
Since this holds for each n and fn (z) f (z), taking n yields

f (z) c /10.

But this contradicts |f (z) c| < /20 above. Hence fn does converge uniformly to f .


Fixed point

Spring 2008 #1. Let g C([a, b]), with a g(x) b for all x [a, b]. Prove the following:

(i) g has at least one fixed point p in the interval [a, b].

(ii) If there is a value < 1 such that

|g(x) g(y)| |x y|

for all x, y [a, b], then the fixed point is unique, and the iteration

xn+1 = g(xn )

converges to p for any initial guess x0 [a, b].

(i) Note that g(x) x is continuous on [a, b], non-negative at x = a and non-positive at x = b. Thus by
the intermediate value theorem, there exists some p [a, b] such that g(p) p = 0, or g(p) = p.
(ii) Suppose there exists < 1 such that

|g(x) g(y)| |x y|

for all x, y [a, b]. Suppose both p, q [a, b] were fixed points of g. Then if p 6= q,

|p q| = |g(p) g(q)| |p q| < |p q|,

a contradiction. Hence p = q, so the fixed point of g is unique.


Let x0 [a, b] be an initial guess and consider the iterated sequence

xn+1 = g(xn )

defined for all n 0. We first show that (xn )


n=0 converges. Follow the procedure below to show this is a
Cauchy sequence and converges to a fixed point of g. Since the fixed point of g is unique, it must be that
(xn )
n=0 converges to p.

Fall 2009 #2. (i) Let X be a complete metric space with respect to a distance function d. We say that a
map T : X → X is a contraction if for some 0 < θ < 1 and all x, y ∈ X:

d(T(x), T(y)) ≤ θ d(x, y).

Prove that if T is a contraction then it has a fixed point, i.e., there is an x ∈ X such that T(x) = x.

(ii) Using (i) show that given a differentiable function f : R → R whose first derivative satisfies f'(x) = e^{−x²} − e^{−x⁴}, there exists α ∈ R with f(α) = α.

(i) Let T : X X be a contraction. We assume X is non-empty, so there exists x0 X. Inductively


define x_{n+1} = T(x_n) for all n ≥ 0. A simple induction shows that for any natural number n,

d(x_{n+1}, x_n) = d(T^{n+1}(x_0), T^n(x_0)) ≤ θ^n d(T(x_0), x_0).

Thus for any natural numbers n, m,

d(x_n, x_{n+m}) ≤ Σ_{i=0}^{m−1} d(x_{n+i}, x_{n+i+1}) ≤ Σ_{i=0}^{m−1} θ^{n+i} d(x_0, T(x_0))

= θ^n d(x_0, T(x_0)) Σ_{i=0}^{m−1} θ^i = θ^n (1 − θ^m)/(1 − θ) d(x_0, T(x_0)) ≤ θ^n/(1 − θ) d(x_0, T(x_0)).

Let ε > 0. Choose N so that θ^N/(1 − θ) d(x_0, T(x_0)) < ε. Then for any n, m ≥ N, we have shown

d(x_n, x_m) ≤ θ^n/(1 − θ) d(x_0, T(x_0)) ≤ θ^N/(1 − θ) d(x_0, T(x_0)) < ε.

Thus the sequence (x_n)_{n=0}^∞ is a Cauchy sequence. Because X is complete, this sequence converges to some x* ∈ X.
Since T is Lipschitz continuous on X, T is continuous on X (given ε > 0, take δ = ε/θ). Recall for all n ≥ 0 we defined

x_{n+1} = T(x_n).

Thus taking the limit as n → ∞ on both sides and using the continuity of T,

x* = lim_{n→∞} T(x_n) = T(x*).

Hence x* is a fixed point of T.


(ii) Note that e^{−x²} and e^{−x⁴} are even functions, so it suffices to bound |f'(x)| for x ≥ 0. For x ∈ [0, 1], x⁴ ≤ x², so e^{−x⁴} ≥ e^{−x²}, and

|e^{−x²} − e^{−x⁴}| = e^{−x⁴} − e^{−x²} ≤ 1 − e^{−1} = 1 − 1/e.

For x > 1, x⁴ > x², hence −x⁴ < −x², so e^{−x²} − e^{−x⁴} > 0 and

|e^{−x²} − e^{−x⁴}| = e^{−x²} − e^{−x⁴} ≤ e^{−x²} ≤ e^{−1} = 1/e ≤ 1 − 1/e.

Thus for all x ∈ R,

|f'(x)| = |e^{−x²} − e^{−x⁴}| ≤ 1 − 1/e.
Let x, y R. By the mean value theorem, there exists z (x, y) such that

f (x) f (y) = f 0 (z)(x y).

Thus

|f(x) − f(y)| ≤ |f'(z)||x − y| ≤ (1 − 1/e)|x − y|.

Thus f is a contraction, so by part (i), there exists α ∈ R with f(α) = α.
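
As an illustration of the fixed-point iteration behind part (i) (not part of the exam solution), here is a minimal sketch with an explicit contraction T(x) = cos(x)/2 on R, chosen only for demonstration:

# Banach fixed-point iteration x_{n+1} = T(x_n) for an illustrative
# contraction T(x) = cos(x)/2 (|T'(x)| <= 1/2 < 1 for all x).
import math

def T(x):
    return math.cos(x) / 2.0

x = 0.0
for n in range(30):
    x = T(x)
print(x, T(x))   # x and T(x) agree to machine precision: a fixed point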

Fall 2011 #1. Let (X, d) be a compact metric space and let f : X X be a map satisfying

d(f (x), f (y)) < d(x, y), for all x, y X with x 6= y.

Prove that there is a unique point x X so that f (x) = x.

Define φ : X → R by φ(x) = d(x, f(x)). For any x, y ∈ X,

φ(y) = d(y, f(y)) ≤ d(y, x) + d(x, f(x)) + d(f(x), f(y)) ≤ 2d(x, y) + φ(x),

thus

φ(y) − φ(x) ≤ 2d(x, y).

By symmetry, this implies

|φ(y) − φ(x)| ≤ 2d(x, y).

Let ε > 0. Then for any x, y ∈ X with d(x, y) < ε/2,

|φ(x) − φ(y)| ≤ ε,

so φ is a continuous function.
Since X is compact and φ is continuous, there exists x_0 ∈ X such that φ(x_0) ≤ φ(x) for all x ∈ X. Suppose for the sake of contradiction that φ(x_0) ≠ 0. Then x_0 ≠ f(x_0), so

φ(f(x_0)) = d(f(x_0), f(f(x_0))) < d(x_0, f(x_0)) = φ(x_0),

a contradiction. Hence φ(x_0) = 0, so d(x_0, f(x_0)) = 0, and f(x_0) = x_0. Thus f has a fixed point.
Suppose there exist x, y X with f (x) = x and f (y) = y. If x 6= y, then

d(x, y) = d(f (x), f (y)) < d(x, y),

a contradiction. Hence x = y, so there is a unique point x X so that f (x) = x.

Spring 2011 #12. Fall 2004 #5; Fall 2003 #7; Fall 2001 #3. (see text). Given a metric space
M , and a constant 0 < r < 1, a continuous function T : M M is said to be an r-contraction if it is a
continuous map and d(T (x), T (y)) < rd(x, y) for all x 6= y. A well-known fixed point theorem states that
if M is complete and T an r-contraction, then it must have a unique fixed point (don't prove this). This
result is often used to prove the existence of solutions of differential equations with initial conditions.

1. Illustrate this technique for the (trivial) case

f 0 (t) = f (t), f (0) = 1

by letting M be the space of continuous functions C([0, c]) for 0 < c < 1 with the uniform distance

d(f, g) = sup{|f (t) g(t)|},


Rx
and defining (T f )(x) = 1 + 0
f (t)dt. Carefully explain your steps.

2. What approximations do you obtain from the sequence

T (0), T 2 (0), T 3 (0), . . .?

1. Let 0 < c < 1 and M be the space of continuous functions C([0, c]) with the distance

d(f, g) = sup {|f (t) g(t)|}.


t[0,c]
Rx
Define T : M M by (T f )(x) = 1 + 0 f (t)dt. For some f M , since f is continuous on a compact
set there exists B > 0 such that |f | B on [0, c]. Let > 0. Then for any x, y [0, c] with x y and
|x y| /B,
Z y Z y
|(T f )(y) (T f )(x)| = | f (t)dt| Bdt = B(y x) B(/B) = .
x x

Hence T f is continuous, so T f M , thus T is well-defined.


It is well known that M is complete.
Let (fn )
n=1 be a Cauchy sequence in (M, d). Let > 0. Then there exists an integer N 1 such that
for all n, m N ,
d(fn , fm ) .


In particular, for any t [0, c],


|fn (t) fm (t)| .
Hence (fn (t))
n=1 is a Cauchy sequence for each t [0, c]. Since R is complete, we can define f : [0, c] R
by
f (t) = lim fn (t).
n

We now show that (fn )


n=1 converge to f uniformly with respect to d. Then since the limit of a sequence
of continuous functions that converges uniformly is a continuous function, f is continuous, so f M . Thus
we will have shown that M is complete.
Let > 0. Choose N so that for all n, m N , d(fm , fn ) . Then for any t [0, c],

|f (t), fn (t)| |f (t) fm (t)| + |fm (t) fn (t)| |f (t) fm (t)| + d(fm , fn ) |f (t) fm (t)| + .

Letting m ,
|f (t), fn (t)| .
Taking the sup of both sides over all t [0, c],

d(f, fn ) .

Hence (fn )
n=1 converges to f uniformly with respect to d.
Let > 0. Select N so that d(fN , f ) /3. Since fN is continuous, there exists > 0 so that if
|x y| , then |fN (x) fN (y)| /3. For any x, y [0, c] with |x y| ,

|f (x) f (y)| |f (x) fn (x)| + |fn (x) fn (y)| + |fn (y) f (y)| 2d(f, fn ) + |fn (x) fn (y)| 2/3 + /3 = .

Hence f is continuous.
We now show T is continuous. Let > 0 and suppose f, g M with d(f, g) /c. Then
Z t
d(T f, T g) = sup |(T f )(t) (T g)(t)| = sup | f (t) g(t)dt| c sup (f g) = c(d(f, g)) c(/c) = .
t[0,c] t[0,c] 0 t[0,c]

Thus T is continuous.
Let f, g M with f 6= g. It follows that
 Z t 

d(T f, T g) = sup {|(T f )(t) (T g)(t)|} = sup
f (z) g(z)dz c sup |f (z) g(z)| = c(d(f, g)).
t[0,c] t[0,c] 0 z[0,c]

Thus T is a (c + )-contraction mapping. (Choose so that c + < 1.) By the given fixed point theorem,
there exists a fixed point F of T .
We now show F satisfies the differential equation and initial condition. Clearly
Z 0
F (0) = (T F )(0) = 1 + F (t)dt = 1.
0

By the fundamental theorem of calculus, for any x [0, c],


Z x
0 0 d
F (x) = (T F ) (x) = (1 + F (t)dt) = F (x).
dx 0

Thus F 0 (x) = F (x) for all x [0, c]. Hence F is a solution to the given differential equation on [0, c].

2. Here we start with the constant 0 function, which is in M , and iterate T . The proof of the given
fixed point theorem shows that iterating T on any function in M will converge to the fixed point. Hence the
sequence T (0), T 2 (0), T 3 (0), . . . converges to the solution F found in part 1.
As part of the proof, we obtain that for any natural numbers n, m,
1 cm
d(T n (0), T n+m (0)) cn d(T (0), 0).
1c


Thus
1 cm
d(F, T n (0)) d(F, T n+m (0)) + d(T n+m (0), T n (0)) d(F, T n+m (0)) + cn d(T (0), 0).
1c
Note that d(T (0), 0) = d(1, 0) = 1. Letting m ,

cn cn
d(F, T n (0)) d(T (0), 0) = .
1c 1c
Thus for any t ∈ [0, c],

|F(t) − (T^n(0))(t)| ≤ c^n/(1 − c).
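
As a sketch of part 2 (an illustration, not required by the problem): representing each iterate of T by its polynomial coefficients shows that T(0), T²(0), ... are exactly the Taylor partial sums of e^x.

# (T f)(x) = 1 + \int_0^x f(t) dt, starting from the zero function.
# A polynomial is stored as its list of coefficients [c0, c1, c2, ...].

def T(coeffs):
    # integrate term by term, then set the constant term to 1
    integrated = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integrated[0] = 1.0
    return integrated

f = []              # the zero function
for _ in range(5):
    f = T(f)
    print(f)        # [1], [1, 1], [1, 1, 0.5], [1, 1, 0.5, 1/6], ...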

Spring 2007 #7. Let f : R R be a twice continuously differentiable function with f 00 uniformly bounded,
and with a simple root at x (i.e., f (x ) = 0, f 0 (x ) 6= 0). Consider the fixed point iteration

f (x)
xn = F (xn1 ) where F (x) = x .
f 0 (x)

Show that if x0 is sufficiently close to x , then there exists a constant C so that for all n,

|xn x | C|xn1 x |2 .

Note f' is continuous and f'(x*) ≠ 0, so |f'| is bounded away from zero in some open neighborhood U of x*; say 0 < C < |f'(x)| < D there. Now suppose x_0 ∈ U and by induction that x_1, . . . , x_{n−1} ∈ U.
By the mean value theorem, there exists y_{n−1} between x* and x_{n−1} such that

|x_n − x*| = |F(x_{n−1}) − x*| = |F(x_{n−1}) − F(x*)| = |x_{n−1} − x*| |F'(y_{n−1})|.

Now
f (x)f 00 (x)
F 0 (x) = .
(f 0 (x))2
Thus
|f (yn1 )||f 00 (yn1 )|
|xn x | = |xn1 x | .
|f 0 (yn1 )|2
Apply the mean value theorem again to obtain zn1 between x and yn1 such that f (yn1 ) f (x ) =
(y x )f 0 (zn1 ). Since f (x ) = 0, we obtain

|yn1 x ||f 0 (zn1 )||f 00 (yn1 )|


|xn x | = |xn1 x | .
|f 0 (yn1 )|2

Note yn1 , zn1 U . Thus


|f 0 (zn1 )| D
2.
|f 0 (yn1 )|2 C
Say the second derivative is uniformly bounded by M . It follows that
DM
|xn x | |xn1 x |2 .
C2
If we force |x_{n−1} − x*| ≤ C²/(DM), then |x_n − x*| ≤ |x_{n−1} − x*|, hence x_n ∈ U, completing the induction.
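
A small numerical illustration of the quadratic convergence just proved; the function f(x) = x² − 2 and the starting point are illustrative choices, not part of the problem:

# Newton iteration x_n = F(x_{n-1}) with F(x) = x - f(x)/f'(x), applied to
# f(x) = x^2 - 2 (simple root x* = sqrt(2)); the error is roughly squared
# at each step, matching |x_n - x*| <= C |x_{n-1} - x*|^2.
import math

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
x, root = 1.0, math.sqrt(2.0)
for n in range(5):
    x = x - f(x) / df(x)
    print(n, abs(x - root))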


Inverse Function Theorem, Implicit Function Theorem

Inverse Function Theorem. Let f : U Rn Rn be a continuously differentiable function which has


invertible derivative at a point p (the Jacobian determinant of f at p is non-zero). Then the inverse function
f 1 exists and is continuously differentiable in some neighborhood of f (p). Also,
Jf 1 (f (p)) = [Jf (p)]1 .

Implicit Function Theorem. Let f : Rn+m Rm be a continuously differentiable function, and let Rn+m
have coordinates (x, y). Fix a point (a, b) = (a1 , . . . , an , b1 , . . . , bm ) with f (a, b) = c, where c Rm . If the
matrix [(fi /yj )(a, b)] is invertible, then there exists an open set U containing a, an open set V containing
b, and a unique continuously differentiable function g : U V such that
{(x, g(x)) : x U } = {(x, y) U V : f (x, y) = c}.

Fall 2001 #6. Suppose that F : R2 R2 is a continuously differentiable function with F ((0, 0)) = (0, 0)
fi
and with the Jacobian of F at (0, 0) equal to the identity matrix (i.e., if F = (f1 , f2 ) then xj
|(0,0) = 1 if
2 2
i = j and = 0 if i 6= j). Outline a proof that there exists > 0 such that if a + b < , then there is a
point (x, y) in R2 with F (x, y) = (a, b). (Your argument will be part of the proof of the Inverse Function
Theorem. You may use any basic estimation you need about the change in F being approximated by the
differential of F without proof.)

See standard proof of the Inverse Function Theorem.

Fall 2004 #7. Observe that the point P = (1, 1, 1) belongs to the set S of points in R3 satisfying the
equations
x4 y 2 + x2 z + yz 2 = 3.
Explain carefully how, in this case, the Implicit Function Theorem allows us to conclude that there exists
a differentiable function g(x, y) such that (x, y, g(x, y)) lie in S for all (x, y) in a small open set containing
(1, 1).

Define f : R³ → R by f(x, y, z) = x⁴y² + x²z + yz². Here n = 2 and m = 1, with a = (1, 1) and b = 1. Note that f(a, b) = f(1, 1, 1) = 3 and f is continuously differentiable. The relevant matrix of partial derivatives is

[(∂f/∂z)(a, b)] = [x² + 2yz]|_{(1,1,1)} = [3],

which is clearly invertible. Thus by the Implicit Function Theorem, there exists an open set U containing a = (1, 1), an open set V containing b = 1, and a unique continuously differentiable function g : U → V such that

{(x, y, g(x, y)) : (x, y) ∈ U} = {(x, y, z) ∈ U × V : f(x, y, z) = 3} ⊆ {(x, y, z) : f(x, y, z) = 3} =: S.
Thus (x, y, g(x, y)) lie in S for all (x, y) U , where U is a small open set containing (1, 1).
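
As an illustration (not part of the solution), the implicitly defined g can be computed numerically; the bisection bracket [0.5, 1.5] below is an assumption that is valid for (x, y) near (1, 1):

# Numerically recovering z = g(x, y) from x^4 y^2 + x^2 z + y z^2 = 3
# near (1, 1, 1) by bisection in z.
def F(x, y, z):
    return x**4 * y**2 + x**2 * z + y * z**2 - 3.0

def g(x, y, lo=0.5, hi=1.5, tol=1e-12):
    # assumes F(x, y, lo) < 0 < F(x, y, hi), true for (x, y) near (1, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(x, y, lo) * F(x, y, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(g(1.0, 1.0))                       # approximately 1
print(F(1.02, 0.99, g(1.02, 0.99)))      # approximately 0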

Spring 2006 #4. Instead use


x2 y 3 + x3 z + 2yz 4 = 4.
Prove that there exists a differentiable function g(x, y) defined in an open neighborhood N of (1, 1) in R2
such that g(1, 1) = 1 and (x, y, g(x, y)) lies in S for all (x, y) N .

Define f(x, y, z) = x²y³ + x³z + 2yz⁴. Note f(1, 1, 1) = 4 and f is continuously differentiable. The matrix of partial derivatives under consideration is

[(∂f/∂z)(1, 1, 1)] = [x³ + 8yz³]|_{(1,1,1)} = [9],


which is clearly invertible. Hence the Implicit Function Theorem guarantees the existence of an open set
N containing (1, 1), an open set V containing 1, and a unique continuously differentiable function g(x, y)
defined in U such that

{(x, y, g(x, y)) : (x, y) N } = {(x, y, z) N V : f (x, y, z) = 4} {(x, y, z) : f (x, y, z) = 4} =: S.

Since f (1, 1, 1) = 4, then g(1, 1) = 1.

Spring 2007 #11. (a) Consider the equations

u³ + xv − y = 0,    v³ + yu − x = 0.

Can these equations be solved uniquely for u, v in terms of x, y in a neighborhood of x = 0, y = 1, u = 1, v = −1? Explain your answer.

(b) Give an example in which the conclusion of the implicit function theorem is true, but the hypothesis is
not.

(a) Define f : R⁴ → R² by f_1(x, y, u, v) = u³ + xv − y and f_2(x, y, u, v) = v³ + yu − x. Note that f(0, 1, 1, −1) = (0, 0) and f is continuously differentiable. The matrix of partial derivatives under consideration is

[(∂f_i/∂(u, v))(0, 1, 1, −1)] = [ ∂f_1/∂u  ∂f_1/∂v ; ∂f_2/∂u  ∂f_2/∂v ](0, 1, 1, −1) = [ 3  0 ; 1  3 ],

which has determinant 9 and is thus invertible. Hence the implicit function theorem guarantees the existence of an open set U containing (0, 1), an open set V containing (1, −1), and a unique continuously differentiable function g(x, y) = (g_1(x, y), g_2(x, y)) such that

{(x, y, g_1(x, y), g_2(x, y)) : (x, y) ∈ U} = {(x, y, u, v) ∈ U × V : f(x, y, u, v) = 0}.

Thus the equations can be solved uniquely in a neighborhood of x = 0, y = 1, u = 1, v = −1.

(b) Consider solving F (x, y) = y 3 x for y near (0, 0). The known solution is y = x1/3 , but the implicit
function theorem fails since the generated matrix is singular.

Spring 2010 #9. Assume that f (x, y, z) is a real-valued, continuously differentiable function such that
f (x0 , y0 , z0 ) = 0. If f (x0 , y0 , z0 ) 6= 0, show that there is a differentiable surface, given parametrically by
(x(s, t), y(s, t), z(s, t)) with (x(0, 0), y(0, 0), z(0, 0)) = (x0 , y0 , z0 ), on which f = 0.

Suppose without loss of generality that f /z(x0 , y0 , z0 ) 6= 0. Then the Implicit Function Theorem
guarantees the existence of an open set U R2 containing (x0 , y0 ), an open set V R containing z0 , and a
continuously differentiable function g : U V such that

{(x, y, g(x, y)) : (x, y) U } = {(x, y, z) : f (x, y, z) = 0}.

Set s = x and t = y. Thus taking x(s, t) = s, y(s, t) = t, and z(s, t) = g(s, t) = g(x, y), it follows that

{(x(s, t), y(s, t), z(s, t)) : (s, t) U }

is a differentiable surface with (x(0, 0), y(0, 0), z(0, 0)) = (x0 , y0 , z0 ) on which f = 0.

Fall 2002 #6. Suppose F : R3 R2 is continuously differentiable. Suppose for some v0 R3 and x0 R2
that F (v0 ) = x0 and F 0 (v0 ) : R3 R2 is onto. Show that there is a continuously differentiable function
: (, ) R3 for some > 0, such that
(i) 0 (0) 6= 0 R3 , and


(ii) F ((t)) = x0 for all t (, ).

This problem is an immediate consequence of the implicit function theorem. Without loss of generality,
assume x0 = (0, 0). Let v0 = (v1 , v2 , v3 ). Let M be the matrix representation of F 0 (v0 ). Since F 0 (v0 ) is
surjective, two columns of M are linearly independent. Assume without loss of generality that these are the
last two columns of M . Then the matrix consisting of the last two columns of M is invertible, so we can apply
the inverse function theorem. Thus there are continuously differentiable functions f, g : (v1 , v1 + ) R
such that
F (t, f (t), g(t)) = 0
for all t (v1 , v1 + ). Defining (t) = (v1 + t, f (v1 + t), g(v1 + t)) for all t (, ). Then F ((t)) =
(0, 0) = x0 for all t (, ). Also, the final entry of 0 (0) is 1, so it is non-zero.

Spring 2002 #6. Suppose f : R3 R is a continuously differentiable function with grad f 6= 0 at 0 R3 .


Show that there are two other continuously differentiable functions g : R3 R, h : R3 R such that the
function
(x, y, z) (f (x, y, z), g(x, y, z), h(x, y, z))
from R3 to R3 is one-to-one on some neighborhood of 0.

Since f (0) is nonzero, {f (0)} is linearly independent and may be extended with two vectors v =
(v1 , v2 , v3 ) and w = (w1 , w2 , w3 ) to a basis for R3 . Define g(x, y, z) = (x, y, z) (v1 , v2 , v3 ) and h(x, y, z) =
(x, y, z) (w1 , w2 , w3 ) so that g = v and h = w. In particular, the derivative of (f, g, h) at 0 has linearly
independent columns f (0), v, and w, so it is invertible. The Inverse Function Theorem then guarantees
that (f, g, h) is invertible in a neighborhood of 0 (hence one-to-one).

Fall 2005 #4. Suppose F : [0, 1] [0, 1] is a C 2 function with F (0) = 0, F (1) = 0, and F 00 (x) < 0 for
all x [0, 1]. Prove
that the arc length of the curve {(x, F (x)) : x [0, 1]} is less than 3. (Suggestion:
Remember that a2 + b2 < |a| + |b| when you are looking at the arc length formula - and at a picture of
what {(x, F (x)} could look like.)

The arc length of the curve {(x, F (x)) : x [0, 1]} is given by
Z 1 p Z 1 Z 1
1 + (F 0 (x))2 dx < 1 + |F 0 (x)|dx = 1 + |F 0 (x)|dx.
0 0 0

Since F 00 (x) < 0 for all x [0, 1], F 0 is decreasing on [0, 1]. From F (0) = F (1) = 0, we conclude that F
increases until it reaches its maximum, then decreases back to 0. Thus splitting the integral into two parts,
to the left and right of the maximum occurring at c ∈ (0, 1),
Z 1 Z c Z 1 Z c Z 1
|F 0 (x)|dx = |F 0 (x)|dx+ |F 0 (x)|dx = F 0 (x)dx F 0 (x)dx = F (c)F (0)(F (1)F (c)) = 2F (c).
0 0 c 0 c

By hypothesis, F (c) 1. Thus the arc length is less than 3.


Infinite sequences and series

Spring 2009 #1. Set a1 = 0 and define a sequence {an } via the recurrence

a_{n+1} = √(6 + a_n) for all n ≥ 1.

Show that this sequence converges and determine the limiting value.

We first show by induction that 0 ≤ a_n ≤ a_{n+1} ≤ 3 for all n ≥ 1. For the base case n = 1, a_1 = 0 and a_2 = √6 ≤ 3, so 0 ≤ a_1 ≤ a_2 ≤ 3. Now assume inductively that 0 ≤ a_n ≤ a_{n+1} ≤ 3. It follows that a_{n+2} = √(6 + a_{n+1}) ≤ √(6 + 3) = 3. Also, a_{n+2} ≥ √(6 + 0) ≥ 0. Finally,

a_{n+2} = √(6 + a_{n+1}) ≥ √(6 + a_n) = a_{n+1}.

Thus 0 ≤ a_{n+1} ≤ a_{n+2} ≤ 3, completing the induction. Hence the sequence (a_n)_{n=1}^∞ is increasing and bounded above by 3, so L := lim_{n→∞} a_n exists.
Sending n → ∞ in

a_{n+1} = √(6 + a_n)

and using that the square root function is continuous,

L = √(6 + L),

hence L² − L − 6 = 0, and L = 3 or L = −2. Clearly L ≥ 0, so we conclude L = 3.
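
A quick numerical illustration of the convergence (standard library Python only):

# Iterating a_{n+1} = sqrt(6 + a_n) from a_1 = 0; the terms increase
# monotonically toward the limit L = 3.
import math

a = 0.0
for n in range(1, 11):
    a = math.sqrt(6.0 + a)
    print(n, a)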

Fall 2011 #4. Compute



X (1)n
.
n=1
n2

We start from the well-known sum

Σ_{n=1}^∞ 1/n² = π²/6.

This implies that the given series is absolutely convergent. We can then derive its sum.

Σ_{n=1}^∞ 1/(2n)² = (1/4) Σ_{n=1}^∞ 1/n² = π²/24.

Thus

Σ_{n=1}^∞ 1/(2n − 1)² = Σ_{n=1}^∞ 1/n² − Σ_{n=1}^∞ 1/(2n)² = π²/6 − π²/24 = 3π²/24.

Therefore,

Σ_{n=1}^∞ (−1)ⁿ/n² = Σ_{n=1}^∞ 1/(2n)² − Σ_{n=1}^∞ 1/(2n − 1)² = π²/24 − 3π²/24 = −π²/12.

Spring 2013 #11. Define the Fibonacci sequence Fn by F0 = 0, F1 = 1, and recursively, Fn = Fn1 +Fn2
for n = 2, 3, 4, . . ..

(a) Show that the limit as n goes to infinity of Fn /Fn1 exists and find its value.

(b) Prove that F_{2n+1} F_{2n−1} − F_{2n}² = 1 for all n ≥ 1.

(a) The Fibonacci recurrence is F_{n+2} − F_{n+1} − F_n = 0 for all n ≥ 0, F_0 = 0, F_1 = 1.
We guess that a solution takes the form F_n = λⁿ. This implies λ² − λ − 1 = 0, so we obtain the roots

λ_1 = (1 + √5)/2 and λ_2 = (1 − √5)/2.

The general solution is a linear combination of these. Using the initial conditions, we derive

F_n = (λ_1ⁿ − λ_2ⁿ)/√5.

This implies

F_n/F_{n−1} = (λ_1ⁿ − λ_2ⁿ)/(λ_1^{n−1} − λ_2^{n−1}).

As n → ∞, since |λ_2| < 1, λ_2ⁿ → 0, thus

F_n/F_{n−1} → λ_1ⁿ/λ_1^{n−1} = λ_1 = (1 + √5)/2.

(b) This can be shown by a simple induction (easiest in the form F_{n+1} F_{n−1} − F_n² = (−1)ⁿ).
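
A short numerical illustration of both parts (illustration only):

# Fibonacci ratios F_n / F_{n-1} approach (1 + sqrt(5)) / 2, and the
# identity F_{2n+1} F_{2n-1} - F_{2n}^2 = 1 holds for the computed terms.
import math

phi = (1.0 + math.sqrt(5.0)) / 2.0
F = [0, 1]
for n in range(2, 21):
    F.append(F[-1] + F[-2])
print(F[20] / F[19], phi)                                        # ratio vs golden ratio
print(all(F[2*n+1] * F[2*n-1] - F[2*n]**2 == 1 for n in range(1, 10)))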

Winter 2006 #1. Show that for each > 0 there exists a sequence of intervals (In ) with the properties

[
X
Q In and |In | < .
n=1 n=1

Enumerate the rationals with the sequence (r_n)_{n=1}^∞. Define I_n = [r_n, r_n + ε/2^{n+1}]. Then Q ⊆ ∪_{n=1}^∞ I_n and

Σ_{n=1}^∞ |I_n| = Σ_{n=1}^∞ ε/2^{n+1} = ε/2 < ε,

as desired.
as desired.

Spring 2003 #2. Prove: If a1 , a2 , a3 , . . . is a sequence of real numbers with


+
X
|aj | < +,
j=1
PN
then limN + j=1 aj exists.

For each j 1, define pj = 21 (|aj | aj ) and qj = 12 (|aj | aj ). Then pj + qj = |aj | and pj qj = aj


for each j. Note alsoP that 0 pj P|a j | and 0 qj |aj | for each j. Thus by the squeeze theorem, since
P
j=1 |aj | converges, j=1 pj and j=1 qj converge. It follows that


X
X
an = (pj qj )
j=1 j=1

converges to

X
X
pj qj ,
j=1 j=1
PN P
so limN + j=1 aj = j=1 aj exists.

P
Spring
P 2010 #11. Suppose n=1 |an |P < . Let be a one-to-one mapping ofPN onto N. The series

n=1 a(n) is called a rearrangement of n=1 an . Prove that all rearrangements of n=1 an are convergent
and have the same sum.


P P
By the above exercise, n=1 an is convergent. Define S := n=1 an . Let > 0. It follows that
PN
( n=1 an )
N =1 is a Cauchy sequence, thus there exists N such that for all M N ,
M
X M
X N
X
| an | = | an an | /2.
n=N +1 n=1 n=1

In addition, increase N as necessary such that


N
X
| an S| /2.
n=1

Now choose N 0 sufficiently large that {1, . . . , N } {a(1) , . . . , a(N 0 ) }. Fix m N 0 . Let

M = max{(k) : 1 k m}.

It follows that
N
X m
X M
X
| an a(k) | | |an || /2.
n=1 k=1 n=N +1

Then
m
X m
X N
X N
X
| a(k) S| | a(k) an | + | an S| /2 + /2 = .
k=1 k=1 n=1 n=1
P
Hence k=1 a(k) exists and equals S.

Fall 2001 #2; Fall 2008 #5. Let N denote the positive integers, let an = (1)n n1 , and let be any real
number. Prove there is a one-to-one and onto mapping : N N such that

X
a(n) = .
n=1

We proceed for a general series Σ_{n=1}^∞ a_n which is conditionally convergent but not absolutely convergent. The given series is conditionally convergent by the alternating series test, but not absolutely convergent. This is clear from the comparison

Σ_{n=1}^∞ |a_n| = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + · · · ≥ 1 + 1/2 + 1/2 + · · · ,

since the sum on the right does not converge.


Define p_j = (|a_j| + a_j)/2 and q_j = (|a_j| − a_j)/2. Then p_j = a_j and q_j = 0 if a_j is non-negative, while q_j = −a_j and p_j = 0 if a_j is negative. If both Σ_{j=1}^∞ p_j and Σ_{j=1}^∞ q_j converge, then Σ_{j=1}^∞ |a_j| = Σ_{j=1}^∞ (p_j + q_j) converges, a contradiction. Thus at least one of Σ_{j=1}^∞ p_j and Σ_{j=1}^∞ q_j diverges. Suppose Σ_{j=1}^∞ p_j diverges. If Σ_{j=1}^∞ q_j converges, then Σ_{j=1}^∞ a_j = Σ_{j=1}^∞ (p_j − q_j) would diverge, contradicting the convergence of Σ_{j=1}^∞ a_j. Hence both Σ_{j=1}^∞ p_j and Σ_{j=1}^∞ q_j diverge.
Re-index the p_j and q_j to eliminate the zero terms which do not correspond to some a_j, keeping the remaining terms in their original order. Clearly Σ_{j=1}^∞ p_j and Σ_{j=1}^∞ q_j still diverge.
P
Suppose > 0 (the other cases are similar). Let Pj and Qj , j 1 be the partial sums of j=1 pj
P P
and j=1 qj respectively. Select the smallest N1 such that PN1 > . (Such an N1 exists since j=1 pj is
divergent and positive.) Then select the smallest N2 such that PN1 QN2 < . Next, select N3 > N1 such
that PN3 QN2 > . By the zero test for sequences, pj and qj approach 0 as j . Thus it follows that
when continuing this procedure,
PNk QNk+1
approaches . Define the bijection : N N such that a(1) = p1 , . . . , a(N1 ) = pN1 , a(N1 +1) =
PN P
q1 , . . . , a(N2 ) = qN2 , . . .. Then by construction, j=1 a(j) converges to as N , so n=1 a(n) = .
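
A sketch of the rearrangement procedure for the concrete series Σ(−1)ⁿ/n with an illustrative target α = 0.4 (the target and the cutoffs below are arbitrary choices, not from the problem):

# Rearrange sum (-1)^n / n so that its partial sums approach alpha:
# add positive terms while the partial sum is below alpha, otherwise
# add negative terms, and repeat.
alpha = 0.4
pos = (1.0 / n for n in range(2, 10**7, 2))     # +1/2, +1/4, +1/6, ...
neg = (-1.0 / n for n in range(1, 10**7, 2))    # -1, -1/3, -1/5, ...
s = 0.0
for _ in range(10000):
    s += next(pos) if s <= alpha else next(neg)
print(s)   # close to alpha = 0.4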


Fall 2003 #3. Prove that the sequence a1 , a2 , . . . with


 n
1
an = 1 +
n
converges as n .
By comparing Σ_{k=1}^∞ 1/k! with Σ_{k=1}^∞ 2^{−k+1}, which converges, we see that the former series converges. By the binomial theorem,

a_n = (1 + 1/n)ⁿ = Σ_{k=0}^n C(n, k) (1/n)^k
= Σ_{k=0}^n (1/k!) · n(n − 1) · · · (n − k + 1)/n^k
= Σ_{k=0}^n (1/k!) (1 − 1/n)(1 − 2/n) · · · (1 − (k − 1)/n) ≤ 1 + Σ_{k=1}^∞ 1/k!.

For each fixed k, the factor (1 − 1/n) · · · (1 − (k − 1)/n) is nondecreasing in n, and the number of terms in the sum grows with n, so the sequence (a_n) is nondecreasing. Since it is also bounded above, lim_{n→∞} a_n exists and does not exceed 1 + Σ_{k=1}^∞ 1/k!.
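
A quick numerical illustration that a_n increases toward e (illustration only):

# a_n = (1 + 1/n)^n increases toward its limit e = 2.71828...
import math

for n in [1, 10, 100, 1000, 10**6]:
    print(n, (1.0 + 1.0 / n)**n)
print(math.e)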

Fall 2004 #1; Spring 2005 #9. Consider the following two statements:
(A) The sequence (an ) converges.
(B) The sequence ((a1 + a2 + + an )/n) converges.
Does (A) imply (B)? Does (B) imply (A)? Prove your answers. (Summer 2005 #9 indicates that (A) implies
(B) but (B) does not imply (A) in general).

The sequence an = (1)n converges in the sense of (B) but not in the sense of (A). Thus (B) does not
imply (A).
We show that (A) implies (B). Suppose the sequence (an ) n=1 converges to s. Let > 0. Select N such
that for all n N , |an s| /2. Select N 0 > N such that

|a1 s| + + |aN s|
/2.
N0
Then for any n N 0 ,
a1 + + an |a1 s| + + |aN s| |aN +1 s| + + |an s|
| s| = +
n n n
|a1 s| + + |aN s| (n N )
+ /2 + /2 = .
N0 2n
a1 ++an
Thus n converges to s.

PN
Spring 2012 #4. For a sequence {an } of non-negative numbers, let sn := n=1 an and suppose that sn
tends to a number s R in the Cesaro sense:
s1 + + sn
s = lim .
n n
P
Show that k=1 ak exists and equals s.
P
Suppose for the sake of contradiction that k=1 ak diverges. Then Pn since all ak are positive, it must
diverge to +. Thus there exists N such that for all n N , sn = k=1 an 10s. Then for any positive
integer M ,
s1 + + sN +M s1 + + sN sN +1 + + sN +M (10s)M
= + .
N +M N +M N +M N


Letting MP , this means s is as large as desired, a contradiction.



Thus k=1 ak converges. By the previous problem,

s1 + + sn X
s = lim = ak .
n n
k=1

Spring 2004 #1. Let S denote the set of sequences a = (a1 , a2 , . . .), with ak = 0 or 1. Show that the
mapping : S R defined by
a1 a2
((a1 , a2 , . . .)) = + +
10 102
is an injection. Include an explanation of why the infinite series converges. Hint: if a 6= b, you may assume
that
a = (a1 , . . . , an1 , 0, an+1 , . . .),
b = (b1 , . . . , bn1 , 1, bn+1 , . . .).

For any a,
a1 a2 1 12 1 9 1
(a) = + 2 + + = / = ,
10 10 10 10 10 10 9
thus (a) is finite.
Suppose a 6= b. Assume without loss of generality that

a = (a1 , . . . , an1 , 0, an+1 , . . .),

b = (b1 , . . . , bn1 , 1, bn+1 , . . .).


Then

X bi ai X 1 1 1 1
|(b) (a)| 10n + i
10 n
i
= 10n n+1 / = n+1 .
i=n+1
10 i=n+1
10 10 9 10

Hence (b) 6= (a), so is injective.


P
Spring 2006 #2. Let F (x) = n=0 an xn be a power series with an R. Show that there exists a unique
number 0 such that F (x) converges if |x| < and F (x) diverges if |x| > .

Suppose R such that F () converges. We first show that F (x) converges for all |x| < ||. Let x R
with |x| < . Since a convergent sequence is bounded, there exists B such that |an n | B for all n. Define
= |x|/||. Then

X
X
X
an |x|n an n ||n B n ,
n=0 n=0 n=0

which converges, thus F (x) converges.


Let = sup{x 0 : F (x) converges}. Clearly 0 is in this set, so the supremum is non-negative. By the
above result, F (x) converges if |x| < . If = +, then F converges everywhere. Otherwise, is some
finite non-negative real number. Suppose for the sake of contradiction that F (x) converges for some x with
|x| > . Then |x| is a member of {x 0 : F (x) converges }, but |x| > , a contradiction. Hence F (x)
diverges if |x| > .
P
Winter 2006 #2. Let (an )n1 be a decreasing sequence of positive numbers such that n=1 an = .
Under what condition(s) is the function

X
f (x) = (1)n an xn
n=1


well-defined and left-continuous at x = 1? Carefully prove your assertion.


P n
Suppose limn an = 0. Then by the alternating series test, f (1) converges. Since f (1) = n=1 (1) an
converges, this series is Abel-summable, so f is left-continuous at x = 1.
P
Fall 2007 #8. Suppose an 0 and n=1 an = . Does it follow that

X an
= ?
n=1
1 + an

Prove your answer.

Yes. Suppose first that L := lim supn an 6= 0. Since the an are non-negative, L > 0. Let N be a
positive integer. Then there exists n > N such that an > L/2. It follows that
an
an > L/2.
1 + an
an
Hence the limit of 1+a n
as n either does not exist, or does not equal 0, so the sum in question does
not converge. Since the summands are positive, the series must diverge to +.
Otherwise, L := lim supn an = 0. Let > 0. Then there exists N such that for all n N , an < .
Thus for all n N ,
an an
> .
1 + an 1+
Note that

X an 1 X
= an = +,
1+ 1+
n=N n=N
P an
thus n=1 1+an must also diverge to +, as desired.

Fall 2010 #4. (a) Show that given a real-valued continuous function f on [0, 1] [0, 1] and an > 0, there
exist real-valued continuous functions g1 , . . . , gn and h1 , . . . , hn on [0, 1] for some finite n 1 so that

X n
f (x, y) gi (x)hi (y) , 0 x, y 1.


i=1

(b) If f (x, y) = f (y, x) for all 0 x, y 1, can this be done with hi = gi for each i? Explain.

(a) Let > 0. Since f is continuous on a compact set, it is uniformly continuous. Thus there exists a
positive integer N such that if |(x, y) (x0 , y 0 )| N1 , then |f (x, y) f (x0 , y 0 )| /2.
Let ψ : R → R be given by

ψ(x) = 1 + x for x ∈ [−1, 0], ψ(x) = 1 − x for x ∈ [0, 1], ψ(x) = 0 otherwise.

Define the real-valued continuous functions g_i(x) = ψ(Nx − i) for 0 ≤ i ≤ N. Consider the function

f̃(x, y) = Σ_{i=0}^N g_i(x) f(i/N, y).

For any (x, y) [0, 1], it is straightforward to show from continuity of f that

|f (x, y) f(x, y)| /2.


By the Weierstrass Approximation Theorem in 1 dimension, there exists polynomials (real-valued con-
tinuous functions) h0 , . . . , hN such that for each 1 i N and any y [0, 1],

|hi (y) f (i/N, y)| /4N.

It follows that
N
X N
X
|f (x, y) gi (x)hi (y)| |f (x, y) f(x, y)| + |f(x, y) gi (x)hi (y)|
i=0 i=0

N
X
= |f (x, y) f(x, y)| + | gi (x)(f (i/N, y) hi (y))|
i=0

/2 + N (/2N ) = .
Adjust indices as necessary to obtain the desired expression.
Pn Pn
(b) No. Take some f which is negative on (1, 1). Then if gi = hi , i=1 gi (1)hi (1) = i=1 gi2 (1) is always
non-negative, so it cannot come arbitrarily close to f (1, 1). Hence such an approximation is not possible.

Fall 2012 #1. Let {bn } be a sequence of real numbers with bounded partial sums, i.e., there is M <
PNn=1
N , | n=1 bn | M , and let {an }
such that for all P n=1 be a sequence of positive numbers decreasing to 0.
Prove the series an bn converges.
Pn
We use summation by parts. Let sn denote the partial sums of j=1 bj . For any positive integers N, M ,

M
X M
X M
X M
X
aj bj = aj (sj sj1 ) = aj sj aj sj1
j=N j=N j=N j=N

M
X M
X 1
= aj sj aj+1 sj
j=N j=N 1

M
X 1
= aM sM aN sN 1 + (aj aj+1 )sj .
j=N

Thus
M
X M
X 1
| aj bj | M (aM + aN ) + M (aj aj+1 )
j=N j=N

= M (aM + aN + aN aM ) 2M aN .
Letting N , since aN decreases to 0,
M
X
aj bj 0
j=N
P P
as N, M . Hence the partial sums of an bn form a Cauchy sequence, so the series n=1 an bn converges.
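
As an illustration of the result (not part of the solution), take a_n = 1/n, which decreases to 0, and b_n = sin(n), whose partial sums are bounded; the value (π − 1)/2 in the comment is the known sum of Σ sin(n)/n.

# Dirichlet-type convergence check for sum a_n b_n with a_n = 1/n, b_n = sin(n).
import math

s, partial_b = 0.0, 0.0
for n in range(1, 200001):
    partial_b += math.sin(n)          # stays bounded
    s += math.sin(n) / n
print(s, (math.pi - 1.0) / 2.0)       # partial sum vs known limit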

Spring 2013 #4. Denote by hn the n-th harmonic number:


1 1 1
hn = 1 + + + + .
2 3 n
Prove that there is a limit
= lim (hn ln n).
n


We show x_n := h_n − ln n is decreasing and bounded below; thus the desired limit exists.
Recall ln x = ∫_1^x (1/y) dy for all x ≥ 1. Thus h_n is an upper Riemann sum for ln n, so h_n − ln n ≥ 0, and x_n is bounded below by 0.
Fix n ≥ 1. Applying the mean value theorem to ln on [n, n + 1], there exists c ∈ (n, n + 1) such that

ln(n + 1) − ln(n) = (1/c)((n + 1) − n) = 1/c.

Thus

x_{n+1} − x_n = 1/(n + 1) − (ln(n + 1) − ln(n)) = 1/(n + 1) − 1/c < 0,

since c < n + 1. Hence x_n is decreasing.
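
A quick numerical illustration: h_n − ln n approaches the Euler–Mascheroni constant γ ≈ 0.5772 (illustration only):

# h_n - ln(n) decreases toward the Euler-Mascheroni constant.
import math

h = 0.0
for n in range(1, 10**6 + 1):
    h += 1.0 / n
print(h - math.log(10**6))   # approximately 0.57721...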

Spring 2006 #3. Prove that the series



X sin(nx)
f (x) =
n=1
n5/2

converges for all x R and that f (x) is a continuous function on R with a continuous derivative. State
clearly any facts you assume.
P
We first show that j=1 sin(jx)
j 5/2
converges uniformly. Note that the terms are bounded by j 5/21
, and
P 1
j=1 j 5/2 < by the integral test. Thus the series converges by the Weierstrass M-test to a continuous
function. Also, the derivatives are continuous and the sum of the derivatives converge uniformly by the
Weierstrass M-test (again using the integral test).


Partial Derivatives

Fall 2001 #5; Spring 2002 #5; Winter 2002 # 6; Spring 2003 #6; Fall 2005 #2. Suppose that
f : R2 R is a continuous function such that the partial derivatives f f
x and y exist everywhere and are
   
f f
continuous everywhere, and x y and y x also exist, and are continuous everywhere. Prove that
   
f f
=
x y y x

at every point of R2 .
 
f
Here I only assume y x exists. Define

(h, k) := f (h, k) f (h, 0) f (0, k) + f (0, 0).

Also let g(x) := f (x, k) f (x, 0). Applying the mean value theorem twice, there exist ah (0, h) and
bk (0, k) such that

f f 2f
(h, k) = g(h) g(0) = g 0 (ah )h = ( (ah , k) (ah , 0))h = (ah , bk )hk.
x x yx
2f
Thus since yx is continuous,

(h, k) 2f 2f
lim lim = lim lim (ah , bk ) = (0, 0).
h0 k0 hk h0 k0 yx yx
 
f 2f
Hence x y exists and equals yx (0, 0).

Spring 2008 #5. Fall 2012 #6. (a) Let F (x, y) be a continuous function on the plane such that for every
square S having its sides parallel to the axes,
Z Z
F (x, y)dxdy = 0.
S

Prove F (x, y) = 0 for all (x, y).


   
(b) Assume f (x, y), fx
(x,y)
, y f (x,y)
x , and
x
f (x,y)
y are all continuous in the plane. Use part (a) to
prove that    
f (x, y) f (x, y)
= .
y x x y
R R
Hint: you may assume R Rthe double integral in (a) equals the iterated integral ( F (x, y)dx)dy and equals
the iterated integral ( F (x, y)dy)dx.

Suppose for the sake of contradiction that F(x*, y*) > 0 for some point (x*, y*) (the case F(x*, y*) < 0 is handled the same way). By continuity there exists δ > 0 such that if |(x, y) − (x*, y*)| ≤ δ, then |F(x, y) − F(x*, y*)| ≤ F(x*, y*)/2. Take S to be the square with opposite corners (x* − δ/√2, y* − δ/√2) and (x* + δ/√2, y* + δ/√2). Then within S, |(x, y) − (x*, y*)| = (|x − x*|² + |y − y*|²)^{1/2} ≤ δ, hence |F(x, y) − F(x*, y*)| ≤ F(x*, y*)/2, so

F(x, y) ≥ F(x*, y*) − F(x*, y*)/2 = F(x*, y*)/2.

It follows that

∫∫_S F(x, y) dx dy ≥ ∫∫_S F(x*, y*)/2 dx dy > 0,

a contradiction. Thus F(x, y) = 0 for all (x, y).

(b) Let S be a square with its sides parallel to the axes. We compute

2f 2f 2f 2f
Z Z Z Z Z Z
dxdy = dxdy dxdy = 0
S xy yx S xy S yx

where the final step follows from the fundamental theorem of calculus. Thus applying part (a) to the
2f 2f 2f 2f
continuous function xy yx , we find xy yx = 0.

Fall 2002 #5. Suppose f : R2 R has partial derivatives at every point bounded by A > 0.

(a) Show that there is an M > 0 such that

|f ((x, y)) f ((x1 , y1 ))| M ((x x1 )2 + (y y1 )2 )1/2 .

(b) What is the smallest value of M (in terms of A) for which this always works?

(c) Give an example where that value of M makes the inequality an equality.

(a) Using the mean value theorem,

|f (x, y) f (x1 , y1 )| |f (x, y) f (x1 , y)| + |f (x1 , y) f (x1 , y1 )| A|(x, y) (x1 , y)| + A|(x1 , y) (x1 , y1 )|
p
A 2 (x x1 )2 + (y y1 )2 .
Here we use the inequality p
a+b 2 a 2 + b2

for the last inequality. So setting M = A 2, we have the desired inequality.

0 0
(b) & (c) Use the
example f (x, y) = x + y. Here A = 1 and taking (x, y) = (1, 1), (x , y ) = (0, 0), we
obtain the bound A 2. Thus this is strict.

Spring 2003 #4. Consider the following equation for a function F (x, y) on R2 :

2F 2F
2
= .
x y 2

(a) Show that if a function F has the form F (x, y) = f (x + y) + g(x y) where f : R R and g : R R
are twice differentiable, then F satisfies this equation.

(b) Show that if F (x, y) = ax2 +bxy+cy 2 , a, b, c R, satisfies this equation, then F (x, y) = f (x+y)+g(xy)
for some polynomials f and g in one variable.

(a) Straightforward with chain rule.

(b) Then 2a = 2c, so a = c. Write F (x, y) = a(x2 + y 2 ) + bxy = A(x + y)2 + B(x y)2 . Then

a = A + B, b = 2(A B)

can be inverted. Hence we can take f (x + y) = A(x + y)2 and g(x y) = B(x y)2 .


Differentiation

Spring 2004 #5. Suppose that G is an open set in Rn , f : G Rm is a function, and that x0 G.

(a) Carefully define what is meant by f 0 (x0 ) : Rn Rm .

(b) Suppose that I is a line segment in G connecting points p and q such that f 0 (x) is defined for all
x I. Show that if f is differentiable at all the points of I, then for some point c I,

||f (q) f (p)||2 ||f 0 (c)||||q p||2 .

Hint: let w be a unit vector with ||f (q) f (p)||2 = (f (q) f (p)) w.

(a) f 0 (x0 ) is the unique linear map from Rn to Rm such that

||f (x) (f (x0 ) + (f 0 (x0 ))(x x0 ))||


lim = 0.
xx0 ;xG\{x0 } ||x x0 ||

(b) If f (q) = f (p), the desired inequality is obvious. Otherwise, let

f (q) f (p)
w= .
||f (q) f (p)||2

Then
||f (q) f (p)||2 = (f (q) f (p)) w.
Define g : G R by
g(x) := f (x) w.
Then
g(q) g(p) = (f (q) f (p)) w = ||f (q) f (p)||2
and
g 0 (x)(u) = (f 0 (x)(u)) w
for all x I, u Rn . By the mean value theorem from Rn to R, there exists c I such that

g(q) g(p) = g 0 (c)(q p) = (f 0 (c)(q p)) w.

Thus by the Cauchy-Schwarz inequality,

||f (q) f (p)||2 = g(q) g(p) = (f 0 (c)(q p)) ||f 0 (c)||2 ||q p||2 .

Winter 2002 #5; Spring 2009 #7. (a) Let f : U Rk be a function on an open set U Rn . Define
what it means for f to be differentiable at a point x U .

(b) State carefully the Chain Rule for the composition of differentiable functions of several variables.

(c) Prove the Chain Rule you stated in (b).

(a) f is differentiable at x U if there exists a linear map f 0 (x) : Rk Rn such that

||f (y) (f (x) + f 0 (x)(y x))||2


lim = 0.
yx;yU \{x} ||y x||2


(b) Let U Rk , V Rm , and consider f : U V and g : V Rn . Let x0 U and suppose that f is


differentiable at x0 and g is differentiable at g(x0 ). Then g f is differentiable at x0 and as a composition
of linear maps,
(g f )0 (x0 ) = g 0 (f (x0 )) f 0 (x0 ).

(c) Let Lf : Rk Rm and Lg : Rm Rn be the functions

Lf (x) = f (x) + f 0 (x0 )(x x0 ), Lg (x) = g(x) + g 0 (x0 )(x x0 ).

We must show that


||(g f )(x) (Lg Lf )(x)|| ||x x0 ||
for all x in some neighborhood of x0 . By the triangle inequality,

||(g f )(x) (Lg Lf )(x)|| ||(g f )(x) (Lg f )(x)|| + ||(Lg f )(x) (Lg Lf )(x)||.

We shall handle the two terms on the right separately.


For the first term, fix some M > ||f 0 (x0 )||. Since g is differentiable at f (x0 ), there exists r such that if
||y f (x0 )|| r, then

||g(y) Lg (y)|| ||y f (x0 )||.
2M
Since f is differentiable at x0 , it is continuous at x0 , so there exists 1 > 0 such that if ||x x0 || 1 ,
then ||f (x) f (x0 )|| r. Finally, since f is differentiable at x0 , there exist M > 0 and 2 > 0 such that if
||x x0 || 2 , then
||f (x) f (x0 )|| M ||x c||.
Thus for any x with ||x x0 || min(1 , 2 ), taking y = f (x) above,

||(g f )(x) (Lg f )(x)|| ||f (x) f (x0 )|| ||x x0 ||.
2M 2
For the second term, since f is differentiable at x0 , there exists 3 > 0 such that if ||x x0 || < 3 , then

||f (x) Lf (x)|| ||x x0 ||.
2||Lg ||

Thus

||(Lg f )(x) (Lg f )(x0 )|| ||Lg ||||f (x) Lf (x)||
||x x0 ||.
2
Combining the bounds yields the desired inequality to show that g f is differentiable at x0 with derivative
Lg Lf .

Spring 2010 #10. Let f (x, y) be the function defined by


xy
f (x, y) = p
x2 + y 2

when (x, y) 6= (0, 0) with f (0, 0) = 0.

(a) Compute the directional derivatives of f (x, y) at (0, 0) in all directions where they exist.

(b) Is f (x, y) differentiable at (0, 0)? Prove your answer.

(a) Let u = (u1 , u2 ) be a unit vector. Then the directional derivative of f at (0, 0) in the direction of u,
if it exists, is
f (hu) f (0, 0) h2 u1 u2 h
lim = lim = lim u1 u2 .
h0 h h0 h|h| h0 |h|


In particular, the directional derivatives of f along the x and y axes exist and equal 0. But since the
limit for negative h does not equal the limit for positive h otherwise, all other directional derivatives of f do
not exist.

(b) No. Suppose for the sake of contradiction that f is differentiable at (0, 0). Then since the x and y
directional derivatives of f are 0, the derivative of f must be the 0 linear map. For any x, y > 0,
|f (x, y) (f (0, 0) + 0)| |xy|
= 2 .
||(x, y) (0, 0)||2 x + y2
1
Thus approaching (0, 0) by (x, x) as x 0, this quotient is 2, which does not approach 0. Hence f is not
differentiable at (0, 0).

Spring 2011 #10. Suppose that f is a function defined on an open subset G of R2 and that (x0 , y0 ) G.

1. Define what it means for f to be differentiable at (x0 , y0 ).

2. Show that if f x and


f
y exist and are continuous on an open set containing (x0 , y0 ), then f is differentiable
at (x0 , y0 ) G.

1. f is differentiable at (x0 , y0 ) if there exists a linear map Df such that


||f (x, y) (f (x0 , y0 ) + (Df )(x x0 , y y0 ))||2
lim = 0.
(x,y)(x0 ,y0 ) ||(x, y) (x0 , y0 )||2
2. We focus on the case where f maps into the real numbers. Let > 0. Using continuity of the first
partials, select > 0 such that if ||(x, y) (x0 , y0 )|| , then

f
(x, y) f (x0 , y0 ) and f (x, y) f (x0 , y0 ) .

x x y y

Fix x, y such that ||(x, y) (x0 , y0 )|| . By the Mean Value Theorem, there exists some x lying between
x and x0 and y lying between y and y0 such that
f
f (x, y) f (x, y0 ) = (x, y )(y y0 )
y
and
f
f (x, y0 ) f (x0 , y0 ) = (x , y0 )(x x0 ).
x
Thus
f f
f (x, y) (f (x0 , y0 ) + (x x0 ) (x0 , y0 ) + (y y0 ) (x0 , y0 ))
x y
   
f f f f
= (x x0 ) (x , y0 ) (x0 , y0 ) + (y y0 ) (x, y ) (x0 , y0 ) .
x x y y
The expressions in braces have norm less than , hence by the Cauchy-Schwarz inequality, the norm of this
expression is less than 2||(x, y) (x0 , y0 )||. Thus f is differentiable at (x0 , y0 ).

Fall 2003 #2. Let f : R R be an infinitely often differentiable function. Assume that for each element
x [0, 1] there is a positive integer m such that the m-th derivative of f at x is not zero. Prove that there
exists an integer M such that the following stronger statement holds: For each element x [0, 1], there is a
positive integer m with m M such that the m-th derivative of f at x is not zero.

For each positive integer n, let


Sn = {x [0, 1] : there exists 0 < m n with f (m) (x) 6= 0}.


Since f is infinitely differentiable,


S its derivatives are continuous, so the sets Sn are a union of open sets and
hence open. By assumption, n1 Sn = [0, 1]. But [0, 1] is compact, so there exists some n1 < < nk such
Sk
that [0, 1] = i=1 Sni . Set M = nk . Note the Sn are increasing with n, hence

[0, 1] = SM ,

as desired.

Fall 2007 #2. Let f : (a, b) R be continuous and differentiable in (a, b) \ {c}. If limxc f 0 (x) = d R,
show that f is differentiable at c, and f 0 (c) = d.

I assume c (a, b). Let > 0. Select > 0 such that if x (a, b) with |x c| , then

|f 0 (x) d| .

Let 0 < 0 < /2. Choose z (a, b) with |z c| 0 < /2. Since f is differentiable at z, there exists 1 > 0
such that if |x z| 1 ,
|f (x) (f (z) + f 0 (z)(x z)| .
For any x (a, b) with |x c| min( 0 /2, 1 /2), we have |x z| 1 and

|f (x) (f (c) + d(x c))| |f (x) (f (z) + f 0 (z)(x z))| + |f (z) f (c)| + |f 0 (z)(x z) d(x z)| + |d(z c)|

+ + d 0 .
Choosing and 0 small enough, we see that f is differentiable at c with f 0 (c) = d.

Fall 2007 #9. Suppose un : R R is differentiable and solves

u0n (x) = F (un (x), x),

where F is continuous and bounded.

(a) Suppose un u uniformly. Show that u is differentiable and solves

u0 (x) = F (u(x), x).

(b) Suppose
u0 (x) = F (u(x), x), u(x0 ) = y0
has a unique solution u : R R and un (x0 ) converges to y0 as n . Show that un uniformly converges
to u.

(a) The given ODE is equivalent, via the fundamental theorem of calculus to the following integral
equation: Z x
un (x) = un (x0 ) + F (un (t), t)dt.
x0

If we show that u(x) also satisfies this integral equation, then by the fundamental theorem of calculus, it is
differentiable with derivative satisfying the original ODE. Let > 0 and fix x 6= x0 . Since F is continuous,
there exists < /3 such that if |u v|sup < , then |F (u, x) F (v, x)| /(3(x x0 )). From uniform
convergence, there exists N such that if n N , |un u|sup . Thus for all n N ,

|F (un , x) F (u, x)| /3.


Also, for any x, |u(x) un (x)| < . Therefore,


Z x Z x
|u(x) u(x0 ) F (u(t), t)dt| = |u(x) un (x) + un (x0 ) u(x0 ) + (F (un (t), t) F (u(t), t))dt|
x0 x0
Z x
|u(x) un (x)| + |u(x0 ) un (x0 )| + |F (un (t), t) F (u(t), t)|dt
x0

/3 + /3 + (x x0 ) = .
3(x x0 )
Thus u(x) satisfies the integral equation and hence the given ODE.

(b) It suffices to show the un converge uniformly, since part (a) then implies the un converge to some
solution u0 which satisfies the given system. By the assumption of uniqueness, u = u0 , so the un converge to
u.
We want to use the Arzela-Ascoli Theorem on the un , so we need them to be equicontinuous and uniformly
bounded. Consider the compact set [M, M ]. Since F is bounded (say by K),
Z y
|un (y) un (x)| |F (un (t), t)|dt K|y x|.
x

Thus the un are equicontinuous.


Let > 0. Since the un (x0 ) converge to y0 , there exists N such that for all n N , |un (x0 ) y0 | .
Then the above integral inequality is enough to guarantee that the un are uniformly bounded.
So the un , along with any subsequence unk , satisfy the hypothesis of the Arzela-Ascoli theorem on
[M, M ], so every subsequence of the un has a uniformly convergent sub-subsequence. Therefore, the un
are uniformly convergent on [M, M ].
NOTE: This does not obviously imply they converge to u on [M, M ]. The statement to prove could be
false. We would need the guarantee of a unique solution on every closed interval with x0 as an endpoint.

Spring 2007 #8. Suppose the functions fn are twice continuously differentiable on [0, 1] and satisfy

lim fn (x) = f (x) for all x [0, 1], and


n

|fn0 (x)| 1, |fn00 (x)| 1 for all x [0, 1], n 1.


Prove that f (x) is continuously differentiable on [0, 1].

The Mean Value Theorem plus Arzela-Ascoli applied to f_n and f_n' gives uniform convergence of both sequences (along subsequences; the full sequences then converge uniformly since their pointwise limits are already determined). Uniform convergence of continuous functions to a continuous function then implies f and lim_n f_n' are continuous. Tao's Theorem 14.7.1 (differentiating a uniform limit when the derivatives converge uniformly) implies f' = lim_n f_n', so f' is continuous and f is continuously differentiable.


Riemann integration

Fall 2003 #4; Fall 2010 #2; Spring 2007 #9. Let f : R R be a continuous function. State the
definition of the Riemann integral Z 1
f (x)dx
0
and prove that it exists.

First, we require that f is bounded on [0, 1] to be Riemann integrable. We define


Z 1 Z 1
f = sup{ g : g a piecewise constant function on [0, 1] which minorizes f }
0 0

and
Z 1 Z 1
f = inf{ g : g a piecewise constant function on [0, 1] which majorizes f },
0 0
integrate piecewise constant functions in the obvious way, and say that f is Riemann integrable if
Z 1 Z 1
f= f.
0 0

Suppose f : R → R is continuous. Let ε > 0. Since f is continuous on the compact set [0, 1], f is bounded and uniformly continuous there. Choose δ > 0 such that if |x − y| ≤ δ, then |f(x) − f(y)| ≤ ε/2. Choose N such that 1/N ≤ δ. Form the equally spaced partition x_0 = 0 < x_1 < · · · < x_N = 1 with spacing 1/N. Define f̄ : [0, 1] → R so that f̄(x) = f(x_n) + ε/2 for x ∈ [x_n, x_{n+1}) and f̄(1) = f(1). Then f̄ is piecewise constant and majorizes f. Likewise, define a minorizing piecewise constant function f_ by f_(x) = f(x_n) − ε/2 on [x_n, x_{n+1}) and f_(1) = f(1). We compute

∫_0^1 f̄ − ∫_0^1 f_ = Σ_{n=0}^{N−1} ε · (1/N) = ε.

Since ε was arbitrary, f is integrable.

Fall 2012 #2. Spring 2013 #1. Let f be a bounded, non-decreasing function on the closed interval [0, 1].
R1
Prove that 0 f (x)dx exists.

Define the evenly-spaced partition 0 = x0 < < xN = 1. Note that f : [0, 1] R given by f (x) =
f (xn+1 ) on (xn , xn+1 ] and f (0) = f (0) is piecewise constant and majorizes f . Likewise, define f by the left
endpoint. Then the integral of f f is a telescoping series which tends to 0 as N .

Fall 2007, #11. Let f be a bounded real function on [0, 1]. Show that f is Riemann integrable if and only
if f 3 is Riemann integrable.

Note f 3 is bounded on [0, 1]. Use majoring piecewise constant functions of f to construct majorizing
piecewise constants of f 3 , and the identity

f 3 (x) f 3 (y) = (f (x) f (y))(f 2 (x) + f (x)f (y) + f 2 (y)).

Rb
Spring 2011 #9. Prove that if f (x) is a continuous function on [a, b] and f (x) 0, then a
f (x) = 0
implies that f = 0.


Standard.

Fall 2011 #5. Give an example of a function f (x) on [0, 1] with infinitely many discontinuities, but which
is Riemann integrable. Include proof (dont just quote some theorem).

Just make something monotone increasing. Use 1/2n - sized intervals.

Fall 2010 #11. Find the function g(x) which minimizes


Z 1
|f 0 (x)|2 dx
0

among smooth functions f : [0, 1] R with f (0) = 0 and f (1) = 1. Is the optimal solution g(x) unique?

In general, the Euler-Lagrange equation provides such optimizing functions.


Note that

∫_0^1 (f'(x) − 1)² dx = ∫_0^1 (f'(x))² dx − 2∫_0^1 f'(x) dx + 1 = ∫_0^1 (f'(x))² dx − 1,

since ∫_0^1 f'(x) dx = f(1) − f(0) = 1. The left-hand side is non-negative, and it equals 0 only when f'(x) ≡ 1, i.e., f(x) = x. Thus g(x) = x is the unique minimizer of

∫_0^1 (f'(x))² dx.
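
A discrete sanity check of this conclusion (illustration only, with an arbitrary random perturbation): among grid functions with the same endpoints, the straight line has the smallest discrete Dirichlet energy.

# Compare the discrete analogue of int |f'|^2 for the straight line and a
# perturbed path with the same endpoints f(0) = 0, f(1) = 1.
import random

N = 100
dx = 1.0 / N

def energy(values):
    return sum(((values[i + 1] - values[i]) / dx) ** 2 * dx for i in range(N))

straight = [i / N for i in range(N + 1)]
random.seed(0)
perturbed = [v + (0.05 * random.random() if 0 < i < N else 0.0)
             for i, v in enumerate(straight)]
print(energy(straight), energy(perturbed))   # 1.0 <= perturbed energy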

Spring 2011 #8; Fall 2008 #3; Spring 2008 #2. Give examples:

1. A function f (x) on [0, 1] which is not Riemann integrable, for which |f (x)| is Riemann integrable.
R1
2. Continuous functions fn and f on [0, 1] such that fn (t) f (t) for all t [0, 1] but 0
fn (t)dt does not
R1
converge to 0 f (t)dt.

(a) -1 on rationals, 1 on irrationals.

(b) Small triangle near 0 with integral 1, length 1 / n, height n, converges to 0.

Fall 2002 #4; Spring 2009 #10; Spring 2013 #12. (a) Rigorously justify the following:
∫_0^1 dx/(1 + x²) = lim_{N→∞} Σ_{n=0}^N (−1)ⁿ/(2n + 1).

(b) Deduce the value of 1 − 1/3 + 1/5 − 1/7 + · · · .

(a) For any |x| < 1, the series



X
(1)n x2n
n=0

is absolutely convergent (as a geometric series). Fix a < 1. The partial sums of this series converge on the
compact set [0, a], thus they converge uniformly on [0, a]. Thus for any a [0, 1),
Z a Z
1X Z 1 N
dx n 2n
X
= (1) a = lim (1)n a2n
0 1 + x2 0 n=0 0 N n=0


N
1X N
(1)n 2n+1
Z X
= lim (1)n a2n = lim a .
N 0 n=0 N
n=0
2n + 1
By the alternating series test,

X (1)n
n=0
2n + 1
converges, hence by applying Abels Theorem in the final equality below,

1 a N N
(1)n 2n+1 (1)n
Z Z
dx dx X X
= lim = lim lim a = lim .
0 1 + x2 a1 0 1 + x2 a1 N
n=0
2n + 1 N
n=0
2n + 1

(b) We know ∫_0^1 dx/(1 + x²) = arctan x |_0^1 = arctan 1 − arctan 0 = π/4, thus

1 − 1/3 + 1/5 − 1/7 + · · · = π/4.
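
A quick numerical check of this value (illustration only):

# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ... approach pi/4.
import math

s = 0.0
for n in range(200000):
    s += (-1)**n / (2 * n + 1)
print(s, math.pi / 4.0)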

Winter 2002 #1. (a) State some reasonably general conditions under which this differentiation under
the integral sign formula is valid:
Z d Z d
d f
f (x, y)dy = dy.
dx c c x

(b) Prove that the formula is valid under the conditions you gave in part (a).

f
(a) Let f : [a, b] [c, d] R. Suppose x exists on (a, b) [c, d] and extends to a continuous function on
[a, b] [c, d]. Let
Z b
F (x) = f (x, y)dy.
a
Then Z b
d f
F (x) = (x, y)dy.
dx a x

(b) For h ≠ 0 with x + h ∈ [a,b], we estimate
| (F(x+h) − F(x))/h − ∫_c^d (∂f/∂x)(x,y) dy | = | ∫_c^d [ (f(x+h,y) − f(x,y))/h − (∂f/∂x)(x,y) ] dy |
≤ ∫_c^d | (f(x+h,y) − f(x,y))/h − (∂f/∂x)(x,y) | dy.
By the mean value theorem applied in the first variable, for each y there exists c = c(y) ∈ (0,1) such that
(f(x+h,y) − f(x,y))/h = (∂f/∂x)(x + ch, y).
Since ∂f/∂x is continuous on the compact set [a,b] × [c,d], it is uniformly continuous on this set. Given ε > 0, choose δ > 0 such that ||(x,y) − (x',y')|| ≤ δ implies
| (∂f/∂x)(x',y') − (∂f/∂x)(x,y) | ≤ ε/(d − c).


Then using the above estimate, for |h| < δ,
| (F(x+h) − F(x))/h − ∫_c^d (∂f/∂x)(x,y) dy | ≤ ∫_c^d | (f(x+h,y) − f(x,y))/h − (∂f/∂x)(x,y) | dy
= ∫_c^d | (∂f/∂x)(x + ch, y) − (∂f/∂x)(x,y) | dy ≤ ∫_c^d ε/(d − c) dy = ε.
Since ε > 0 was arbitrary, the difference quotients converge and we have the desired equality F'(x) = ∫_c^d (∂f/∂x)(x,y) dy.

Spring 2004 #3; Spring 2006 #1. Show that if fn are Riemann integrable functions on [0, 1] and fn
converges to f uniformly, then f is Riemann integrable.
Fall 2004 #3. Show that if fn → f uniformly on the bounded closed interval [a,b], then
∫_a^b fn(x) dx → ∫_a^b f(x) dx.

Let ε > 0. There exists N such that for all n ≥ N and all x ∈ [a,b], |fn(x) − f(x)| ≤ ε/2. By definition of the Riemann integral, for each n there is a piecewise constant function f̄n which majorizes fn and satisfies ∫_a^b f̄n ≤ ∫_a^b fn + ε. It follows that f̄n + ε is piecewise constant and majorizes f. Thus for any n ≥ N, the upper integral of f satisfies
∫_a^b f ≤ ∫_a^b (f̄n + ε) = (∫_a^b f̄n) + ε(b − a) ≤ ∫_a^b fn + ε(b − a + 1).
Likewise, using piecewise constant minorants, the lower integral satisfies
∫_a^b f ≥ ∫_a^b fn − ε(b − a + 1).
Letting ε → 0, the upper and lower integrals of f agree, so f is Riemann integrable and
∫_a^b fn → ∫_a^b f.

Spring 2010 #12. Assume that {fn} is a sequence of nonnegative continuous functions on [0,1] such that lim_{n→∞} ∫_0^1 fn(x) dx = 0. Is it necessarily true that

(a) There is a B such that fn(x) ≤ B for x ∈ [0,1] for all n?

(b) There are points x0 in [0,1] such that lim_{n→∞} fn(x0) = 0?


Prove your answers.

(a) No: take triangular spikes of height n and base width 1/n², so that ∫_0^1 fn ≤ n · (1/n²) → 0 while sup fn = n → ∞.

(b) No: take continuous bumps equal to 1 on a dyadic interval [j 2^{−k}, (j+1) 2^{−k}] (with short linear tapers just outside it) and enumerate all such intervals level by level as n increases. Then ∫_0^1 fn → 0, but every x ∈ [0,1] lies in one interval of every level k, so fn(x) = 1 for infinitely many n and fn(x) does not tend to 0 at any point.

Fall 2005 #3. (a) Prove that if fj : [0, 1] R is a sequence of continuous functions which converges
uniformly on [0, 1] to a (necessarily continuous) function F : [0, 1] R then
∫_0^1 F²(x) dx = lim_{j→∞} ∫_0^1 fj²(x) dx.


(b) Give an example of a sequence fj : [0, 1] R of continuous functions which converges to a continuous
function F : [0, 1] R pointwise and for which
lim_{j→∞} ∫_0^1 fj(x)² dx exists but
lim_{j→∞} ∫_0^1 fj²(x) dx ≠ ∫_0^1 F²(x) dx.
(fj converges to F pointwise means that for each x [0, 1], F (x) = limj fj (x)).

(a) Since F is continuous on a compact interval, it is bounded. Since the fj converge to F uniformly, there exists J such that F and all fj with j ≥ J are uniformly bounded, say with values in [−M, M]. Let ε > 0. Choose J' ≥ J such that |fj − F|_sup ≤ ε/(2M) for all j ≥ J'. It follows that
|fj² − F²|_sup ≤ |fj − F|_sup |fj + F|_sup ≤ (ε/(2M))(2M) = ε.
Hence (fj²)_{j=1}^∞ converges uniformly to F². The result follows from the previous exercises (a uniform limit can be integrated term by term).

(b) Let fj be the continuous "tent" function which equals 0 at x = 0, rises linearly to the value √j at x = 1/(2j), falls linearly back to 0 at x = 1/j, and is 0 on [1/j, 1]. Each fj is continuous, fj(0) = 0 for every j, and for x > 0 we have fj(x) = 0 once 1/j < x, so fj → F ≡ 0 pointwise. For a tent of height h on a base of length b one computes ∫ fj² = h²b/3, so here
∫_0^1 fj²(x) dx = j · (1/j)/3 = 1/3.
Thus
lim_{j→∞} ∫_0^1 fj²(x) dx = 1/3 ≠ 0 = ∫_0^1 F²(x) dx.

Spring 2005 #8. Suppose f : R R is C 1 (i.e., continuously differentiable). Show that


lim_{n→∞} Σ_{j=1}^n | f((j−1)/n) − f(j/n) |
is equal to
∫_0^1 |f'(t)| dt.

Note f' is uniformly continuous on [0,1]. Let ε > 0. Let N be large enough that for all x, y ∈ [0,1] with |x − y| < 1/N, we have | |f'(x)| − |f'(y)| | ≤ ε. Fix n ≥ N. By the mean value theorem, for each j there exists x_j ∈ ((j−1)/n, j/n) such that f(j/n) − f((j−1)/n) = (1/n) f'(x_j). Also let |f'| achieve its maximum over [(j−1)/n, j/n] at y_j. Both ∫_0^1 |f'(t)| dt and Σ_{j=1}^n |f(j/n) − f((j−1)/n)| = (1/n) Σ_{j=1}^n |f'(x_j)| lie between (1/n) Σ_j min_{[(j−1)/n, j/n]} |f'| and (1/n) Σ_j |f'(y_j)|, so
| ∫_0^1 |f'(t)| dt − Σ_{j=1}^n | f((j−1)/n) − f(j/n) | | ≤ (1/n) Σ_{j=1}^n ( |f'(y_j)| − min_{[(j−1)/n, j/n]} |f'| ) ≤ (1/n)(ε n) = ε.
Hence
lim_{n→∞} Σ_{j=1}^n | f((j−1)/n) − f(j/n) | = ∫_0^1 |f'(t)| dt.


Fall 2007 #5. (a) Show that, given a continuous function f : [0, 1] R, which vanishes at x = 1, there is
a sequence of polynomials vanishing at x = 1 which converges uniformly to f on [0, 1].

(b) If f is continuous on [0, 1], and


∫_0^1 f(x)(x − 1)^k dx = 0 for each k = 1, 2, . . . ,

show that f (x) is identically 0.

(a) Fix ε > 0. By the Weierstrass Approximation Theorem, there exists a polynomial P(x) such that |P(x) − f(x)| ≤ ε/2 for all x ∈ [0,1]. Consider the polynomials P(x)(1 − xⁿ); clearly these vanish at x = 1. Since f(1) = 0, |P(1)| ≤ ε/2. By continuity of P, there exists δ > 0 such that |P(x)| ≤ ε when |1 − x| < δ. Let M = sup_{[0,1]} |P(x)| and let n be large enough that xⁿ ≤ ε/M on [0, 1 − δ]. Then if x ∈ [0,1], either x > 1 − δ, in which case |xⁿ P(x)| ≤ |P(x)| ≤ ε, or x ≤ 1 − δ, in which case |xⁿ P(x)| ≤ (ε/M)|P(x)| ≤ ε. Hence for any x ∈ [0,1],
|P(x)(1 − xⁿ) − f(x)| ≤ |P(x) − f(x)| + |P(x) xⁿ| ≤ 2ε.
Taking ε = 1/m and the corresponding polynomial for each m yields a sequence of polynomials vanishing at x = 1 which converges uniformly to f on [0,1].

(b) By part (a), select a sequence of polynomials (Pn)_{n=1}^∞ vanishing at x = 1 which converges uniformly to f on [0,1]. Each Pn vanishes at x = 1, so it can be written as a finite linear combination of the polynomials (x − 1)^k for k ≥ 1 (expand Pn in powers of x − 1; the constant term is Pn(1) = 0), hence by hypothesis
∫_0^1 f(x) Pn(x) dx = 0.
Since f is bounded on [0,1], f Pn converges uniformly to f². Thus by uniform convergence,
∫_0^1 f²(x) dx = ∫_0^1 f(x) (lim_{n→∞} Pn(x)) dx = lim_{n→∞} ∫_0^1 f(x) Pn(x) dx = 0.
Since f² is continuous and nonnegative with zero integral, f² ≡ 0 (by the Spring 2011 #9 argument above), so f is identically 0.

Spring 2007 #6. Consider the integral equation


y(t) = y₀ + ∫_0^t f(s, y(s)) ds

where f (t, y) is continuous on [0, T ] R and is Lipschitz in y with Lipschitz constant K. Assume that you
have shown that the iterates defined by
yⁿ(t) = y₀ + ∫_0^t f(s, y^{n−1}(s)) ds,    y⁰(t) ≡ y₀

converge uniformly to a solution y(t) of the given integral equation. Show that if Y (t) is a solution of the
given integral equation and satisfies |Y (t) y0 | C for some constant C and all t [0, T ], then Y (t) agrees
with y(t) on [0, T ].

Let ε > 0. We estimate
|Y(t) − y(t)| ≤ |Y(t) − yⁿ(t)| + |yⁿ(t) − y(t)|.
The second term on the right can be made arbitrarily small, uniformly in t, by uniform convergence, so we focus on the first. Subtracting the integral equations and applying the Lipschitz condition n times,
|Y(t) − yⁿ(t)| ≤ Kⁿ ∫_0^t ∫_0^{s₁} ··· ∫_0^{s_{n−1}} |Y(s_n) − y₀| ds_n ··· ds₁ ≤ C(Kt)ⁿ.


Choosing n arbitrarily large, this implies Y(t) = y(t) for all t ∈ [0, 1/K) ∩ [0, T]. Repeating the previous argument with the initial condition taken at t = i/(2K) for each i as necessary, we obtain Y(t) = y(t) on all of [0, T].

Fall 2010 #10. Suppose f is bounded and Lipschitz continuous. For k N, define xk (t) : [0, 1] R by
xk (0) = 0 and
x_k(t) = x_k(n 2^{−k}) + (t − n 2^{−k}) f(x_k(n 2^{−k}))
for
n 2^{−k} < t ≤ (n + 1) 2^{−k},   n ∈ N.
Explain why x_k(t) uniformly converges to a solution x(t) : [0,1] → R of the ODE
x'(t) = f(x(t)),   x(0) = 0,
as k → ∞.

This is Euler's Method for solving ODEs. (Petersen's solution in class is a less direct, and probably simpler, approach than this.)
Let B be a bound for |f| and suppose |f(x₁) − f(x₂)| ≤ L|x₁ − x₂| for all x₁, x₂ ∈ R.
We first verify that x_k(t) converges to some x(t) as k → ∞. Let t ∈ [0,1]. Fix ε > 0. Select K such that 2^{−K} ≤ ε. If t = 0, then clearly |x_k(t) − x_{k'}(t)| = 0 ≤ ε. Now for m ≥ 0 assume inductively that (x_k(t))_{k=1}^∞ converges uniformly to some x(t) for all t ≤ m 2^{−K}; increase K as necessary so that |x_k(t) − x_{k'}(t)| ≤ ε for all such t and all k, k' ≥ K. Let k, k' ≥ K and let t ≤ (m+1) 2^{−K}. There exist n, n' such that n 2^{−k} < t ≤ (n+1) 2^{−k} and n' 2^{−k'} < t ≤ (n'+1) 2^{−k'}. It follows that
|x_k(t) − x_{k'}(t)| = | x_k(n 2^{−k}) − x_{k'}(n' 2^{−k'}) + (t − n 2^{−k}) f(x_k(n 2^{−k})) − (t − n' 2^{−k'}) f(x_{k'}(n' 2^{−k'})) |
≤ |x_k(n 2^{−k}) − x_{k'}(n' 2^{−k'})| + B |n 2^{−k} − n' 2^{−k'}| + (t − n' 2^{−k'}) |f(x_k(n 2^{−k})) − f(x_{k'}(n' 2^{−k'}))|
≤ ε + B 2^{−K} + 2^{−K} L |x_k(n 2^{−k}) − x_{k'}(n' 2^{−k'})| ≤ ε(1 + B + L).
After finitely many such iterations, we cover all of [0,1]. Hence there exists x(t) such that x_k → x pointwise. The above analysis also shows the sequence is uniformly Cauchy, hence its convergence to x is uniform. Then since the x_k are continuous, x is continuous.
We now verify that x(t) is a solution to the ODE. Clearly x(0) = lim_{k→∞} x_k(0) = 0. Let ε > 0 and fix t ∈ (0,1]. Choose δ for continuity of f at x(t). Select K such that |x_k − x|_sup ≤ ε for all k ≥ K. By continuity of f, we can adjust t by a tiny amount so that t does not take the form n 2^{−k}. Let k ≥ K and select n such that n 2^{−k} < t ≤ (n+1) 2^{−k}. By the definition of x_k,
x_k'(t) = f(x_k(n 2^{−k})).
Let |h| < 2^{−(k+1)}. Then the difference quotient (x_k(t+h) − x_k(t))/h is ε-close to f(x_k(n 2^{−k})). Then
| (x(t+h) − x(t))/h − f(x(t)) | ≤ (1/|h|)( |x(t+h) − x_k(t+h)| + |x(t) − x_k(t)| ) + | (x_k(t+h) − x_k(t))/h − f(x_k(t)) | + | f(x_k(t)) − f(x(t)) |
≤ 2^{k+1} · 2ε + ε + ε.
Hence, sending ε → 0 with k and h chosen accordingly, the limit as h → 0 of the left hand side is 0, so x(t) is differentiable with x'(t) = f(x(t)), as desired.
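The scheme in this problem is the forward Euler method with step size 2^{−k}. A minimal numerical sketch is below; the right-hand side f(x) = cos x (bounded and Lipschitz) is an illustrative assumption, not part of the problem, and a much finer Euler run stands in for the true solution.

import math

def euler_path(f, k, t_end=1.0):
    """Forward Euler iterates x_k on the dyadic grid n*2^-k, with x_k(0) = 0."""
    h = 2.0 ** (-k)
    n_steps = int(round(t_end / h))
    xs = [0.0]
    for _ in range(n_steps):
        xs.append(xs[-1] + h * f(xs[-1]))
    return xs                      # xs[n] approximates x(n * h)

f = math.cos                       # assumed example: bounded, Lipschitz on R
fine = euler_path(f, 16)           # fine run used as a stand-in for x(t)
for k in (2, 4, 8, 12):
    xs = euler_path(f, k)
    stride = 2 ** (16 - k)
    err = max(abs(a - b) for a, b in zip(xs, fine[::stride]))
    print(k, err)                  # the sup-norm gap shrinks roughly like 2^-k

The printed gaps decrease with k, consistent with the uniform convergence argued above.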

Fall 2008 #10. Given v = (v₁, . . . , vₙ) ∈ Rⁿ, we let ||v|| = ( Σ_j |v_j|² )^{1/2}. If f = (f₁, . . . , fₙ) : [a,b] → Rⁿ is a continuous function, we define
∫_a^b f(t) dt = ( ∫_a^b f₁(t) dt, . . . , ∫_a^b fₙ(t) dt ).


Prove that
|| ∫_a^b f(t) dt || ≤ ∫_a^b ||f(t)|| dt.

Let y = (y₁, . . . , yₙ), with y_j = ∫_a^b f_j(t) dt. Then by definition y = ∫_a^b f(t) dt. Note that
||y||² = Σ_{j=1}^n y_j² = Σ_{j=1}^n y_j ∫_a^b f_j(t) dt = ∫_a^b Σ_{j=1}^n y_j f_j(t) dt.
By the Cauchy-Schwarz inequality,
Σ_{j=1}^n y_j f_j(t) ≤ ||y|| ||f(t)||,
thus since the integral is monotonic,
||y||² ≤ ∫_a^b ||y|| ||f(t)|| dt,
so
||y|| ≤ ∫_a^b ||f(t)|| dt.

Fall 2009 #10. (i) Let I = [0,2]. If f : I → R is a continuous function such that ∫_I f(x) dx = 36, prove that there is an x ∈ I such that f(x) = 18.

(ii) Consider I² ⊂ R², and let g : I² → R be a continuous function such that ∫∫_{I²} g(x,y) dx dy = 36. Prove that there is (x,y) ∈ I² such that g(x,y) = 9.
(i) Let h(t) = ∫_0^t f(x) dx. Then by the fundamental theorem of calculus, h is differentiable with h'(x) = f(x). The mean value theorem then ensures there exists x ∈ [0,2] = I with
36 = ∫_I f(x) dx = h(2) − h(0) = h'(x)(2 − 0) = 2 f(x).
Thus f(x) = 18.

(ii) Suppose for the sake of contradiction that there is no (x,y) ∈ I² such that g(x,y) = 9. Since I² is connected and g is continuous, the intermediate value theorem implies that either g(x,y) > 9 for all (x,y) ∈ I² or g(x,y) < 9 for all (x,y) ∈ I². Since I² has area 4, either case contradicts ∫∫_{I²} g(x,y) dx dy = 36.

Spring 2009 #12. Let F : R³ → R³ and φ : R³ → R be smooth functions. Show that
div(F) = φ
for all points (x,y,z) ∈ R³ if and only if
∫∫_{∂Ω} F · dS = ∫∫∫_Ω φ dx dy dz
for all balls Ω (with all radii r > 0 and all possible centers). [You may use the various standard theorems of vector calculus without proof.]

If div(F) = φ everywhere, then for any ball Ω the divergence theorem gives ∫∫_{∂Ω} F · dS = ∫∫∫_Ω div(F) dx dy dz = ∫∫∫_Ω φ dx dy dz. Conversely, suppose the integral identity holds for all balls but div(F)(p) ≠ φ(p) at some point p. By continuity, div(F) − φ has constant sign on some small ball Ω centered at p, and then ∫∫_{∂Ω} F · dS = ∫∫∫_Ω div(F) ≠ ∫∫∫_Ω φ, a contradiction.
For completeness, here is the idea behind the divergence theorem on a region Ω = {(x,y,z) : f₁(x,y) ≤ z ≤ f₂(x,y), (x,y) ∈ D} over a disc D. Write F = (F₁, F₂, F₃) (each Fᵢ smooth) and treat one component; the others follow similarly, and adding the three results gives the theorem. With S₂ the upper graph z = f₂(x,y) and S₁ the lower graph z = f₁(x,y), the fundamental theorem of calculus and the definition of surface integrals give
∫∫∫_Ω (∂F₃/∂z) dz dx dy = ∫∫_D ∫_{f₁(x,y)}^{f₂(x,y)} (∂F₃/∂z) dz dx dy
= ∫∫_D [ F₃(x, y, f₂(x,y)) − F₃(x, y, f₁(x,y)) ] dx dy
= ∫∫_{S₂} (0,0,F₃) · dS + ∫∫_{S₁} (0,0,F₃) · dS = ∫∫_{∂Ω} (0,0,F₃) · dS.

Fall 2010 #12. Let us define D(t) = {x² + y² ≤ r²(t)} ⊂ R², where r(t) : R → R is continuously differentiable. For a given smooth, nonnegative function u(x,t) : R² × R → R, express the following quantity in terms of a surface integral:
(d/dt) ( ∫_{D(t)} u(x,t) dx ) − ∫_{D(t)} u_t(x,t) dx.

[You may use various theorems in Calculus without proof.]

We use Leibniz's Rule for differentiation under the integral sign, which requires the relevant derivatives to exist and be continuous.
First, switch to polar coordinates:
∫_{D(t)} u(x,t) dx = ∫_0^{2π} ∫_0^{r(t)} u(r, θ, t) r dr dθ.
Then by Leibniz's Rule (for a variable upper limit),
(d/dt) ∫_0^{r(t)} u(r, θ, t) r dr = r'(t) r(t) u(r(t), θ, t) + ∫_0^{r(t)} (∂u/∂t)(r, θ, t) r dr.
Differentiating under the outer integral as well, we obtain
(d/dt) ∫_{D(t)} u(x,t) dx = r'(t) r(t) ∫_0^{2π} u(r(t), θ, t) dθ + ∫_{D(t)} u_t(x,t) dx.
Thus the desired difference is
r'(t) r(t) ∫_0^{2π} u(r(t), θ, t) dθ = r'(t) ∫_{∂D(t)} u ds,
that is, r'(t) times the line integral of u over the boundary circle of D(t).


Taylor Series

Fall 2003 #5. Assume f : R2 R is a function such that all partial derivatives of order 3 exist and are
continuous. Write down (explicitly in terms of partial derivatives of f ) a quadratic polynomial P (x, y) in x
and y such that
|f(x,y) − P(x,y)| ≤ C (x² + y²)^{3/2}
for all (x, y) in some small neighborhood of (0, 0), where C is a number that may depend on f but not on x
and y. Then prove the above estimate.

Let
P(x,y) = f(0) + f_x(0) x + f_y(0) y + (1/2)[ f_{xx}(0) x² + f_{xy}(0) xy + f_{yx}(0) xy + f_{yy}(0) y² ].
Since the second partials of f are continuous, f_{xy} = f_{yx} and
P(x,y) = f(0) + f_x(0) x + f_y(0) y + (1/2)[ f_{xx}(0) x² + 2 f_{xy}(0) xy + f_{yy}(0) y² ].
By Taylor's Theorem with Remainder, for any v = (x,y) ∈ R², defining g(t) = f((1−t)·0 + tv) = f(tv), there exists t* ∈ (0,1) such that
R(v) := f(v) − P(v) = g^{(3)}(t*)/3!.
It follows (by the chain rule) that
g^{(3)}(t*)/3! = Σ_{|α|=3} (∂^α f)(t* v) v^α / α!.
Since the third partials of f are continuous, they are bounded on the closed unit ball B̄(0,1); let M be such a bound. Then for any (x,y) ∈ B(0,1),
| g^{(3)}(t*)/3! | ≤ M ( |x|³ + |x|²|y| + |x||y|² + |y|³ ) ≤ 4M max{|x|,|y|}³ ≤ 4M (x² + y²)^{3/2}.
Note M depends solely on f, not on x and y.

Winter 2006 #5. Consider a function f (x, y) which is twice continuously differentiable. Suppose that f
has its unique minimum at (x, y) = (0, 0). Carefully prove that then at (0, 0),
(∂²f/∂x²)(∂²f/∂y²) ≥ (∂²f/∂x∂y)².

[You may use without proof that the mixed partials are equal for C 2 functions.]

Consider the Hessian matrix
H = H(0,0) = [ f_{xx}(0,0)  f_{xy}(0,0) ; f_{yx}(0,0)  f_{yy}(0,0) ].
By Taylor's Theorem,
f(x,y) = f(0,0) + ∇f(0,0) · (x,y) + (1/2) (x,y) H (x,y)ᵗ + R(x,y),
where R(x,y) = o(x² + y²). Since (0,0) is the unique minimum of f, it is a critical point, so ∇f(0,0) = 0. Suppose for the sake of contradiction that det(H) < 0, i.e. f_{xx} f_{yy} < (f_{xy})² at (0,0). Then the symmetric matrix H has a positive eigenvalue λ₁ and a negative eigenvalue λ₂. Choose v₂ ≠ 0 with H v₂ = λ₂ v₂. Then f(t v₂) = f(0,0) + (1/2) t² λ₂ |v₂|² + R(t v₂) < f(0,0) for all small t ≠ 0. Thus (0,0) cannot be the minimum point of f, a contradiction. Hence det(H) ≥ 0, which is the desired inequality.


Fall 2010 #3. Suppose f : R R and g : R2 R have continuous derivatives up to order three.

(a) State Taylors Theorem with remainder for each of f and g.

(b) Using the statement for f , prove the statement for g.

(a) For any x₀, x ∈ R, there exists ξ between x and x₀ such that
f(x) = f(x₀) + f'(x₀)(x − x₀) + (1/2) f''(x₀)(x − x₀)² + (1/3!) f^{(3)}(ξ)(x − x₀)³.
Likewise, for any (x₀,y₀), (x,y) ∈ R², there exists a function h_{(x₀,y₀)} : R² → R such that
g(x,y) = g(x₀,y₀) + g_x (x − x₀) + g_y (y − y₀) + (1/2)( g_{xx} (x − x₀)² + 2 g_{xy} (x − x₀)(y − y₀) + g_{yy} (y − y₀)² ) + Σ_{|α|=3} h_{(x₀,y₀)}(x,y) ((x,y) − (x₀,y₀))^α,
where all partial derivatives of g are evaluated at (x₀,y₀) and lim_{(x,y)→(x₀,y₀)} h_{(x₀,y₀)}(x,y) = 0.

(b) It suffices to verify the statement for g when (x₀,y₀) = 0 (otherwise translate). Let (x,y) ∈ R² and define φ(t) := g(tx, ty). Then φ has continuous derivatives up to order three. By the chain rule,
φ'(t) = ∇g(tx, ty) · (x, y),
and
φ''(t) = (x, y) H(tx, ty) (x, y)ᵗ,
where H is the Hessian of g. By Taylor's Theorem in 1 dimension, there exists a function h₀ : R → R such that lim_{t→0} h₀(t) = 0 and
φ(t) = φ(0) + φ'(0) t + (1/2) φ''(0) t² + (1/3!) h₀(t) t³.
Evaluating at t = 1 and expanding φ'(0) and φ''(0) in terms of the partial derivatives of g at 0, we reproduce Taylor's Theorem in 2 dimensions for g.

Spring 2008 #3. Assuming that f C 4 [a, b] is real, derive a formula for the error of approximation E(h)
when the second derivative is replaced by the finite-difference formula

f''(x) ≈ ( f(x + h) − 2 f(x) + f(x − h) ) / h²,
and h is the mesh size. (Assume that x, x + h, x − h ∈ (a,b).)

By Taylor's Theorem, for some ξ ∈ (x, x + h),
f(x + h) = f(x) + h f'(x) + (h²/2) f''(x) + (h³/3!) f^{(3)}(x) + (h⁴/4!) f^{(4)}(ξ).
Likewise, for some ξ' ∈ (x − h, x),
f(x − h) = f(x) − h f'(x) + (h²/2) f''(x) − (h³/3!) f^{(3)}(x) + (h⁴/4!) f^{(4)}(ξ').
Thus
f(x + h) − 2 f(x) + f(x − h) = h² f''(x) + (h⁴/4!)( f^{(4)}(ξ) + f^{(4)}(ξ') ).
4!


Since f^{(4)} is continuous, it follows that the error of approximation is
E(h) = ( f(x+h) − 2f(x) + f(x−h) )/h² − f''(x) = (h²/4!)( f^{(4)}(ξ) + f^{(4)}(ξ') ) ≈ (h²/12) f^{(4)}(x)
for small h.
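A quick numerical check of the h²/12 error term; the choice f(x) = sin x at x = 1 is an arbitrary illustrative assumption.

import math

def second_diff(f, x, h):
    # central finite-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

x = 1.0
exact = -math.sin(x)                                # f''(x) for f = sin
for h in (0.1, 0.05, 0.025):
    err = abs(second_diff(math.sin, x, h) - exact)
    predicted = h ** 2 / 12.0 * abs(math.sin(x))    # |f''''(x)| = |sin x|
    print(h, err, predicted)                        # err tracks the predicted h^2/12 term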

Fall 2007 #4. Suppose that f : R R is twice differentiable and its second derivative, f 00 satisfies
|f 00 (x)| B.

(a) Prove that
| 2A f(0) − ∫_{−A}^{A} f(x) dx | ≤ (A³/3) B.

(b) Use the result of part (a) to justify the following estimate:
| ∫_a^b f(x) dx − ((b − a)/n) Σ_{k=1}^n f( a + ((2k − 1)/(2n))(b − a) ) | ≤ C n^{−2},

where C is a constant that does not depend on n.

(a) Fix A > 0. By Taylor's Theorem, for any x ∈ [−A, A] there is some c between 0 and x such that
f(x) − f(0) − f'(0) x = f''(c) x²/2.
Thus for any x ∈ [−A, A], |f(0) + f'(0) x − f(x)| ≤ B x²/2. We have
| 2A f(0) − ∫_{−A}^A f(x) dx | = | ∫_{−A}^A ( f(0) + f'(0) x − f(x) ) dx − ∫_{−A}^A f'(0) x dx |
≤ | ∫_{−A}^A ( f(0) + f'(0) x − f(x) ) dx | + | ∫_{−A}^A f'(0) x dx |
≤ ∫_{−A}^A | f(0) + f'(0) x − f(x) | dx + 0
≤ ∫_{−A}^A (B x²/2) dx = (A³/3) B.

(b) Fix n. For 1 ≤ k ≤ n, put f_k(x) = f( x + a + ((2k − 1)/(2n))(b − a) ). Then
∫_a^b f(x) dx = Σ_{k=1}^n ∫_{−(b−a)/(2n)}^{(b−a)/(2n)} f_k(x) dx.
By applying part (a) with A = (b − a)/(2n) to each f_k(x), we obtain
| ∫_a^b f(x) dx − ((b − a)/n) Σ_{k=1}^n f( a + ((2k − 1)/(2n))(b − a) ) | = | Σ_{k=1}^n ( ∫_{−(b−a)/(2n)}^{(b−a)/(2n)} f_k(x) dx − ((b − a)/n) f_k(0) ) |
≤ Σ_{k=1}^n | ∫_{−(b−a)/(2n)}^{(b−a)/(2n)} f_k(x) dx − ((b − a)/n) f_k(0) |
≤ Σ_{k=1}^n (B/3)((b − a)/(2n))³ = (B (b − a)³/24) n^{−2}.
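The sum in part (b) is the composite midpoint rule; a short numerical sketch (with the arbitrary, assumed test integrand f(x) = eˣ on [0,1]) shows the error decaying like n^{−2}.

import math

def midpoint_rule(f, a, b, n):
    # (b - a)/n times the sum of f at the subinterval midpoints
    return (b - a) / n * sum(f(a + (2 * k - 1) / (2 * n) * (b - a)) for k in range(1, n + 1))

a, b = 0.0, 1.0
exact = math.e - 1.0                  # integral of e^x over [0, 1]
for n in (10, 20, 40, 80):
    err = abs(midpoint_rule(math.exp, a, b, n) - exact)
    print(n, err, err * n ** 2)       # err * n^2 stays roughly constant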


Winter 2006 #3. Consider a function f : [a, b] R which is twice continuously differentiable (including
the endpoints). Let a = x0 < x1 < < xn = b be the uniform partition of [a, b], i.e., xi+1 xi = (b a)/n
for all 0 i < n. Show that there exists M such that for all n 1,
| ((b − a)/n) [ (1/2) f(x₀) + f(x₁) + ··· + f(x_{n−1}) + (1/2) f(x_n) ] − ∫_a^b f(x) dx | ≤ M/n².

[Recall that the sum is an approximation of the integral in the Trapezoid Rule. It may be instructive to first
solve the problem for n = 1 and then address the general case.]

In the case n = 1, consider the line p(x) = f(a) + (x − a)(f(b) − f(a))/(b − a). Note p(a) = f(a), p(b) = f(b), and ∫_a^b p(x) dx = ((f(a) + f(b))/2)(b − a), which is exactly the trapezoid value; the interpolation-error formula in the next problem (with N = 1) bounds |∫_a^b (f − p)| by a constant times max|f''| (b − a)³.
For the general case, apply the n = 1 estimate on each subinterval [x_i, x_{i+1}] of length (b − a)/n and sum: each subinterval contributes an error of at most a constant times max|f''| ((b − a)/n)³, and there are n subintervals, giving a total error of at most M/n² with M depending only on f and b − a.
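For a concrete check of the M/n² bound, here is a small sketch of the composite trapezoid rule; the integrand f(x) = eˣ on [0,1] is an arbitrary illustrative assumption.

import math

def trapezoid_rule(f, a, b, n):
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    return h * (0.5 * f(xs[0]) + sum(f(x) for x in xs[1:-1]) + 0.5 * f(xs[-1]))

a, b = 0.0, 1.0
exact = math.e - 1.0
for n in (10, 20, 40, 80):
    err = abs(trapezoid_rule(math.exp, a, b, n) - exact)
    print(n, err * n ** 2)            # bounded, consistent with an M / n^2 error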
Spring 2013 #2. The approximation from Simpson's Rule for ∫_a^b f(x) dx is
S_{[a,b]} f = [ (2/3) f((a + b)/2) + (1/3) (f(a) + f(b))/2 ] (b − a).
If f has continuous derivatives up to order three, prove that
| ∫_a^b f(x) dx − S_{[a,b]} f | ≤ C (b − a)⁴ max_{[a,b]} |f^{(3)}(x)|,

where C does not depend on f .

We first find the quadratic polynomial p(x) such that p(a) = f(a), p(b) = f(b), and p((a+b)/2) = f((a+b)/2). By Lagrange interpolation this is
p(x) = f(a) (x − (a+b)/2)(x − b) / [ (a − (a+b)/2)(a − b) ] + f((a+b)/2) (x − a)(x − b) / [ ((a+b)/2 − a)((a+b)/2 − b) ] + f(b) (x − a)(x − (a+b)/2) / [ (b − a)(b − (a+b)/2) ].
Integrating p from a to b gives exactly S_{[a,b]} f (a direct computation).
In general, if a polynomial p_N matches f at N + 1 points x₀, . . . , x_N in [a,b], then for each x ∈ [a,b] there is ξ ∈ [a,b] with
f(x) − p_N(x) = ( (x − x₀)···(x − x_N) / (N+1)! ) f^{(N+1)}(ξ).
Thus
| ∫_a^b f dx − ∫_a^b p_N dx | ≤ ( (b − a)^{N+2} / (N+1)! ) max_{[a,b]} |f^{(N+1)}(x)|.
Applying this with N = 2 and the three nodes above gives the desired bound with C = 1/3!.


Jacobian

Spring 2002 #7; Winter 2002 #7. Suppose F : R² → R² is continuously differentiable and that the Jacobian matrix of F is everywhere nonsingular. Suppose also that F(0) = 0 and that ||F((x,y))|| ≥ 1 for all (x,y) with ||(x,y)|| = 1. Prove that
{(x,y) : ||(x,y)|| < 1} ⊂ F({(x,y) : ||(x,y)|| < 1}).
(Hint: Show, with U = {(x,y) : ||(x,y)|| < 1}, that F(U) ∩ U is both open and closed in U.)

Let V = F(U). Define O₁ = V ∩ U. If O₁ is closed in U, then O₂ = (V ∩ U)ᶜ ∩ U is open in U. Thus if O₁ is also open, we have U = O₁ ∪ O₂ with O₁, O₂ disjoint open subsets of U. But U is connected, and 0 = F(0) ∈ O₁, hence O₂ = ∅. Thus U = O₁ = V ∩ U, so U ⊂ F(U).
Thus it suffices to show that O₁ = F(U) ∩ U is both open and closed in U.
Openness: let p ∈ F(U) ∩ U and write p = F(q) with q ∈ U. We are given that the Jacobian of F at q is nonsingular. Thus by the inverse function theorem, the image under F of B(q, r) contains an open neighborhood of F(q) = p for sufficiently small r > 0. Thus F(U) is open, hence F(U) ∩ U is open in U.
Closedness: let {pₙ} be a sequence of points in F(U) ∩ U converging to some p* ∈ U. Then there exist qₙ ∈ U with pₙ = F(qₙ). By the Bolzano-Weierstrass Theorem, there exists a subsequence {q_{n_k}} which converges to some q* in the closed unit ball. By continuity of F,
p* = F(q*).
Now if ||q*|| = 1, then ||p*|| = ||F(q*)|| ≥ 1, so p* ∉ U, a contradiction. Hence ||q*|| < 1, so q* ∈ U and p* ∈ F(U) ∩ U. Hence F(U) ∩ U is closed relative to U.

Fall 2003 #6. Let U = {(x,y) : x² + y² < 1} be the standard unit ball in R² and let ∂U denote its boundary. Suppose F : R² → R² is continuously differentiable and that the Jacobian determinant of F is everywhere non-zero. Suppose also that F(x,y) ∈ U for some (x,y) ∈ U and F(x,y) ∉ U ∪ ∂U for all (x,y) ∈ ∂U. Prove that U ⊂ F(U).

Same reasoning as the previous problem. These hypotheses are just a bit more general.

Lagrange multipliers

Spring 2003 #5. Consider the function F (x, y) = ax2 + 2bxy + cy 2 on the set A = {(x, y) : x2 + y 2 = 1}.

(a) Show that F has a maximum and minimum on A.


(b) Use Lagrange multipliers to show that if the maximum of F on A occurs at a point (x₀, y₀), then the vector (x₀, y₀) is an eigenvector of the matrix
[ a  b ]
[ b  c ].

(a) A is compact: it is bounded, and it is closed since it is the preimage of {1} under the continuous map (x,y) ↦ x² + y². Thus the continuous function F achieves its maximum and minimum on A.

(b) Let g(x,y) = x² + y². If the maximum occurs at (x₀, y₀), the method of Lagrange multipliers implies there exists λ such that ∇F(x₀, y₀) = λ ∇g(x₀, y₀). In this case,
(2a x₀ + 2b y₀, 2b x₀ + 2c y₀) = λ (2x₀, 2y₀).
Thus (a x₀ + b y₀, b x₀ + c y₀) = λ (x₀, y₀), i.e. the matrix [ a b ; b c ] sends (x₀, y₀) to λ(x₀, y₀), so (x₀, y₀) is an eigenvector of the given matrix.


Miscellaneous

Spring 2005 #5. For a subset X R, we say that X is algebraic if there exists a family F of polynomials
with rational coefficients, so that x X if and only if p(x) = 0 for some p F .

(a) Show that the set Q of rational numbers is algebraic.

(b) Show that the set R \ Q of irrational numbers is not algebraic.

(a) Let F = {x − q : q ∈ Q}. Each x − q has rational coefficients and its only root is q, so the set cut out by F is exactly Q; hence Q is algebraic.

(b) There are only countably many polynomials with rational coefficients, and each has finitely many roots, so any algebraic set is countable. But R \ Q is uncountable, so it is not algebraic.

Fall 2009 #3. The purpose of this problem is to give a multi-variable calculus proof of the geometric and
arithmetic means inequality along the concrete steps below. The inequality has numerous other proofs and
naturally you are not allowed to use it (or them) below.

Let Rn+ Rn be the (open) subset of vectors all whose coordinates are positive, and f : Rn+ R be
defined by:
f(x₁, . . . , xₙ) = x₁ + ··· + xₙ + 1/(x₁ x₂ ··· xₙ).
(i) Explain carefully why f attains a global (not necessarily unique) minimum at some p ∈ Rⁿ₊. (Hint: what happens when xᵢ → 0, ∞?)

(ii) Find p.

(iii) Deduce that if all xᵢ ∈ R are positive and Π xᵢ = 1 then Σ xᵢ ≥ n, with equality iff xᵢ = 1 for all i. (This is a special case of the geometric and arithmetic means inequality, from which the general statement can be immediately deduced - no need to write down this part here.)

(i) As any xᵢ → 0⁺ (with the other coordinates in a bounded region) or any xᵢ → ∞, f(x₁, . . . , xₙ) → ∞. Hence the sublevel set {x ∈ Rⁿ₊ : f(x) ≤ f(1, . . . , 1)} is closed, bounded, and bounded away from the coordinate hyperplanes, so it is compact; f attains a minimum on it, and by the choice of sublevel set this is a global minimum on Rⁿ₊.

(ii) At the minimum the gradient of f is 0: ∂f/∂xᵢ = 1 − 1/(xᵢ · x₁···xₙ) = 0 for each i, so xᵢ(x₁···xₙ) = 1 for every i. Hence all the xᵢ are equal, say to c, and c^{n+1} = 1 gives c = 1. Thus p = (1, 1, . . . , 1), the only critical point.

(iii) By (i) and (ii), f(x) ≥ f(p) = n + 1 for all x ∈ Rⁿ₊. If Π xᵢ = 1 this reads Σ xᵢ + 1 ≥ n + 1, i.e. Σ xᵢ ≥ n. Equality forces x to be a minimizer of f, and since the only critical point of f in the open set Rⁿ₊ is p, equality holds iff xᵢ = 1 for all i.

Spring 2012 #5. Prove that there is a unique continuous function y : [0, 1] R solving the equation

y(x) = eˣ + y(x²)/2,    x ∈ [0,1].

Define T : C[0,1] → C[0,1] by (Ty)(x) = eˣ + y(x²)/2; note x² ∈ [0,1] when x ∈ [0,1], so Ty is again continuous on [0,1]. For y₁, y₂ ∈ C[0,1],
|Ty₁ − Ty₂|_sup ≤ (1/2)|y₁ − y₂|_sup,
so T is a contraction on the complete metric space (C[0,1], |·|_sup). By the Banach fixed point theorem, T has a unique fixed point y ∈ C[0,1], which is exactly the unique continuous solution of the equation.

Spring 2012 #6. Let be a smooth curve from (1, 0) to (1, 0) in R2 \ {(0, 0)} winding once around the
origin in the clockwise direction. Compute the integral
I(γ) := ∫_γ ( y dx − x dy ) / ( x² + y² ).


The 1-form (y dx − x dy)/(x² + y²) is closed on R² \ {(0,0)}, so the integral depends only on the homotopy class of γ in R² \ {(0,0)}; since all curves described in the problem are homotopic to each other, we may choose a convenient representative and compute. Let γ be the unit circle with clockwise orientation, given by the parametrization
γ(t) = (cos(t), −sin(t))
for t ∈ [0, 2π]. Then
I(γ) = ∫_γ ( y dx − x dy )/( x² + y² ) = ∫_0^{2π} [ (−sin t)(−sin t) − (cos t)(−cos t) ] dt = ∫_0^{2π} dt = 2π.
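A crude numerical check of the value 2π, discretizing the clockwise parametrization used above (the step count is an arbitrary choice):

import math

N = 100000
total = 0.0
for i in range(N):
    t0, t1 = 2 * math.pi * i / N, 2 * math.pi * (i + 1) / N
    x0, y0 = math.cos(t0), -math.sin(t0)       # clockwise unit circle
    x1, y1 = math.cos(t1), -math.sin(t1)
    dx, dy = x1 - x0, y1 - y0
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2       # midpoint evaluation of the 1-form
    total += (ym * dx - xm * dy) / (xm ** 2 + ym ** 2)
print(total, 2 * math.pi)                        # both approximately 6.2832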

Spring 2013 #5. Define the polynomials U_n(x), n = 1, 2, . . . as follows:
U₁(x) = 1,  U₂(x) = 2x,  U_{n+1}(x) = 2x U_n(x) − U_{n−1}(x).

(a) Prove that
U_n(cos θ) = sin(nθ)/sin(θ).

(b) Prove that the polynomials U_n(x) satisfy:
∫_{−1}^{1} U_m(x) U_n(x) √(1 − x²) dx = 0 when m ≠ n, and = π/2 when m = n.

(a) We proceed by strong induction. For n = 1, U₁(cos θ) = 1 = sin(θ)/sin(θ). For n = 2, U₂(cos θ) = 2 cos θ = sin(2θ)/sin(θ). Now assume inductively that
U_n(cos θ) = sin(nθ)/sin(θ)
for all positive integers n ≤ k, where k ≥ 2. Then
U_{k+1}(cos θ) = 2 cos θ U_k(cos θ) − U_{k−1}(cos θ) = 2 cos θ sin(kθ)/sin θ − sin((k−1)θ)/sin θ
= [ 2 cos θ sin(kθ) − ( sin(kθ) cos θ − sin θ cos(kθ) ) ] / sin θ
= [ sin(kθ) cos θ + sin θ cos(kθ) ] / sin θ = sin((k+1)θ)/sin θ,
which completes the induction.

(b) Make the change of variables x = cos(θ), where θ ranges from π to 0 as x ranges from −1 to 1. Then √(1 − x²) = sin θ and dx = −sin θ dθ, so using part (a) the integral under consideration becomes
∫_π^0 ( sin(mθ)/sin θ )( sin(nθ)/sin θ ) sin θ (−sin θ) dθ = ∫_0^π sin(mθ) sin(nθ) dθ.
Use the identity
sin(α) sin(β) = (1/2)( cos(α − β) − cos(α + β) )
to finish the proof.
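A short numerical sanity check of the recurrence and the orthogonality relation (the sample point and quadrature grid size are arbitrary choices):

import math

def U(n, x):
    # recurrence U_1 = 1, U_2 = 2x, U_{n+1} = 2x U_n - U_{n-1}
    a, b = 1.0, 2.0 * x
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 2.0 * x * b - a
    return b

# part (a): U_n(cos t) = sin(n t)/sin(t) at a sample point
t = 0.7
print(U(5, math.cos(t)), math.sin(5 * t) / math.sin(t))

# part (b): integral of U_m U_n sqrt(1 - x^2) over [-1, 1] by the midpoint rule
def inner(m, n, steps=200000):
    h = 2.0 / steps
    s = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        s += U(m, x) * U(n, x) * math.sqrt(1.0 - x * x)
    return s * h

print(inner(2, 3))                  # approximately 0
print(inner(3, 3), math.pi / 2)     # both approximately 1.5708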


Spring 2005 #11. Let us make Mₙ(C) into a metric space in the following fashion:
dist(A, B) = ( Σ_{i,j} |A_{i,j} − B_{i,j}|² )^{1/2}
(which is just the usual Euclidean metric on the entries).

(a) Suppose F : R → Mₙ(C) is continuous. Show that the set
{x ∈ R : F(x) is invertible}
is open (in the usual topology on R).

(b) Show that on the set given above, x ↦ |F(x)|^{−1} is continuous.

(a) The determinant function is continuous, thus {x ∈ R : F(x) is invertible} = {x ∈ R : det(F(x)) ≠ 0} is open as the preimage of the open set C \ {0} under the continuous map det ∘ F.

(b) Interpreting |F(x)| as det(F(x)), the map x ↦ |F(x)|^{−1} is the composition of F, the determinant, and the inversion t ↦ 1/t; all three are continuous on the given set (where det(F(x)) ≠ 0), hence the composition is continuous.

Fall 2011 #2. A function f : Rⁿ → R is called convex if f satisfies
f(λx + (1 − λ)y) ≤ λ f(x) + (1 − λ) f(y),   for all x, y ∈ Rⁿ, 0 ≤ λ ≤ 1.
Assume that f is continuously differentiable and that, for some constant c > 0, the gradient ∇f satisfies
(∇f(x) − ∇f(y)) · (x − y) ≥ c (x − y) · (x − y),   for all x, y ∈ Rⁿ,
where · denotes the dot product. Show that f is convex.

Fix x, y ∈ Rⁿ. Let φ(t) = f(tx + (1 − t)y) and ψ(t) = t f(x) + (1 − t) f(y) for all t ∈ [0,1]. We want to show φ ≤ ψ on [0,1]. Let g = φ − ψ; note g(0) = g(1) = 0. We compute
g'(t) = ∇f(tx + (1 − t)y) · (x − y) − ( f(x) − f(y) ).
Suppose t₁ ≤ t₂. Then by assumption, applied at the points t₂x + (1 − t₂)y and t₁x + (1 − t₁)y,
( ∇f(t₂x + (1 − t₂)y) − ∇f(t₁x + (1 − t₁)y) ) · (t₂ − t₁)(x − y) ≥ c (t₂ − t₁)² (x − y) · (x − y) ≥ 0.
Thus
∇f(t₂x + (1 − t₂)y) · (x − y) ≥ ∇f(t₁x + (1 − t₁)y) · (x − y),
so g' is monotone increasing, i.e. g is convex on [0,1]. A convex function with g(0) = g(1) = 0 satisfies g(t) ≤ (1 − t)g(0) + t g(1) = 0 on [0,1]. Hence φ ≤ ψ, which is exactly the convexity inequality for f along the segment from y to x.


Linear Algebra

Important definitions and facts:

A monic polynomial p ∈ F[t] is said to be irreducible if the only monic polynomials from F[t] that divide p are 1 and p.

We say v ≠ 0 is an eigenvector of L : V → V if there exists λ ∈ F with Lv = λv. In this case λ is called an eigenvalue of L. The eigenspace corresponding to λ is
E_λ := ker(L − λ 1_V).
The sum of eigenspaces for distinct eigenvalues is their direct sum.

The dual space of V is V 0 = hom(V, F).

The annihilator of a subspace M ⊂ V is the subspace M⁰ ⊂ V' given by
M⁰ = {f ∈ V' : f(x) = 0 for all x ∈ M}.

The geometric multiplicity of an eigenvalue λ is dim(E_λ). The algebraic multiplicity of an eigenvalue λ is the number of times λ appears as a root of the characteristic polynomial. The geometric multiplicity of an eigenvalue is always less than or equal to its algebraic multiplicity.

The characteristic polynomial of L is χ_L(t) = det(L − t 1_V). Its roots are the eigenvalues of L. Over C, there are always n eigenvalues (with multiplicity); writing the monic version as tⁿ + a_{n−1}t^{n−1} + ··· + a₀, one has λ₁···λₙ = (−1)ⁿ a₀ = det(L) and tr(L) = λ₁ + ··· + λₙ = −a_{n−1}. For a real matrix, complex roots always come in conjugate pairs.

An involution L is a linear operator such that L2 = 1V .

The minimal polynomial μ_L(t) of L is the monic polynomial of smallest degree such that μ_L(L) = 0. All eigenvalues of L are roots of μ_L. Note L is invertible if and only if the constant term of μ_L is nonzero. Note μ_L divides any polynomial p with p(L) = 0. In particular, μ_L divides χ_L.

If two linear operators on an n-dimensional vector space have the same minimal polynomials of degree
n, then they have the same Frobenius canonical form and are similar.

An operator L : V V is said to be diagonalizable if we can find a basis for V that consists of eigenvectors
of L. In other words, the matrix representation for L is a diagonal matrix. Note this depends on V as well
as L.

A subspace M V is said to be L-invariant if L(M ) M .

The companion matrix of a monic polynomial p(t) ∈ F[t] is

        [ 0  0  ···  0  −α₀       ]
        [ 1  0  ···  0  −α₁       ]
   Cp = [ 0  1  ···  0  −α₂       ]
        [ ⋮  ⋮   ⋱   ⋮   ⋮        ]
        [ 0  0  ···  1  −α_{n−1}  ]

when p(t) = tⁿ + α_{n−1} t^{n−1} + ··· + α₁ t + α₀.

The characteristic and minimal polynomials of Cp are both p(t) and all eigenspaces are one-dimensional.
Thus Cp is diagonalizable if and only if all the roots of p(t) are distinct and lie in F.

The cyclic subspace corresponding to x is
C_x = span{x, L(x), L²(x), . . . , L^{k−1}(x)},
where k is the smallest integer with
L^k(x) ∈ span{x, L(x), L²(x), . . . , L^{k−1}(x)}.

An inner product on a vector space over F (R or C) is an F-valued pairing (x|y) for x, y ∈ V, i.e., a map (·|·) : V × V → F, that satisfies
(1) (x|x) ≥ 0 and vanishes only when x = 0.
(2) (x|y) is the complex conjugate of (y|x).
(3) For each y ∈ V, the map x ↦ (x|y) is linear.
The associated norm is ||x|| = √(x|x).

The adjoint is the conjugate transpose, notation A* = Āᵗ. This satisfies
(Ax|y) = (x|A*y).
Note (L₂L₁)* = L₁*L₂* and if L is invertible, (L^{−1})* = (L*)^{−1}. Note λ is an eigenvalue for L if and only if λ̄ is an eigenvalue for L*. Moreover, these eigenvalue pairs have the same geometric multiplicity.

An orthogonal projection is a projection (E² = E) for which the range and the null space are orthogonal subspaces. E is an orthogonal projection if and only if E* = E (E is self-adjoint, or Hermitian). Skew-adjoint means E* = −E. Over a real space, these become symmetric and skew-symmetric.

A map is completely reducible or semi-simple if for each invariant subspace one can always find a complementary invariant subspace. Both self-adjoint and skew-adjoint maps are completely reducible (if L(M) ⊂ M, then L(M^⊥) ⊂ M^⊥).

Two inner product spaces V and W are isometric if we can find an isometry L : V → W, i.e., an isomorphism such that (L(x)|L(y)) = (x|y).
Let L : V → W be an isomorphism. Then L is an isometry if and only if L* = L^{−1}. When V = W = Rⁿ, isometries are called orthogonal matrices. When V = W = Cⁿ, isometries are called unitary matrices.

Gram-Schmidt procedure: Given a linearly independent set {x₁, . . . , x_m}, set e₁ = x₁/||x₁|| and, inductively,
z_{k+1} = x_{k+1} − (x_{k+1}|e₁)e₁ − ··· − (x_{k+1}|e_k)e_k
and
e_{k+1} = z_{k+1}/||z_{k+1}||.
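A minimal sketch of the procedure for vectors in Rⁿ, using plain Python lists; the example vectors are arbitrary.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors in R^n."""
    es = []
    for x in vectors:
        # subtract the projections onto the already-built e_1, ..., e_k
        z = list(x)
        for e in es:
            c = dot(x, e)
            z = [zi - c * ei for zi, ei in zip(z, e)]
        norm = dot(z, z) ** 0.5
        es.append([zi / norm for zi in z])
    return es

e1, e2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(e1, e2, dot(e1, e2))   # dot(e1, e2) is approximately 0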

The operator norm is
||L|| = sup{ ||Lx|| : ||x|| = 1 }.
This is finite in a finite dimensional inner product space. Note this implies ||L(x)|| ≤ ||L|| ||x|| for all x ∈ V.


The orthogonal complement to M in V is
M^⊥ = { x ∈ V : (x|z) = 0 for all z ∈ M }.

An operator L : V → V on an inner product space is normal if LL* = L*L. All self-adjoint, skew-adjoint, and isometric operators are clearly normal.

Two n × n matrices A and B are said to be unitarily equivalent if A = U B U*, where U ∈ Uₙ (U is unitary). If U is orthogonal, A and B are said to be orthogonally equivalent. If A and B are unitarily equivalent, A is normal / self-adjoint / skew-adjoint / unitary if and only if B is normal / self-adjoint / skew-adjoint / unitary. Two normal operators are unitarily equivalent if and only if they have the same eigenvalues (counted with multiplicities).

A unitary matrix U satisfies UU* = U*U = I. An orthogonal matrix O satisfies OOᵗ = OᵗO = I.

Tr(AB) = Tr(BA).


Important Theorems:

The Fundamental Theorem of Algebra. Any complex polynomial of degree ≥ 1 has a root.

Characterizations of Diagonalizability. 1. V is n-dimensional, and the sum of the geometric multiplicities of all the eigenvalues is n. In particular, if all the eigenvalues are distinct, then the operator is diagonalizable.
2. There exists p ∈ F[t] such that p(L) = 0 and
p(t) = (t − λ₁)···(t − λ_k)
where λ₁, . . . , λ_k are distinct.


Corollary: L is diagonalizable if and only if the minimal polynomial μ_L(t) factors as
μ_L(t) = (t − λ₁)···(t − λ_k),
with λᵢ distinct. Since μ_L divides χ_L, all the λᵢ must be eigenvalues of L.


Note a polynomial p has a multiple root if and only if p and Dp have a common root.

Cayley-Hamilton Theorem. Let L be a linear operator on a finite dimensional vector space. Then L is a root of its own characteristic polynomial:
χ_L(L) = 0.
In particular, μ_L(t) divides χ_L(t).

Cyclic Subspace Decomposition. Let L : V → V be a linear operator on a finite dimensional vector space. Then V has a cyclic subspace decomposition
V = C_{x₁} ⊕ ··· ⊕ C_{x_k}
where each C_x is a cyclic subspace. In particular, L has a block diagonal matrix representation where each block is a companion matrix:
[L] = C_{p₁} ⊕ C_{p₂} ⊕ ··· ⊕ C_{p_k}
and χ_L(t) = p₁(t)···p_k(t). Moreover, the geometric multiplicity satisfies
dim(ker(L − λ 1_V)) = number of pᵢ's such that pᵢ(λ) = 0.
In particular, we see that L is diagonalizable if and only if all of the companion matrices C_{pᵢ} individually have distinct eigenvalues. (In general this decomposition is not unique.)

Frobenius Canonical Form (Rational Canonical Form). Let L : V → V be a linear operator on a finite dimensional vector space. Then V has a cyclic subspace decomposition such that the block diagonal form of L,
[L] = C_{p₁} ⊕ C_{p₂} ⊕ ··· ⊕ C_{p_k},
has the property that pᵢ divides p_{i−1} for each i = 2, . . . , k. Moreover, the monic polynomials p₁, . . . , p_k are unique. (The pᵢ are called the similarity invariants or invariant factors for L. Similar matrices have the same similarity invariants.)

Jordan-Chevalley decomposition. Let L : V → V be a linear operator on an n-dimensional complex vector space. Then L = S + N, where S is diagonalizable, Nⁿ = 0, and SN = NS.


For p(t) = (t − λ)ⁿ, Cp is similar to a Jordan block

        [ λ  1  0  ···  0  0 ]
        [ 0  λ  1  ···  0  0 ]
  [L] = [ 0  0  λ   ⋱      ⋮ ]
        [ ⋮          ⋱  1  0 ]
        [ ⋮             λ  1 ]
        [ 0  0  0  ···  0  λ ]

Moreover the eigenspace for λ is 1-dimensional and is generated by the first basis vector.

Jordan Canonical Form. Let L : V → V be a complex linear operator on a finite dimensional vector space. Then we can find L-invariant subspaces M₁, . . . , M_s such that
V = M₁ ⊕ ··· ⊕ M_s
and each L|_{Mᵢ} has a matrix representation of the form

        [ λᵢ  1   0  ···  0   0  ]
        [ 0   λᵢ  1  ···  0   0  ]
  [L] = [ 0   0   λᵢ  ⋱       ⋮  ]
        [ ⋮            ⋱  1   0  ]
        [ ⋮               λᵢ  1  ]
        [ 0   0   0  ···  0   λᵢ ]

where λᵢ is an eigenvalue for L. Note each eigenvalue corresponds to as many blocks as the geometric multiplicity of the eigenvalue.

Cauchy-Schwarz Inequality.
|(x|y)| ≤ ||x|| ||y||.
In Rⁿ, (x|y) is usually x · y.

Uniqueness of Inner Product Spaces. An n-dimensional inner product space over R, respectively
C, is isometric to Rn , respectively Cn .

Orthogonal Sum Decomposition. Let V be an inner product space and M ⊂ V a finite dimensional subspace. Then V = M ⊕ M^⊥ and for any orthonormal basis e₁, . . . , e_m for M, the projection onto M along M^⊥ is given by
proj_M(x) = (x|e₁)e₁ + ··· + (x|e_m)e_m.
Also proj_M(x) is the one and only point closest to x among all points in M.

Polarization. Let L : V V be a linear operator on a complex inner product space. Then L = 0 if


and only if (L(x)|x) = 0 for all x V .

Existence of Eigenvalues for Self-adjoint Operators. Let L : V V be self-adjoint and V finite


dimensional. Then L has a real eigenvalue.

The Spectral Theorem. Let L : V V be a self-adjoint operator on a finite dimensional inner product
space. Then there exists an orthonormal basis e1 , . . . , en of eigenvectors of L. Moreover, all eigenvalues of L
are real.


The Spectral Theorem for Normal Operators. Let L : V V be a normal operator on a complex
inner product space. Then there is an orthonormal basis e1 , . . . , en of V consisting of eigenvectors of L.

Schurs Theorem. Let L : V V be a linear operator on a finite dimensional complex inner product
space. It is possible to find an orthonormal basis e1 , . . . , en such that the matrix representation [L] is upper
triangular in this basis.


Recurring Linear Algebra Problems

Fall 2001 #9; Fall 2002 #10; Spring 2006 #9. Let A Mn (C) (Mn (R)) be a normal (self-adjoint)
matrix. Prove that there exists an orthonormal basis of Cn (Rn ) such that the matrix of A is diagonal with
respect to this basis.

Real case: Let S be a real n-dimensional inner product space. We proceed by induction on n to show
that for each self-adjoint map L : S S, there exists an orthonormal basis for S consisting of eigenvectors of
L. The case n = 1 is trivial. Let n 2 and assume inductively that for all real n 1-dimensional real inner
product spaces T and self-adjoint maps L0 , there exists an orthonormal basis of T consisting of eigenvectors
of L0 .

Lemma 1: If L = L* and λ ∈ C is such that Lv = λv for some v ∈ Cⁿ with v ≠ 0, then λ is necessarily real.
Proof: Using that L is self-adjoint,
λ(v|v) = (λv|v) = (L(v)|v) = (v|L*(v)) = (v|L(v)) = (v|λv) = λ̄(v|v).
Thus since v ≠ 0, (v|v) ≠ 0, so λ = λ̄, and λ is real.

Lemma 2: Let λ be an eigenvalue of L and V_λ be the eigenspace corresponding to λ. Then V_λ^⊥ is L-invariant.
Proof: For any v ∈ V_λ and u ∈ V_λ^⊥,
(L(u)|v) = (u|L*(v)) = (u|L(v)) = (u|λv) = λ̄(u|v) = 0.
Thus L(u) ⊥ V_λ, so L(V_λ^⊥) ⊂ V_λ^⊥ and V_λ^⊥ is L-invariant.

Let A be a matrix representation of L. Let χ_A(t) = det(A − t Id_{n×n}) be the characteristic polynomial of L. The fundamental theorem of algebra guarantees there exists λ ∈ C such that χ_A(λ) = 0. Thus A − λ Id_{n×n}, taken as an operator on Cⁿ, is not invertible, so ker(A − λ Id_{n×n}) ≠ {0}. Hence for some v ∈ Cⁿ with v ≠ 0, L(v) := Av = λv. By Lemma 1, λ ∈ R.
Thus χ_A(λ) = det(A − λ Id_{n×n}) = 0 with λ real, so there exists v ∈ Rⁿ with v ≠ 0 and L(v) = λv. Let V = span(v). Since V is 1-dimensional, V^⊥ is (n−1)-dimensional. By the computation in Lemma 2 (with V in place of the full eigenspace), V^⊥ is L-invariant, thus L|_{V^⊥} : V^⊥ → V^⊥. Since L is self-adjoint, L|_{V^⊥} is also self-adjoint. Applying the induction hypothesis to L|_{V^⊥}, there exists an orthonormal basis {u₂, . . . , uₙ} of V^⊥ consisting of eigenvectors of L|_{V^⊥}. Setting u₁ = v/||v||, u₁ is a unit vector perpendicular to all uᵢ for i ≥ 2 and an eigenvector of L. Hence {u₁, . . . , uₙ} is an orthonormal basis for S consisting of eigenvectors of L. This completes the induction.
Let L_A : Rⁿ → Rⁿ be the linear map corresponding to A given by L_A(v) := Av. Applying the above result with S = Rⁿ, we see there exists an orthonormal basis for Rⁿ consisting of eigenvectors of L_A. Thus with respect to this basis, the matrix of L_A is diagonal, as desired.

Complex case: Decompose L = B + iC, where B = (1/2)(L + L*) and C = (1/(2i))(L − L*) are self-adjoint; since L is normal, BC = CB. Since B is self-adjoint, the above proof implies there exists a real eigenvalue β with ker(B − β 1_V) ≠ {0}. Because B and C commute, C maps ker(B − β 1_V) into itself. The restriction of C to ker(B − β 1_V) is self-adjoint, so there exists x ∈ ker(B − β 1_V), x ≠ 0, such that C(x) = γx for some real γ. Then
L(x) = (β + iγ)x,
so we have found an eigenvalue and eigenvector of L. We also have
L*(x) = B(x) − iC(x) = (β − iγ)x.
Thus span{x} is both L and L* invariant. It follows that M = (span{x})^⊥ is also L and L* invariant. Hence (L|_M)* = L*|_M, so L|_M : M → M is also normal. We can use induction as in the real case above to finish the proof.


Spring 2002 #11; Spring 2006 #10; Spring 2008 #10; Spring 2010 #3. Let V be a complex (real) inner product space and let {T_α} be a collection of (or two) normal (self-adjoint) operators. If the {T_α} pairwise commute, prove that there exists an orthonormal basis for V consisting of vectors that are simultaneously eigenvectors of each T_α.

Real case: S, T are self-adjoint commuting operators. Let λ₁, . . . , λ_k be the distinct eigenvalues of T and let Eᵢ = ker(T − λᵢI). Since T is self-adjoint,
V = E₁ ⊕ ··· ⊕ E_k
with Eᵢ orthogonal to Eⱼ for i ≠ j. Suppose Eᵢ has dimension dᵢ. Suppose x ∈ Eᵢ. Then since S and T commute,
T(Sx) = S(Tx) = S(λᵢx) = λᵢ(Sx).
Hence Sx ∈ Eᵢ. Thus Eᵢ is S-invariant, and S|_{Eᵢ} is self-adjoint. Thus there exist vectors {v₁ⁱ, . . . , v_{dᵢ}ⁱ} that constitute an orthonormal basis for Eᵢ and are simultaneously eigenvectors of S and T. Then
∪_{i=1}^k ∪_{j=1}^{dᵢ} {vⱼⁱ}
is an orthonormal basis for V, since the Eᵢ's are mutually orthogonal, and it consists of simultaneous eigenvectors of S and T, as desired.

Complex Case: Induction on the number of operators in the collection. Use the trick B = (1/2)(A + A*) and C = (1/(2i))(A − A*), so A = B + iC, and B, C are self-adjoint. Since A is normal, A*A = AA*, thus BC = CB. Then proceed as above.

Fall 2005 #7. Let A be a real n m matrix. Prove that the maximal number of linearly independent rows
of A is equal to the maximum number of linearly independent columns of A.

Suppose the maximal number of linearly independent rows of A is r. Let x1 , . . . , xr be a basis of the row
space of A. Suppose the ci are scalars such that

0 = c1 (Ax1 ) + c2 (Ax2 ) + + cr (Axr ) = A(c1 x1 + + cr xr ).

Then v = c1 x1 + + cr xr is in the row space of A, but since Av = 0, v is orthogonal to every vector in the
row space of A. Thus v is orthogonal to itself, so v = 0. Thus

c1 x1 + + cr xr = 0.

But x1 , . . . , xr is a basis of the row space, hence c1 = = cr = 0. Thus Ax1 , . . . , Axr are linearly
independent. Now each Axi is a vector in the column space of A, hence the dimension of the column space
of A is at least r, the dimension of the row space of A. Applying the same argument to the transpose of A,
we see that the dimension of the row space of A is at least the dimension of the column space of A. Hence
these dimensions are equal.

Fall 2008 #12; Fall 2011 #12; Fall 2012 #9. Let A be an m n real matrix and let b Rm . Suppose
Ax and Ay are both minimal distance to b (minimizing among members of Im(A)). Prove x y ker(A).

Then ||Ax − b|| = ||Ay − b||. It follows that
||A((x + y)/2) − b|| ≤ ||A(x)/2 − b/2|| + ||A(y)/2 − b/2|| = (1/2)( ||A(x) − b|| + ||A(y) − b|| ) = ||A(x) − b||.


By the minimality assumption, we conclude ||A((x + y)/2) − b|| = ||Ax − b||. Suppose for the sake of contradiction that x − y ∉ ker(A), so that Ax ≠ Ay. It follows that Ax ≠ b, Ay ≠ b, and A((x + y)/2) ≠ b. Then the triangles
with points Ax, A((x + y)/2), b and Ay, A((x + y)/2), b are both isosceles and similar to one another. It
follows that the two angles at A((x + y)/2) are equal. Since A is linear, these angles add to 180 degrees,
so they are both right angles. But then one leg of a right triangle has equal length to its hypotenuse, a
contradiction. Hence x y ker(A).

Fall 2003 #6; Fall 2011 #11; Fall 2008 #6; Fall 2007 #6. State and prove the Rank-Nullity Theorem.

Let T : V W be a linear mapping, where V is finite dimensional. Then

dim(V ) = dim(Ker(T )) + dim(Im(T )).

Proof: Note that the images of a basis of V will span Im(T ), hence Im(T ) is finite dimensional. Choose
a basis w1 , . . . , wn of Im(T ). There exist preimages v1 , . . . , vn with wi = T (vi ) for 1 i n. Select a basis
u1 , . . . , uk of Ker(T ). The result will follow once we show that u1 , . . . , uk , v1 , . . . , vn is a basis of V .
Let v ∈ V. Since T(v) ∈ Im(T), there exist scalars b₁, . . . , bₙ such that
T(v) = b₁w₁ + ··· + bₙwₙ.
Then
T(b₁v₁ + ··· + bₙvₙ − v) = 0,
so b₁v₁ + ··· + bₙvₙ − v ∈ Ker(T) and there exist scalars a₁, . . . , a_k such that
b₁v₁ + ··· + bₙvₙ − v = a₁u₁ + ··· + a_ku_k.
Thus u₁, . . . , u_k, v₁, . . . , vₙ span V.
Now let a₁, . . . , a_k, b₁, . . . , bₙ be scalars such that
a₁u₁ + ··· + a_ku_k + b₁v₁ + ··· + bₙvₙ = 0.
Applying T,
b₁w₁ + ··· + bₙwₙ = 0.
Since w₁, . . . , wₙ are linearly independent, bᵢ = 0 for 1 ≤ i ≤ n. Then
a₁u₁ + ··· + a_ku_k = 0.
Since u₁, . . . , u_k are linearly independent, aᵢ = 0 for 1 ≤ i ≤ k. Thus u₁, . . . , u_k, v₁, . . . , vₙ are linearly independent and thus a basis for V.

Fall 2001 #7; Fall 2002 #7; Fall 2012 #12. Let T : V W be a linear transformation of finite
dimensional vector spaces. Define the transpose of T and then prove the following:
1. (Im(T))⁰ = ker(Tᵗ)
2. Rank(T) = Rank(Tᵗ), where the rank of a linear transformation is the dimension of its image.

The transpose of T is the linear map Tᵗ : W' → V' defined by Tᵗ(f) = f ∘ T. In Petersen's book, this is called the dual map of T.

1. Both inclusions follow directly from the definitions: f ∈ ker(Tᵗ) means f ∘ T = 0, which holds exactly when f vanishes on Im(T).

2. Use the Dimension (Rank-Nullity) Theorem for Tᵗ together with part 1 and the fact that dim W + dim W⁰ = dim V for any subspace W ⊂ V (see Spring 2002 #8 below), applied to the subspace Im(T) of the codomain.


Fall 2001 #10; Winter 2002 #9; Fall 2008 #7. Let V be a complex vector space and let T : V V be
a linear map. Let v1 , . . . , vn be non-zero vectors in V , each an eigenvector for a different eigenvalue. Prove
that {v1 , . . . , vn } is linearly independent.

We proceed by induction. The statement is clear for n = 1. Now suppose n ≥ 2 and the desired statement holds for n − 1 vectors. Suppose a₁v₁ + ··· + aₙvₙ = 0. Applying T,
a₁λ₁v₁ + ··· + aₙλₙvₙ = T(0) = 0 = λₙ(a₁v₁ + ··· + aₙvₙ).
It follows that
a₁(λ₁ − λₙ)v₁ + ··· + a_{n−1}(λ_{n−1} − λₙ)v_{n−1} = 0.
Since v₁, . . . , v_{n−1} are linearly independent by the induction hypothesis, and λᵢ − λₙ ≠ 0 for all 1 ≤ i ≤ n − 1, aᵢ = 0 for 1 ≤ i ≤ n − 1. Thus
aₙvₙ = 0,
so since vₙ ≠ 0, aₙ = 0. Hence v₁, . . . , vₙ are linearly independent, which completes the induction.

Winter 2002 #11; Winter 2006 #10; Spring 2004 #10; Spring 2003 #8. Let V be a finite
dimensional complex inner product space and let T : V V be a linear map. Prove that there exists an
orthonormal ordered basis for V such that the matrix representation of T with respect to this basis is upper
triangular.

Suppose V has dimension n. We show by induction on n that there exists a flag of invariant subspaces
{0} ⊂ V₁ ⊂ V₂ ⊂ ··· ⊂ V_{n−1} ⊂ V,
where dim(V_k) = k and each V_k is T-invariant. Clearly if n = 1, then V₁ = V and we are finished.
Now assume inductively we have such a flag for all (n−1)-dimensional spaces. Select an eigenvalue/eigenvector pair T*(v) = λv for the adjoint T* (such a pair exists by the fundamental theorem of algebra applied to the characteristic polynomial of T*) and define V_{n−1} = v^⊥ = {x ∈ V : (x|v) = 0}. Since V is n-dimensional, v^⊥ is (n−1)-dimensional, and for any x ∈ V_{n−1},
(T(x)|v) = (x|T*(v)) = (x|λv) = λ̄(x|v) = 0.
Thus V_{n−1} is T-invariant. Applying the induction hypothesis to T restricted to V_{n−1}, there exists a flag {0} ⊂ V₁ ⊂ ··· ⊂ V_{n−1}, where dim(V_k) = k and T(V_k) ⊂ V_k. Since V_{n−1} ⊂ V and T : V → V, we have the desired flag of V immediately, completing the induction.
Now choose unit vectors e_k ∈ V_k ∩ V_{k−1}^⊥ for 1 ≤ k ≤ n (with V₀ = {0} and Vₙ = V). Then these vectors form an orthonormal basis of V. Since T(e_k) ∈ V_k, we can express T(e_k) as a linear combination of e₁, . . . , e_k for each k. Hence we obtain an upper triangular matrix representation for T with respect to the basis {e₁, . . . , eₙ}.

Fall 2002 #9; Winter 2006 #7. Let V be a complex inner product space. State and prove the Cauchy-
Schwarz inequality.

The Cauchy-Schwarz inequality says for any x, y ∈ V,
|(x|y)| ≤ ||x|| ||y||.
Proof: If y = 0, the inequality is obvious. Otherwise, define
z = x − proj_y(x) = x − ((x|y)/(y|y)) y.
Then
(z|y) = (x|y) − (x|y) = 0,


so z is orthogonal to y. Applying the Pythagorean theorem to
x = ((x|y)/(y|y)) y + z,
we get
||x||² = |(x|y)|²/||y||² + ||z||² ≥ |(x|y)|²/||y||².
Thus
||x|| ||y|| ≥ |(x|y)|.

Spring 2003 #7; Spring 2004 #7. Let V be a finite dimensional real vector space. Let W1 and W2 be
subspaces of V . Prove the following:
1. W₁⁰ ∩ W₂⁰ = (W₁ + W₂)⁰.
2. (W₁ ∩ W₂)⁰ = W₁⁰ + W₂⁰.

1. Let f ∈ W₁⁰ ∩ W₂⁰. Then for any x ∈ W₁, f(x) = 0, and for any y ∈ W₂, f(y) = 0. Thus for any x + y ∈ W₁ + W₂, with x ∈ W₁ and y ∈ W₂,
f(x + y) = f(x) + f(y) = 0 + 0 = 0.
Hence f ∈ (W₁ + W₂)⁰. Conversely, suppose f ∈ (W₁ + W₂)⁰. Then for any x ∈ W₁, x ∈ W₁ + W₂, so f(x) = 0, hence f ∈ W₁⁰. Likewise, f ∈ W₂⁰, so f ∈ W₁⁰ ∩ W₂⁰. Thus we have shown part 1.
2. The inclusion W₁⁰ + W₂⁰ ⊂ (W₁ ∩ W₂)⁰ is immediate: if f ∈ W₁⁰ and g ∈ W₂⁰, then for any x ∈ W₁ ∩ W₂,
(f + g)(x) = f(x) + g(x) = 0 + 0 = 0.
For the reverse inclusion, compare dimensions. For any subspace W ⊂ V we have dim W⁰ = dim V − dim W (see Spring 2002 #8 below). Using part 1 and the dimension formula for a sum of subspaces,
dim(W₁⁰ + W₂⁰) = dim W₁⁰ + dim W₂⁰ − dim(W₁⁰ ∩ W₂⁰) = dim W₁⁰ + dim W₂⁰ − dim(W₁ + W₂)⁰
= (dim V − dim W₁) + (dim V − dim W₂) − (dim V − dim(W₁ + W₂)) = dim V − dim(W₁ ∩ W₂) = dim(W₁ ∩ W₂)⁰.
Since W₁⁰ + W₂⁰ is a subspace of (W₁ ∩ W₂)⁰ of the same (finite) dimension, the two are equal. Thus we have shown part 2.

Fall 2003 #8b. Let T be a linear transformation from a finite dimensional vector space V with an inner product to a finite dimensional vector space W also with an inner product. Show that the kernel (null space) of T is the orthogonal complement of the range of the adjoint T*.

By definition,
ker(T) = {x ∈ V : Tx = 0},
and
im(T*)^⊥ = {x ∈ V : (x|T*z) = 0 for all z ∈ W}.
Fix x ∈ V and use that (Tx|z) = (x|T*z) for all z ∈ W. If x ∈ ker(T), then (x|T*z) = (Tx|z) = 0 for all z ∈ W, so x ∈ im(T*)^⊥. Conversely, if x ∈ im(T*)^⊥, then 0 = (x|T*z) = (Tx|z) for all z ∈ W. Picking z = Tx, we see that Tx = 0, so x ∈ ker(T). Thus
ker(T) = im(T*)^⊥.
Replacing T in the above argument by T* and using that T** = T, we obtain
ker(T*) = im(T)^⊥.

Spring 2006 #8; Fall 2009 #9. If A ∈ M_{2n+1}(R) is such that AAᵗ = Id_{2n+1}, then prove that one of 1 or −1 is an eigenvalue of A.


Note that det(A − λI) is a real polynomial of odd degree 2n + 1 in λ. By the intermediate value theorem, it has a real root λ. Thus there exists v ∈ R^{2n+1} with v ≠ 0 such that Av = λv. Now
(v|v) = (AᵗAv|v) = (Av|Av) = (λv|λv) = λ²(v|v).
Since v ≠ 0, (v|v) ≠ 0, hence λ² = 1. Since λ is real, λ = ±1.

Spring 2007 #1. Spring 2011 #6. Let V and W be finite dimensional real inner product spaces, and let A : V → W be a linear transformation. Let w ∈ W. Show that the elements v ∈ V for which the norm ||Av − w|| is minimal are exactly the solutions to the equations A*Ax = A*w.

Define S = Im(A) and let {e₁, . . . , eₙ} be an orthonormal basis for S. The vector in S closest to w is
s := proj_S(w) := (w|e₁)e₁ + ··· + (w|eₙ)eₙ,
and it is the unique closest point (Orthogonal Sum Decomposition above). A straightforward calculation shows that w − s ∈ S^⊥. Consequently the minimizers of ||Av − w|| over v ∈ V are exactly the v with Av = s.
Using that w − s ∈ S^⊥, for any z ∈ S, (z | w − s) = 0. Let x ∈ V be such that Ax = s. Then for any y ∈ V, Ay ∈ S, thus
0 = (Ay | Ax − w) = (y | A*(Ax − w)),
hence A*(Ax − w) = 0. This implies A*Ax = A*w.
Conversely, suppose A*Ax = A*w. Let s' = Ax. Then by the above calculation in reverse, w − s' ∈ S^⊥. Thus for any t ∈ S, t − s' and w − s' are perpendicular, so
||s' − w||² ≤ ||s' − w||² + ||s' − t||² = ||t − w||²,
hence ||s' − w|| ≤ ||t − w||. Thus v = x is a minimizer of ||Av − w||.
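In coordinates this is the usual normal-equation characterization of least squares; a small NumPy sketch follows, with an arbitrary assumed matrix and vector as test data.

import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # 3x2, full column rank
w = np.array([1.0, 2.0, 2.0])

# solve the normal equations A* A x = A* w
x = np.linalg.solve(A.T @ A, A.T @ w)

# compare with a direct least-squares solver
x_lstsq = np.linalg.lstsq(A, w, rcond=None)[0]
print(x, x_lstsq)                 # the two solutions agree
print(A.T @ (A @ x - w))          # residual is orthogonal to Im(A): approximately 0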

Spring 2008 #11. Spring 2011 #2. Show that a positive power of an invertible matrix with complex
entries is diagonalizable if and only if the matrix itself is diagonalizable.

Suppose A is an invertible n × n matrix and k is a positive integer. First suppose A is diagonalizable: there exists an invertible matrix V such that V A V^{−1} is diagonal. It follows that (V A V^{−1})^k = V A^k V^{−1} is also diagonal. Hence A^k is diagonalizable.
Conversely, suppose A^k is diagonalizable. Then μ_{A^k}(t) = (t − λ₁)···(t − λ_m) with the λᵢ distinct. Since μ_{A^k}(A^k) = 0,
(A^k − λ₁)···(A^k − λ_m) = 0.
The λᵢ are eigenvalues of A^k, thus since A is invertible, λᵢ ≠ 0. It follows that the minimal polynomial of A divides
p(t) := (t^k − λ₁)···(t^k − λ_m).
Suppose p(r) = 0, so there exists j with r^k = λⱼ. Since λⱼ ≠ 0, r ≠ 0. Note that
Dp(r) = Σ_{i=1}^m (k r^{k−1}) (r^k − λ₁)···(r^k − λᵢ)^∧···(r^k − λ_m)
= (k r^{k−1}) (λⱼ − λ₁)···(λⱼ − λⱼ)^∧···(λⱼ − λ_m) ≠ 0,
where the hat denotes an omitted factor: every summand with i ≠ j contains the factor (r^k − λⱼ) = 0, so only the i = j term survives, and it is a product of nonzero numbers.


Hence p has no multiple roots, thus the minimal polynomial of A (which divides p) has no multiple roots. Since we are working over C, the minimal polynomial of A necessarily factors as
μ_A(t) = (t − μ₁)···(t − μ_s),
with the μᵢ distinct, so A is diagonalizable.

Winter 2011 #4. Fall 2012 #11. Show that an n by n matrix can be factored as A = LU where L is
a lower triangular matrix with ones along the diagonal and U is an upper triangular matrix provided each
determinant det(Aj ) (for j {1, . . . , n 1}) is non-zero (where Aj is the submatrix of A consisting of the
first j rows and first j columns of A).

Assume det(Aⱼ) ≠ 0 for each j. We just need to show that Gaussian elimination does not need pivoting; the elimination steps are then encoded by unit lower-triangular matrices whose product (inverted) gives L. We prove by induction on k that the kth step does not need pivoting. This holds for k = 1, since A₁ = (a₁₁), so a₁₁ ≠ 0. Assume that no pivoting was necessary for the first k steps (1 ≤ k ≤ n − 1). In this case, we have
E_k ··· E₁ A = U_k,
where E_k ··· E₁ is a unit lower-triangular matrix (a product of elimination matrices) and the first k columns of U_k are in upper-triangular form. Multiplying on the left by unit lower-triangular matrices does not change the leading principal minors, so the (k+1)×(k+1) leading principal minor of U_k equals det(A_{k+1}) ≠ 0; since that submatrix is upper triangular with the previous nonzero pivots on its diagonal, (U_k)_{k+1,k+1} ≠ 0. Thus we can multiply U_k by unit lower-triangular matrices on the left to eliminate the entries (U_k)_{i,k+1} for k + 2 ≤ i ≤ n. Thus the (k+1)st step of Gaussian elimination does not need pivoting, completing the induction.
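A sketch of the elimination argument in code, assuming (as in the problem) that the leading principal minors are nonzero; the test matrix is an arbitrary example.

import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without row exchanges; requires nonzero
    leading principal minors so every pivot U[k, k] is nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]        # the pivot U[k, k] is nonzero
            L[i, k] = m
            U[i, k:] -= m * U[k, k:]
    return L, U

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))                                     # True
print(np.allclose(np.tril(L), L), np.allclose(np.triu(U), U))    # True True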

Fall 2011 #10. Spring 2013 #9. Let A be a real orthogonal matrix. Show that A is similar to a block diagonal matrix, where each block is a scalar (which is a real eigenvalue of A) or of the form
T_{ρ,θ} = ρ [ cos(θ)  −sin(θ) ; sin(θ)  cos(θ) ],
where ρ, θ ∈ R.

Any complex eigenvalues of A come in conjugate pairs, and since A is orthogonal, all eigenvalues of A have modulus 1, so they take the form e^{iθ} for some θ. Since A is normal, it is diagonalizable over C: there exists an invertible V ∈ Mₙ(C) such that V A V^{−1} is a diagonal matrix consisting of the eigenvalues of A.
Suppose without loss of generality that λ = e^{iθ} and λ̄ = e^{−iθ} are the first two eigenvalues of A in the diagonal form. Define
U = (1/√2) [ 1  i ; 1  −i ]
and note that
U^{−1} [ λ  0 ; 0  λ̄ ] U = [ cos(θ)  −sin(θ) ; sin(θ)  cos(θ) ]
has the desired form (a direct computation). Real eigenvalues ±1 are left alone as 1×1 scalar blocks.
Also, the columns of V^{−1} (the eigenvector basis) can be chosen so that the eigenvector column for λ̄ is the complex conjugate of the eigenvector column v for λ. Conjugating the change of basis by the block-diagonal matrix built from the blocks U then yields a real invertible change of basis (its relevant columns are, up to scaling, the real and imaginary parts of v), so A is similar over R to a block diagonal matrix of the desired form.


Other Linear Algebra Qualifying Exam Problems

Fall 2001 #8. Spring 2004 #8 (similar) Let T : R3 R3 be the rotation by 60 degrees counterclockwise
about the plane perpendicular to the vector (1, 1, 1) and S : R3 R3 be the reflection about the plane
perpendicular to the vector (1, 0, 1). Determine the matrix representation of S T in the standard basis
{e1 , e2 , e3 }. You do not have to multiply the resulting matrices but you must determine any inverses that
arise.

Let v₁ = (1,1,1)/√3, v₂ = (1,0,−1)/√2, and v₃ = v₁ × v₂ = (−1,2,−1)/√6. Because v₁ is fixed under T and v₂ and v₃ are rotated counterclockwise by 60 degrees in the plane they span, the matrix representation of T in the basis β = {v₁, v₂, v₃} is
[T]_β = [ 1  0  0 ; 0  cos(60°)  −sin(60°) ; 0  sin(60°)  cos(60°) ].
Let ε be the standard basis. The change of basis matrix is [Id]_{ε,β} = (v₁ v₂ v₃) (columns). Since the columns are orthonormal, [Id]_{ε,β} is orthogonal, so [Id]_{β,ε} = ([Id]_{ε,β})^{−1} = ([Id]_{ε,β})ᵗ.
Do the same with w₁ = (1,0,1)/√2 and two other perpendicular unit vectors, and call this basis γ. The reflection just takes w₁ ↦ −w₁ and leaves the others unchanged, so the matrix of S with respect to γ is diag(−1, 1, 1). The desired matrix representation is
[S ∘ T]_ε = [S]_ε [T]_ε = [Id]_{ε,γ} [S]_γ [Id]_{γ,ε} [Id]_{ε,β} [T]_β [Id]_{β,ε}.
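A numerical sketch of this change-of-basis computation; it assumes the counterclockwise orientation is taken with respect to the ordered pair {v₂, v₃} (the opposite convention would flip the sign of the sine entries), and the reflection is written in the equivalent Householder form S = I − 2 w₁w₁ᵗ.

import numpy as np

def unit(v):
    v = np.array(v, dtype=float)
    return v / np.linalg.norm(v)

# orthonormal basis adapted to the rotation axis (1, 1, 1)
v1 = unit([1, 1, 1])
v2 = unit([1, 0, -1])
v3 = np.cross(v1, v2)
P = np.column_stack([v1, v2, v3])          # change of basis, orthogonal

c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
T_beta = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
T = P @ T_beta @ P.T                       # rotation by 60 degrees about (1,1,1)

# reflection about the plane with unit normal w1 = (1, 0, 1)/sqrt(2)
w1 = unit([1, 0, 1])
S = np.eye(3) - 2.0 * np.outer(w1, w1)

print(S @ T)                               # matrix of S o T in the standard basis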

Fall 2002 #8. Let T be the rotation of an angle 60 degrees counterclockwise about the origin in the plane
perpendicular to (1, 1, 2) in R3 .
i. Find the matrix representation of T in the standard basis. Find all eigenvalues and eigenspaces of T .
ii. What are the eigenvalues and eigenspaces of T if R3 is replaced by C3 ?
[You do not have to multiply any matrices out but must compute any inverses.]

(i) Let v₁ = (1,1,2)/√6 be the unit normal of the plane of rotation. Select a unit vector v₂ perpendicular to v₁, e.g. v₂ = (1,−1,0)/√2, and let v₃ = v₁ × v₂ = (1,1,−1)/√3. Call this orthonormal basis β and the standard basis ε. Then, as in the previous problem,
[T]_β = [ 1  0  0 ; 0  1/2  −√3/2 ; 0  √3/2  1/2 ],
[Id]_{ε,β} = [v₁ v₂ v₃], and [T]_ε = [Id]_{ε,β} [T]_β ([Id]_{ε,β})ᵗ. Over R the only eigenvalue is 1, with eigenspace span{(1,1,2)}: the 60-degree rotation in the perpendicular plane has no real eigenvalues.
(ii) Over C, the eigenvalues of [T]_β (hence of T) are 1, e^{iπ/3}, e^{−iπ/3}. The eigenspace for 1 is still span{v₁}, and the eigenspaces for e^{±iπ/3} are spanned by v₂ ∓ i v₃ (the usual complex eigenvectors of a 2×2 rotation block).

Spring 2002 #8. Let V be a finite dimensional real vector space. Let W V be a subspace and

W⁰ := {f : V → F linear | f = 0 on W}.
Prove that
dim(V) = dim(W) + dim(W⁰).

Let w₁, . . . , w_k be a basis for W, and extend it to a basis w₁, . . . , w_k, v_{k+1}, . . . , vₙ for V. For 1 ≤ i ≤ n − k define fᵢ ∈ V' on this basis by fᵢ(v_{k+i}) = 1 and fᵢ = 0 on every other basis vector. Then fᵢ ∈ W⁰ and the fᵢ are linearly independent. Also, any f ∈ W⁰ is determined by its values on v_{k+1}, . . . , vₙ (it vanishes on W), so f = Σᵢ f(v_{k+i}) fᵢ. Hence {fᵢ}_{i=1}^{n−k} is a basis for W⁰, so dim(W⁰) = n − k and we get the desired dimension equation.


Spring 2002 #9. Find the matrix representation in the standard basis for either rotation by an angle in
the plane perpendicular to the subspace spanned by the vectors (1, 1, 1, 1) and (1, 1, 1, 0) in R4 . [You do not
have to multiply the matrices out but must compute any inverses.]

Additional perpendicular vectors are (1,−1,0,0) and (1,1,−2,0) (both are orthogonal to (1,1,1,1), to (1,1,1,0), and to each other). Normalize all four vectors, write the rotation by θ in the plane spanned by the two new vectors (fixing the first two), and conjugate by the orthogonal change-of-basis matrix, as in the previous problems.

Spring 2002 #10. Let V be a complex inner product space and W a finite dimensional subspace. Let
v ∈ V. Prove that there exists a unique vector v_W ∈ W such that
||v − v_W|| ≤ ||v − w||
for all w ∈ W. Deduce that equality holds if and only if w = v_W.

Let w₁, . . . , w_k be an orthogonal basis for W. Define
v_W = proj_W v := ((v|w₁)/(w₁|w₁)) w₁ + ··· + ((v|w_k)/(w_k|w_k)) w_k.
Let s = v − v_W. Then (s|wᵢ) = 0 for all 1 ≤ i ≤ k. Hence s ∈ W^⊥. Thus for any w ∈ W, s and v_W − w are perpendicular. By the Pythagorean theorem,
||v − w||² = ||v − v_W||² + ||v_W − w||² ≥ ||v − v_W||²,
hence
||v − w|| ≥ ||v − v_W||
with equality if and only if w = v_W. This equality case forces uniqueness.

Winter 2002 #8. Let T : V W and S : W X be linear transformations of finite dimensional real
vector spaces. Prove that

rank(T ) + rank(S) dim(W ) rank(S T ) max{rank(T ), rank(S)}.

From the Dimension Theorem,

dim(V ) = rank(T ) + nullity(T ),

dim(W ) = rank(S) + nullity(S),


and
dim(V ) = rank(S T ) + nullity(S T ).
Thus
rank(T ) + rank(S) dim(W ) = rank(S T ) + nullity(S T ) nullity(T ) nullity(S).
Show that
nullity(S T ) nullity(S) + nullity(T )
by appealing to a basis. This gives the first inequality.
Also, by appealing to a basis, we can check rank(S T ) min{rank(T ), rank(S)}.

Winter 2002 #10. Let V be a finite dimensional complex inner product space and f : V V a linear
functional. Show that there exists a vector w V such that f (v) = (v|w) for all v V .

Let {v1 , . . . , vn } be an orthonormal basis for V . Let {f1 , . . . , fn } be the associated dual basis. In
particular, this means fi (v) = (v, vi ) for all v V . There exists i with

f = 1 f1 + + n fn .

81
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Thus
n
X n
X n
X
f (v) = i fi (v) = i (v, vi ) = (v, i vi ).
i=1 i=1 i=1

Thus if we take
w = 1 v1 + + n vn ,
then for any v V , f (v) = (v, w) for all v V .

Fall 2003 #9. Consider a 3 by 3 real symmetric matrix with determinant 6. Assume (1, 2, 3) and (0, 3, 2)
are eigenvectors with eigenvalues 1 and 2.
(a) Give an eigenvector of the form (1, x, y) for some real x, y which is linearly independent of the two
vectors above.
(b) What is the eigenvalue of this eigenvector?

(a) By the spectral theorem, the eigenspaces of the eigenvalues are orthogonal. Thus the cross product
of the given vectors, scaled appropriately, is the answer.

(b) The eigenvalue must be 3, since the determinant is the product of the eigenvalues.

Fall 2003 #10. (a) Let t R such that t is not an integer multiple of . For the matrix
 
cos(t) sin(t)
A=
sin(t) cos(t)

prove that there does not exist a real valued matrix B such that BAB 1 is a diagonal matrix.
(b) Do the same for the matrix  
1
A=
0 1
where R \ {0}.

(a) If there were such a matrix, the diagonal entries would be the eigenvalues. Thus the eigenvalues
would be real. However, the eigenvalues satisfy

(cos(t) )2 + sin2 (t) = 0,

so
2 2 cos(t) + 1 = 0,
and
= cos(t) i sin(t) = eit ,
which are not real.

(b) Just check directly. Or note that the geometric multiplicity of the only eigenvalue 1 is 1, so A cant
be diagonalizable.

Spring 2003 #9. Let A M3 (R) satisfy det(A) = 1 and At A = I = AAt where I is the 3 3 identity
matrix. Prove that the characteristic polynomial of A has 1 as a root.

Show complex eigenvalues come in conjugate pairs. Then since det(A) = 1 is the product of the eigenval-
ues, and the product of conjugates is positive, there exists a real positive root. Then for a positive eigenvalue
with eigenvector v 6= 0,
(v, v) = (At Av, v) = (Av, Av) = ||2 (v, v).

82
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Since (v, v) 6= 0, and is a positive real number, = 1.

Spring 2003 #10. Let V be a finite dimensional real inner product space and T : V V a hermitian
linear operator. Suppose the matrix representation of T 2 in the standard basis has trace zero. Prove that T
is the zero operator.

Let A be the matrix representation of T . Since T is hermitian, A is diagonalizable. Thus there exists
an invertible matrix P such that A = P DP 1 with D a diagonal matrix consisting of eigenvalues of T . It
follows that
A2 = P D2 P 1 ,
thus the trace of A2 is
n
X
tr(P D2 P 1 ) = tr(P P 1 D2 ) = tr(D2 ) = 2i .
i=1

Since this is assumed to be 0, i = 0 for all i. It follows that D = 0, hence A = 0, so T = 0.

Fall 2004 #8. Let A = (aij ) be a real, n n symmetric matrix and let Q(v) = v Av (ordinary dot product)
be the associated quadratic form defined for v = (v1 , . . . , vn ) Rn .
1. Show that Qv = 2Av where Qv is the gradient at v of the function Q.
2. Let M be the minimum value of Qv on the unit sphere S n = {v R : ||v|| = 1} and let u S n be the
vector such that Q(u) = M . Prove, using Lagrange multipliers, that u is an eigenvector of A with eigenvalue
M.

Let Ai be the ith column of A. By the product rule,


Q
(v) = v ai + ei Av.
vi
Since A is symmetric,
ei Av = eti Av = eti At v = (Aei )t v = ati v = ai v,
thus
Q
(v) = 2ai v = 2ati v,
vi
and
Qv = 2Av.

(b) Let g(u) = ||u||2 1. Then we are trying to minimize Qu subject to the constraint g(u) = 0. Lagrange
multipliers yields the equation
Qu = g(u).
From part (a), this becomes
2Au = (2u).
Thus Au = u, so u is an eigenvector of A. Then by definition of M ,

M = Qu = u Au = u u = |u|2 = .

Fall 2004 #9. Let T : Cn Cn be a linear transformation and P (X) a polynomial such that P (T ) = 0.
Prove that every eigenvalue of T is a root of P (X).

Let be an eigenvalue of T with an eigenvector v. A simple induction shows (P (T ))(v) = P ()v. Since
the expression on the left is zero and v 6= 0, P () = 0.

83
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Fall 2004 #10. Let V = Rn and let T : V V be a linear transformation. For C, the subspace

V () = {v V : (T I)N v = 0 for some N 1}

is called a generalized eigenspace.


1. Prove that there exists a fixed number M such that V () = ker((T I)M ).
2. Prove that if 6= , then V () V () = {0}. Hint: use the following equation by raising both sides
to a high power:
T I T I
+ = I.

1. Let {v1 , . . . , vn } be a basis for V (). Let N be the maximum over i of the N such that (T I)N vi = 0
for each i. Then for any v V (), (T I)N v = 0. Hence V () = ker((T I)M ).

2. Suppose 6= . Suppose v V () V (). Note

T I T I
+ = I.

Raising both sides to the power 2M , note that every expression on the left has a factor of (T I)M or
(T I)M . Thus when evaluating at v, the left hand side is 0, but the right hand side is v.

Spring 2004 #9. Let V be a finite dimensional real inner product space under ( , ) and T : V V a linear
operator. Show the following are equivalent:
a) (T x, T y) = (x, y) for all x, y V ,
b) ||T (x)|| = ||x|| for all x V ,
c) T T = IdV , where T is the adjoint of T .
d) T T = IdV .
[T is an orthogonal map.]

(a) = (b). Then for any x V .

||T (x)||2 = (T x, T x) = (x, x) = ||x||2 .

Since the norm is non-negative, this yields (b).


(b) = (a). Then for any x, y V ,

(T x+T y, T x+T y) = (T x, T x+T y)+(T y, T x+T y) = (T x, T x)+(T x, T y)+(T y, T x)+(T y, T y) = (T x, T x)+(T y, T y)+2(T x, T y

thus

2(T x, T y) = (T (x + y), T (x + y)) (T x, T x) (T y, T y) = (x + y, x + y) (x, x) (y, y) = 2(x, y).

Hence
(T x, T y) = (x, y).
(b) = (c). For any x V ,

((T T IdV )x, (T T IdV )x) = (T T x, T T x) 2(T T x, IdV x) + (IdV x, IdV x)

= (T x, T T T x) 2(T x, T x) + (x, x) = (x, T T x) 2(x, x) + (x, x) = 0.


Thus (T T IdV )x = 0 for all x, so T T = IdV .
(c) = (b). For any x V ,
(T (x), T (x)) = (x, T T x) = (x, x).
The equivalence of (b) and (d) follows analogously to the equivalence of (b) and (c).

84
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Fall 2005 #6. (a) Prove that if P is a real-coefficient polynomial and if A is a real symmetric matrix, then
the eigenvalues of P (A) are exactly the numbers P (), where is an eigenvalue of A.
(b) Use part (a) to prove that if A is a real symmetric matrix, then A2 is non-negative definite.
(c) Check part (b) by verifying directly that det(A2 ) and trace(A2 ) are non-negative when A is real
symmetric.

(a) Send A to diagonal matrix. P (A) is similar to P (D).

(b) A2 is non-negative definite if for all v, v t A2 v is non-negative.


All eigenvalues of A are real, so all eigenvalues of A2 are non-negative.
Suppose P 1 AP = D. For any v V , there exists w with v = P w. Then
n
X
v t A2 v = wt (P 1 A2 P )w = wt D2 w = 2i wi2 0.
i=1

(c) det(A2 ) = (det(A))2 0.


n
X n X
X n
trace(A2 ) = trace(( aik akj )) = a2ik 0.
k=1 i=1 k=1

Fall 2005 #9. Suppose V1 and V2 are subspaces of a finite-dimensional vector space V .
(a) Show that
dim(V1 V2 ) = dim(V1 ) + dim(V2 ) dim(span(V1 , V2 ))
where span(V1 , V2 ) is by definition the smallest subspace that contains both V1 and V2 .
(b) Let n = dim(V ). Use part (a) to show that, if k < n, then an intersection of k subspaces of dimension
n 1 always has dimension at least n k. (Suggestion: Do induction on k.)

(a) Let v1 , . . . , vk be a basis for V1 V2 . Then there exists a basis v1 , . . . , vk , w1 , . . . , wr for V1 and
/ V2 and wi0
v1 , . . . , vk , w10 , . . . , ws0 for V2 . It follows that wi / V1 . Hence v1 , . . . , vk , w1 , . . . , wr , w10 , . . . , ws0 is
a basis for span(V1 , V2 ). This gives the desired equality.

(b) k = 1 obvious. Assume


T true for k. Let k + 1 < n. Let V1 , . . . , Vk+1 be subspaces of V of dimension
n 1. Then since span( ik Vi , Vk+1 ) V ,
\ \ \
dim( Vi ) = dim( Vi ) + dim(Vk+1 ) dim(span( Vi , Vk+1 ))
ik+1 ik ik

(n k) + (n 1) n n (k + 1),
which completes the induction.

Fall 2005 #10. (a) For each n = 2, 3, 4, . . ., is there an n n matrix A with An1 6= 0 but An = 0?
(b) Is there an n n upper triangular matrix A with An 6= 0 but An+1 = 0?

(a). Yes, take the linear map which sends en to 0 and ei to ei+1 for each i < n.
(b). I assume the field has characteristic 0.
No. Because A is upper triangular, An+1 is diagonal and (An )ii = (Aii )n . Thus if An+1 = 0, then Aii = 0
for all i, in which case A2 = 0. If n 2, then An = 0. Otherwise, n = 1, and the statement is obvious.

85
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Spring 2005 #1. Given n 1, let tr : Mn (C) C denote the trace of a matrix:
n
X
tr(A) = Ak,k .
k=1

(a) Determine a basis for the kernel (or null-space) of tr.


(b) For X Mn (C), show that tr(X) = 0 if and only if there exists an integer m and matrices
A1 , . . . , Am , B1 , . . . , Bm Mn (C) so that
m
X
X= Aj B j B j Aj .
i=1

(a) Eij , i 6= j and E11 Eii for i 2.


Dimension of image of trace is 1. Thus the dimension of the kernel of the trace is dim(Mn (C))
dim(range(trace)) = n2 1. Just need to show linear independence.

(b) = is obvious since tr(AB) = tr(BA).


=: Suppose A Mn (C) such that tr(A) = 0. Then we can write A in terms of the basis from part (a).
Note Ei,j = [Ei,i , Ei,j ] for i 6= j and E1,1 Ei,i = [E1,j , Ej,1 ] for all i.

Spring 2005 #2. Let V be a finite-dimensional vector space, and let V denote the dual space; that is,
the space of linear maps : V C. For a set W V , let

W = { V : (w) = 0 for all w W }.

For a subset U V , let



U = {v V : (v) = 0 for all U }.
(a) Show that for any subset W V , (W ) = span(W ). Recall that the span of a set of vectors is the
smallest vector sub-space that contains these vectors.
(b) Let W V be a linear subspace. Give an explicit isomorphism between (V /W ) and W . Show
that it is an isomorphism.

(a) Suppose w W . Then for all W , (w) = 0, hence w (W ). Since (W ) is a subspace


and Span(W ) is the smallest subspace containing W we have (W ) Span(W ).
Conversely, select a basis v1 , . . . , vk for Span(W ) and then extend it to a basis v1 , . . . , vn for W
P.n Let i
be the corresponding dual basis vectors. Suppose w (W ). Then there exist ai with w = i=1 ai vi ,
with ai C. Since j W for k + 1 j n,

aj = j (w) = 0.

Hence w Span(W ), so Span(W ) = (W ).

(b) Define : (V /W ) W by (T ) = [v 7 T (v + W )]. If w W , then ((T ))(w) = T (0) = 0, so


(T ) W . Thus this map is well defined. If (T ) = 0, then T (v + W ) = 0 for all v V , so T = 0. Hence
is injective. Also,

dim((V /W ) ) = dim(V /W ) = dim(V ) dim(W ) = dim(W ).

Hence must be surjective as well. Hence is an isomorphism.

Spring 2005 #3. Let A be a Hermitian-symmetric n n complex matrix. Show that if (Av|v) 0 for all
v Cn , then there exists an n n matrix T so that A = T T .

86
UCLA Basic Exam Problems and Solutions Brent Woodhouse

By the spectral theorem for Hermitian operators, we may select an orthonormal basis v1 , . . . , vn and
numbers 1 , . . . , n R such that Avi = i vi . We claim i 0 for all i. Note
i (vi |vi ) = (Avi |vi ) 0.

Define T by T (vi ) = i vi . Then T (vi ) = i vi , hence A(vi ) = T T (vi ), so A = T T .

Spring 2005 #4. Let A = Mn (C) denote the set of all n n matrices with complex entries. We say that
I A is a two-sided ideal in A if
(i) For all A, B I, A + B I.
(ii) For all A I and B A, AB and BA belong to I.
Show that the only two-sided ideals in A are {0} and A itself.

Let I A be a two-sided ideal. Suppose there is a nonzero A = (aij ) I. By multiplying A by a suitable


permutation matrix and diagonal matrix we see by (ii) that the matrix Eij I. Thus by (i), Mn (C) I.

Spring 2006 #7. A matrix T (with entries, say, in the field C of complex numbers) is diagonalizable if
there exists a non-singular matrix S such that ST S 1 is diagonalizable. Prove that if a, C with a 6= 0,
then the following matrix is not diagonalizable:

1 a 0
T = 0 1 a .
0 0

Note the eigenvalues are 1 and and both have eigenspaces of dimension 1. Thus the geometric multi-
plicities of the eigenvalues do not add to 3, the dimension of the matrix, hence it is not diagonalizable.

Spring 2006 #10. Let Y be an arbitrary set of commuting matrices in Mn (C) (i.e., AB = BA for all
A, B Y ). Prove that there exists a non-zero vector v Cn which is a common eigenvector of all elements
of Y .

We first work out the case of two matrices with AB = BA. Let be an eigenvalue of A and let V be
the eigenspace of . If v V , then
A(Bv) = B(Av) = (Bv).
Hence Bv V , so V is invariant under B. Consider the restriction of B to V . This restricted operator has
at least one eigenvalue and eigenvector (say w V is an eigenvector). Then w is an eigenvector for both A
and B.
Inductively, suppose v is a common eigenvector for A1 , . . . , An1 with eigenvalues 1 , . . . , n1 . Then
the commutative property implies the eigenspaces corresponding to each eigenvalue and matrix are invariant
under An . Take the intersection of all such eigenspaces. This is a non-empty set since it contains v. Then the
restriction of An to this intersection has an eigenvalue w, which is then a common eigenvalue for A1 , . . . , An .

Winter 2006 #8. Let T : V W be a linear transformation of finite dimensional real inner product
spaces. Show that there exists a unique linear transformation T t : W V such that
(T (v)|w)W = (v, T t (w))V
for all v V and w W .

Let v1 , . . . , vn and w1 , . . . , wm be orthonormal bases for V and W respectively. Let A be the matrix
representation of T relative to these bases. Define the map T t : W V to have matrix representation At
with respect to these bases. It follows that
(v, T t w)V = (T v, w)W .

87
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Suppose S and S 0 both satisfy

(T v, w)W = (v, Sw)V = (v, S 0 w)V

for all v V , w W .
Then
(v, Sw S 0 w)V = 0
for all v V , hence Sw S 0 w = 0, so S = S 0 .

Fall 2007 #3. Let T be a linear transformation of the vector space V into itself. If T v and v are linearly
dependent for each v V , show that T must be a scalar multiple of the identity.

Take v V and such that T v = v. Then for any w V , T w = cw and T (v + w) = c0 (v + w). This
implies c0 (v + w) = T (v + w) = T v + T w = v + cw. Hence (c0 )v = (c c0 )w. This is only generally true
if V is 1-dimensional or = c = c0 . In either case, T (v) = v for all v V .

Fall 2007 #7. Let A(x) be a function on R whose values are n n matrices. Starting from the definition
that the derivative A0 (x) is the matrix you get by differentiating the entries in A(x), show that when A(x)
is invertible and differentiable for all x, A1 (x) is differentiable, and

(A1 )0 (x) = A1 (x)A0 (x)A1 (x).

Suppose A(x) is invertible and differentiable for all x. Note

A1 = (det(A))1 adj(A),

and since the adjugate is the transpose of the cofactor matrix, which consists of sums of products of elements
of A, the adjugate is differentiable, hence A1 is differentiable.
By the definition of A1 (x),
In = A1 (x)A(x).
Supposing A1 (x) is differentiable, differentiating the i, j entry of each side and combining results,

0 = (A1 )0 (x)A(x) + A1 (x)A0 (x).

Thus
(A1 )0 (x) = A1 (x)A0 (x)A1 (x).

Fall 2007 #10. Suppose that {vj }nj=1 is a basis for the complex vector space Cn .
(a) Show that there is a basis {wj }nj=1 such that (wj |vk ) = jk . Here (, ) is the standard inner product,
(w, v) = w1 v1 + + wn vn .
(b) If the vj s are eigenvectors for a linear transformation T of Cn , show that the wj s are eigenvectors
for T , the adjoint of T with respect to the inner product.

(a) The matrix A = (v1 . . . vn ) is invertible since the vi are linearly independent. Let wj be the jth row
vector of A1 . Then
(wj , vk ) = (A1 A)jk = (In )jk = jk .

(b) Suppose T vj = j vj . Then [T ] = [T ]t . Let B be the diagonal matrix with entries j . Then
[T ]A = AB, so A1 [T ] = BA1 . Taking the transpose and conjugate of both sides,

[T ](A1 )t = [T ]t (A1 )t = (A1 )t B t = (A1 )t B.

88
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Thus T wj = j wj .

Fall 2007 #12. (a) Suppose that x0 < x1 < < xn are points in [a, b]. Define linear functions on P n ,
the vector space of polynomials of degree less than or equal to n, by setting

lj (p) = p(xj )

for j = 0, . . . , n. Show that the set {lj }nj=0 is linearly independent.


(b) Show that there are unique coefficients cj such that
Z b n
X
p(x)dx = cj lj (p)
a j=0

for all p P n .
Q
Set a0 l0 + + an ln = 0. Then taking pi (x) = j6=i (x xj ),
Y
0 = (a0 l0 + + an ln )(pi ) = a0 p(x0 ) + + an p(xn ) = i (xi xj ).
j6=i

Thus ai = 0. This works for any i, hence the lj are linearly independent.

(b) From part (a) and the fact that (Pn ) has dimension n + 1, we see that {lj }nj=0 forms a basis for (Pn ) .
Rb
Since the map which takes p to a p(x)dx is in (Pn ) , then there are unique coefficients cj with the desired
equality.

Spring 2007 #2. Let V, W, Z be n-dimensional vector spaces and T : V W and U : W Z be linear
transformations. Prove that if the composite transformation U T : V Z is invertible, then both T and U
are invertible. (Do not use determinants in your proof!)

Since U T is invertible, it is one-to-one and onto. Clearly this makes U onto. By the Dimension Theorem,
it follows that U is one-to-one. If T is not one-to-one, then U T is not one-to-one, a contradiction. Hence T
is one-to-one, and it follows by the Dimension Theorem that T is onto. Thus U and T are bijections, hence
invertible.

Spring 2007 #3. Consider the space of infinite sequences of real numbers

S = {(a0 , a1 , . . .) : an R, n = 0, 1, 2, . . .}

endowed with the standard operations of addition and scalar multiplication. For each pair of real numbers
A and B, prove that the set of solutions (x0 , x1 , . . .) of the linear recursion

xn+2 = Axn+1 + Bxn

for n 0 is a linear subspace of S of dimension 2.

Clearly the set of solutions forms a subspace. Show that (1, 0, . . .) and (0, 1, . . .) is a basis for the set
of solutions. Clearly any solution is a linear combination of these two. Also, these are necessarily linearly
independent vectors because of the first two components.

Spring 2007 #4. Suppose that A is a symmetric n n real matrix with distinct eigenvalues 1 , . . . , l ,
(l n). Find the sets
X = {x Rn : lim (xt A2k x)1/k exists}
k

89
UCLA Basic Exam Problems and Solutions Brent Woodhouse

and
L = { lim (xt A2k x)1/k : x X},
k

where Rn is identified with the set of real column vectors, and xt denotes the transpose of x.

The answers are X = Rn and L = {1 , . . . , l }.


By the spectral theorem, A is diagonalizable, so there exists an orthonormal basis {v1 , . . . , vn } such that
Avi = i vi (different i than the problem statement). Then for any x X, there exist ai such that

x = a 1 v1 + + a n vn .

We compute
xt A2k x = 21 a1 v12 + + 2n an vn2 .
Suppose i is the dominant eigenvalue. Then if ai 6= 0,

(xt A2k x)1/k = 2 .

(More detail here on exam.)


Picking x = vi for each i, we can recover 2i in this way.

Spring 2007 #5. Let T be a normal linear operator on a finite dimensional complex inner product linear
space V . Prove that if v is an eigenvector of T , then v is also an eigenvector of its adjoint T .

Since T is normal T T T T = 0.

(T v, T v) = (v, T T v) = (v, T T v) = (v, T T v) = (T v, T v).

Note that (T I) is also normal. Thus

0 = ||(T I)v|| = ||(T I) v|| = ||(T I)v||.

Hence T v = v.

Fall 2008 #8. Must the eigenvectors of a linear transformation T : Cn Cn span Cn ? Prove your
assertion.

No, take the nilpotent transformation T (v1 ) = 0, T (v2 ) = v1 . This only has 0 eigenvalues, and eigenvec-
tors (x, 0). Clearly these do not span C2 .

Fall 2008 #9. (a) Prove that any linear transformation T : Cn Cn must have an eigenvector.
(b) Is (a) true for any linear transformation T : Rn Rn ?

(a) Characteristic polynomial has a root.

(b) No, pick a non-trivial rotation.

Fall 2008 #11. Consider the Poisson equation with periodic boundary conditions on [0, 1]:

2u
= f, x (0, 1)
x2
u(0) = u(1).
A second order accurate approximation to the problem is given by the solution to the following system of
equations
Au = x2 f

90
UCLA Basic Exam Problems and Solutions Brent Woodhouse

where
2 1 0 0 1

1 2 1 0 0

0 1 2 1 0
A= ,

.. .. ..

. . .

0 0 1 2 1
1 0 0 1 2
u = [u0 , u1 , . . . , un1 ], f = [f0 , f1 , . . . , fn1 ] and ui u(xi ) with xi = ix, x = 1/n, and fi = f (xi ) for
i = 0, . . . , n 1.
(a) Show that the matrix A is singlar.
(b) What condition must f satisfy so that a solution exists?

(a) Apply it to (1, . . . , 1) to get 0. If the matrix had an inverse, applying the inverse to 0 would yield 0,
not (1, . . . , 1), a contradiction.

(b) Based on part (a), f must satisfy f0 + + fn1 = 0. Then we can reduce the system to an n 1
by n 1 matrix that is not singular, hence a solution exists.

Spring 2008 #8. Assume V is an n-dimensional vector space over the rationals Q, and T is a Q-linear
transformation T : V V such that T 2 = T . Prove that every vector v V can be written uniquely as
v = v1 + v2 such that T (v1 ) = v1 and T (v2 ) = 0.

Write v = T (v) + (v T (v)). Then T (T (v)) = T (v) and T (v T (v)) = T (v) T (v) = 0.
Note that if v = v1 + v2 with T (v1 ) = v1 and T (v2 ) = 0, then T (v) = T (v1 ) = v1 and (v T (v)) =
v T (v1 ) = v v1 = v2 . Hence v1 and v2 are unique.

Spring 2008 #9. Let V be a vector space over R.


(a) Prove that if V is odd dimension, and if T is an R-linear transformation T : V V of V , then T has
a non-zero eigenvector v V .
(b) Show that for every even positive integer n, there is a vector space V over R of dimension n, and an
R-linear transformation T : V V of V , such that there is no non-zero v V satisfying T (v) = v for some
R.

(a) See previous problems. (b) Take a block diagonal matrix consisting of 2 by 2 non-trivial rotations.

Spring 2008 #10. Suppose A is an n n complex matrix such that A has n distinct eigenvalues. Prove
that if B is an n n complex matrix such that AB = BA, then B is diagonalizable.

Let vi , i be eigenvector/eigenvalue pairs for A. Note {v1 , . . . , vn } is linearly independent. It follows that
A(Bvi ) = B(Avi ) = Bvi . Thus Bvi is an eigenvector of A with eigenvalue i . The eigenspace of i must
be one-dimensional, hence Bvi = ci vi for some ci C. It follows that B is diagonal with respect to the basis
{v1 , . . . , vn }.

Spring 2008 #11. Assume A is an n n complex matrix such that for some positive integer m the power
Am = Im . Prove that A is diagonalizable.

Note that xm 1 has no repeated roots over C. If mxm1 = 0, then x = 0, but this is not a root of
m
x 1. Hence the minimal polynomial of A has no repeated roots. Thus the minimal polynomial factors
into distinct monic linear factors with coefficients in C. Hence A is diagonalizable.

Spring 2008 #12. Let A be an n n real symmetric (ai,j = aj,i ) matrix, and let S be the unit sphere of

91
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Rn . Let x S be such that


(Ax, x) = sup(Ay, y)
S
n
P
where (z, y) = zj yj is the usual inner product on R . (By compactness, such x exists.)
(a) Prove that (x, y) = 0 implies (Ax, y) = 0. Hint: Expand

(A(x + y), x + y).

(b) Use (a) to prove x is an eigenvector for A.


(c) Use induction to prove Rn has an orthonormal basis of eigenvectors for A.
Note: If you use part (c) to prove part (a) or part (b), then your solution should include a proof of part
(c) that does not use part (a) or part (b).

(a) Assume (x, y) = 0. Then

(A(x + y), x + y) = (Ax, x + y) + (y, x + y) = (Ax, x) + (Ax, y) + (y, x) + (y, y)

= (Ax, x) + (Ax, y) + 0 + 2 ||y||2 .


1
By the choice of x, we must have ||x+y||2 (A(x + y), x + y) (Ax, x). Hence

(Ax, x) + (Ax, y) + 2 ||y||2 (||x||2 + 2 ||y||2 )(Ax, x) = (Ax, x) + 2 ||y||2 (Ax, x),

and
(Ax, y) ||y||2 ((Ax, x) 1)
Replacing by in the analysis above,

(Ax, y) ||y||2 ((Ax, x) 1).

Thus taking 0, we find that (Ax, y) = 0.

(b) Form an orthonormal basis x, y2 , . . . , yn for Rn . Write Ax = c1 x + c2 y2 + + cn yn . Then for all


i 2, (x, yi ) = 0, hence by part (a), ci = (Ax, yi ) = 0. Hence Ax = c1 x, so x is an eigenvector of A.

(c) This is the usual argument in the proof of the Spectral Theorem.

Fall 2009 #4. Let V be a finite dimensional R-vector space equipped with an inner product. For a vector
subspace U V , denote by U its orthogonal complement, i.e., the set of v V such that (v|u) = 0 for all
u U . Show that
dim(U ) + dim(U ) = dim(V ).

Let u1 , . . . , uk be an orthonormal basis for U , and extend it to an orthonormal basis u1 , . . . , un for V . It


follows that uk+1 , . . . , un is an orthonormal basis for U , hence we get the desired equality.

Fall 2009 #5. Show that if 1 , . . . , n R are all different, and some a1 , . . . , an R satisfy
X
ai ei t = 0

for all t (1, 1), then necessarily ai = 0 for all 1 i n. (Hint: you may use the differentiation operator
and a theorem in Linear Algebra on distinct eigenvalues.)

Note ei t are eigenvectors of the differentiation operator on smooth functions with support on (1, 1).
Moreover these eigenvectors have distinct eigenvalues, thus they are linearly independent. (Same proof as
in finite dimensional case for infinite dimensions and a finite number of eigenvectors.)

92
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Or, repeatedly differentiate to get the Vandermonde matrix, which has non-zero determinant.

Fall 2009 #7. Let V isomorphic to Rn be an n-dimensional vector space over R and denote by End(V )
the vector space of R-linear transformations of V . (Note that dim(End(V )) = dim(V )2 = n2 .) Then for
T End(V ) show that the dimension of the subspace W of End(V ) spanned by T k for k running through
non-negative integers, satisfies the inequality dim(W ) dim(V ) = n.

Let T End(V ). By the Cayley-Hamilton Theorem, T (T ) = 0, and T has degree n. This implies that
{T 0 , . . . , T n1 } is a basis for W , hence we get the desired inequality.
P n
Fall 2009 #8. For a matrix A Mn (R), define eA := n=0 An! . Let v0 Rn . Prove that the function
v : R Rn given by v(t) = eAt v0 solves the linear differential equation v 0 (t) = Av(t) with the initial
condition v(0) = v0 . Explain precisely which theorems in calculus you are using in your proof and why they
are applicable.

Note that
eA(t+h) eAt eAh 1Fn At
= e .
h h
By definition,

eAh 1Fn X 1 An hn X An hn1 X
= = =A+ An hn1 n!.
h n=1
h n! n=1
n! n=2

Now we estimate

X An hn1 X ||A||n |h|n1 X X 1
|| || = ||A|| ||Ah||n1 n! ||A|| ||Ah||n = ||A||||Ah|| ,
n=2
n! n=2
n! n=2 n=1
1 ||Ah||

which approaches 0 as |h| 0. Thus

eA(t+h) eAt eAh 1Fn


 
lim = lim eAt = AeAt .
|h|0 h |h|0 h

Fall 2009 #11. Fall 2010 #6. (i) State the Cayley-Hamilton Theorem for matrices A Mn (C).
(ii) Prove it directly for diagonalizable matrices.
2
(iii) Identify Mn (C) isomorphic to Cn though some (say, the natural) linear isomorphism. Through this
identification Mn (C) becomes a metric space with the Euclidean metric. Fact: The subset of diagonalizable
2
matrices in Mn (C) (isomorphic to Cn ) is dense. Use this fact, together with part (ii), to prove the Cayley-
Hamilton Theorem.

(i) A (A) = 0.

(ii) A = P 1 DP . A (t) = D (t) = (t 1 ) (t n ). Note

A (A) = D (A) = D (D) = (D 1 ) (D n ) = 0,

since D i has the ith row all 0.

(iii) The isomorphism comes by taking each entry. Take A Mn (C). Let An be a sequence of diagonal-
izable matrices converging to A. It follows that Akn converges to Ak and thus p(An ) converges to p(A) for
any polynomial P . By (ii), An (An ) = 0. We estimate

||A (A)|| ||A (A) A (An )|| + ||A (An ) An (An )|| + ||An (An )||

93
UCLA Basic Exam Problems and Solutions Brent Woodhouse

= ||A (A) A (An )|| + ||A (An ) An (An )||.


The first term is small since A is continuous. The second is small since det is continuous.

Fall 2009 #12. Let V be an n ( 2)-dimensional vector space over C with a set of basis vectors e1 , . . . , en .
Let T be a linear transformation of V satisfying T (e1 ) = e2 , . . . , T (en1 ) = en , T (en ) = e1 .
(i). Show that T has 1 as an eigenvalue and write down an eigenvector with eigenvalue 1. Show that up
to scaling it is unique.
(ii) Is T diagonalizable? (Hint: calculate the characteristic polynomial.)

(i) e1 + + en . Fairly straightforward to show unique up to scaling.

(ii) Yes. The characteristic polynomial is tn 1. Clearly the minimal polynomial is also tn 1. Both of
these have n distinct complex eigenvalues, thus T is diagonalizable over C.

Spring 2009 #2. Compute the norm of the matrix


 
2 1
A= .
0 3
That is, determine the maximum value of the length of Ax over all unit vectors x.
q p p
We wish to maximize f (x, y) = ||A[x, y]t || = (2x + y)2 + ( 3y)2 = 4x2 + 4xy + 4y 2 = 2 x2 + xy + y 2 =

2 1 + xy given x2 + y 2 = 1. Now (x y)2 0, so 2xy x2 + y 2 = 1. Note that x = y = 12 achieves this
maximum for xy, hence the maximum value of the length of Ax is
p
2 3/2 = 6.

Spring 2009 #3. We wish to find a quadratic polynomial P obeying

P (0) = , P 0 (0) = , P (1) = , andP 0 (1) =

where 0 denotes differentiation.


(a) Find a minimal system of linear constraints on (, , , ) such that this is possible.
(b) When the constraints are met, what is P ? Is it unique? Explain your answer.

(a) P (t) = at2 + bt + c. The conditions imply c = , b = . Thus

P (t) = at2 + t + .

From P (1) = ,
= a + + ,
so a = and
P (t) = ( )t2 + t + .
Finally, the condition P 0 (1) = implies

2( ) + = .

Thus we get the constraint


2 + 2 + = 0.

(b) If this constraint is met, P is unique and given by

P (t) = ( )t2 + t + .

94
UCLA Basic Exam Problems and Solutions Brent Woodhouse

The matrix equation for a, b, c in terms of , , involves an invertible matrix, so P is unique.

Spring 2009 #5. Compute eAt when


2 1 5
A= 0 1 3 .
1 0 1
Recall that eAt is defined by the property that a smooth vector function x(t) obeys:

dx
(t) = Ax(t) if and only if x(t) = eAt x(0).
dt

The characteristic polynomial of A is 2 (4 ), so A3 = 4A2 . Thus for all k 2,

Ak = 4k2 A2 .

Then by the series expansion for eAt ,

t 2 A2 t3 A3 t 2 A2 t3 4A2
eAt = In + tA + + + = In + tA + + +
2 3! 2 3!
t 2 A2 A2 4t A2 4t
= In + tA + + (e (1 + 4t + 8t2 )) = In + tA + (e 1 4t).
2 16 16
We can use uniqueness of the solution to the differential equation with initial condition to verify the
series formula for etA .

Spring 2009 #8. (a) Show that


(A|B) = tr(AB t )
defines an inner product on Mnn (R). More precisely, show that it obeys the axioms of an inner product.
(b) Given C Mnn (R), we define a linear transformation

C : Mnn (R) Mnn (R)

by
C (A) = CA AC.
Compute the adjoint of C . Check that when C is symmetric, then C is self-adjoint.
(c) Show that whatever the choice of C, the map C is not onto.

(a) Note
n
X
(A, A) = tr(AAt ) = tr(( aik ajk )ij )
k=1
n
X n
X n
X
= a1k a1k + + ank ank = a2ij 0,
k=1 k=1 i,j=1

with equality if and only if aij = 0 for all i, j, in which case, A = 0.


Clearly (A + B, C) = (A, C) + (B, C) and (kA, B) = k(A, B).
Finally, (A, B) = tr(AB t ) = tr((AB t )t ) = tr(BAt ) = (B, A). Thus (, ) is an inner product.

(b) We need that for all A, B,

tr((CA AC)B t ) = tr(A(C B)t ).

95
UCLA Basic Exam Problems and Solutions Brent Woodhouse

The left hand side is

tr(CAB t ) tr(ACB t ) = tr(AB t C) tr(ACB t ) = tr(A(B t C CB t )) = tr(A(C t B BC t )t ).

Thus the adjoint is


C (B) := C t B BC t = C t (B).
Hence when C is symmetric, C (B) = C t (B) = C (B), so C is self-adjoint.

(c) If C = 0, C = 0, so it is clearly not onto. Otherwise, note C (C) = 0, so the kernel of C is at least
one-dimensional. Hence by the Dimension Theorem, C cannot be onto.

Spring 2009 #9. Let us say that a real symmetric n n matrix A is a reflection if A2 = In and

rank(A In ) = 1.

Given distinct unit vectors x, y Rn show that there is a reflection with Ax = y and Ay = x. Moreover,
show that the reflection A with these properties is unique.

For any orthogonal matrix P , A is a reflection if and only if P AP 1 is a reflection, thus we can shift to
any orthonormal basis to solve the problem.
Let w1 = 21 (x+y) and w2 = 12 (xy), then w1 = w1 /||w1 || and w2 = w2 /||w2 ||. It follows that w1 w2 = 0.
Extend this to some orthonormal basis w1 , . . . , wn . Note x = ||w1 ||w1 + ||w2 ||w2 and y = ||w1 ||w1 ||w2 ||w2 .
With respect to this basis, we take
For uniqueness, note that if Rx = y and Ry = x, then R(x y) = (x y). Also, since rank(R In ) = 1,
ker(R In ) = (x y) . Thus for all v (x y) , Rv = v. Thus Rwi = Awi for all i, so R = A.

Spring 2009 #11. (a) Explain the following (overly informal) statement:
Every matrix can be brought to Jordan normal form; moreover the normal form is essentially unique.
No proofs are required; however, all statements must be clear and precise. All required hypotheses must
be included. The meaning of the phrases brought to, Jordan normal form, and essentially unique must
be defined explicitly.
(b) Define the minimal polynomial of a matrix. How may it be determined for a matrix in Jordan normal
form?

(a) More precisely, every n n with entries in C is similar to an essentially unique matrix in Jordan
normal form.
Two matrices A and B are said to be similar if there exists an invertible matrix P such that B = P AP 1 .
A Jordan block Jm is an m m matrix with diagonal entries , superdiagonal entries 1, and all other
entries 0. A matrix is in Jordan normal form if it has Jordan blocks along the diagonal, and is zero elsewhere.
Given a matrix A, its Jordan normal form is unique up to reordering of the Jordan blocks. In other
words, two matrices in Jordan normal form are similar if and only if they are composed of the same Jordan
blocks.

(b) The minimal polynomial of a matrix A is the unique monic polynomial p of minimal degree that
satisfies p(A) = 0. Clearly the minimal polynomial of a Jordan block Jm is (t )m . For a matrix in Jordan
normal form, this implies the minimal polynomial is
s
Y
p= (t i )Mi ,
i=1

where {i } is the set of distinct eigenvalues of A, and Mi is the maximal size of a i -Jordan block in A.

Fall 2010 #5. Prove or disprove: For any two subsets S and S 0 of a vector space V ,

96
UCLA Basic Exam Problems and Solutions Brent Woodhouse

(a) span(S) span(S 0 ) = span(S S 0 ),


(b) span(S) + span(S 0 ) = span(S S 0 ).

(a) False. Take V = R, S = {0, 1} and S 0 = {0, 2}. Then

span(S) span(S 0 ) = R R = R 6= {0} = span(S S 0 ).

(b) True. For any a span(S) and b span(S 0 ), since span(S S 0 ) is a vector space, a + b span(S S 0 ).
Thus span(S) + span(S 0 ) span(S S 0 ). Conversely, note that span(S) + span(S 0 ) is a vector space which
contains S S 0 , since 0 span(S) and 0 span(S 0 ). Thus as the smallest subspace containing S S 0 ,
span(S S 0 ) span(S) + span(S 0 ).

Fall 2010 #7. Let V and W be inner product spaces over C such that dim(V ) dim(W ) < . Prove
that there is a linear transformation T : V W satisfying

(T (v)|T (v 0 ))W = (v|v 0 )V

for all v, v 0 V .

We want to construct an isometry from V to a subspace of W . Let v1 , . . . , vn be an orthogonal basis for


V and w1 , . . . , wm be a basis for W . By the assumptions, n m. Define T (vi ) = wi for 1 i n. Then for
any v, v 0 V , we can write v = a1 v1 + + an vn and v 0 = b1 v1 + + bn vn so that

(T (v)|T (v 0 ))W = (a1 w1 + + an wn |b1 w1 + + bn wn )W = a1 b1 + + an bn

= (a1 v1 + + an vn |b1 v1 + + bn )V = (v|v 0 )V .

Fall 2010 #8. Let W1 and W2 be subspaces of a finite dimensional inner product space V . Prove that
(W1 W2 ) = (W1 ) + (W2 ) .

Let v (W1 W2 ) . Write v = v1 + v2 , where v1 = projW1 v. Then v1 W1 and v v1 W1 . For any


w W2 ,
(v2 |w) = (v v1 |w).
Write w = w1 + w2 , where w1 W1 and w2 W1 . Then it follows that the right hand side is zero. Hence
v2 W2 , and v (W1 ) + (W2 ) .
Conversely, suppose v1 + v2 (W1 ) + (W2 ) . Then for any w1 W1 , (v1 |w1 ) = 0 and for any w2 W2 ,
(v2 |w2 ) = 0. Thus for any w W1 W2 , w W1 and w W2 , hence

(v1 + v2 |w) = (v1 |w) + (v2 |w) = 0 + 0 = 0.

Hence v1 + v2 (W1 W2 ) .

Fall 2010 #9. Consider the following iterative method

xk+1 = A1 (Bxk + c)

where c is the vector (1, 1)t and A and B are the matrices
   
2 0 2 1
A= ,B = .
0 2 1 2

(a) Assume the iteration converges; to what vector x does the iteration converge?
(b) Does this iteration converge for arbitrary initial vectors x0 ?

97
UCLA Basic Exam Problems and Solutions Brent Woodhouse

(a) We must solve the matrix equation


1 1
   
1 2 2
L= 1 L+ 1 .
2 1 2

Thus L1 = L1 + 21 L2 + 12 , so L2 = 1 and likewise, L1 = 1. Hence it converges to L = (1, 1).

(b) No, note if x0 has positive entries, then xk has positive entries for all k. Thus it cannot converge to
1.

n
Spring 2010 #1.P Let u21 , . . . , un be an orthonormal basis of R and let y1 , . . . , yn be a collection of vectors
n
in R satisfying i ||yi || < 1. Prove that the vectors u1 + y1 , . . . , un + yn are linearly independent.
Pn
Define T (uk ) = yk and extend it by linearity. Let x Cn and write x = j=1 j uj . Then
1/2
n
X n
X Xn Xn Xn
||T x|| = || j yj || |j |||yj || |j |2 ||yj ||2 = ||yj ||2 ||x||.
j=1 j=1 j=1 j=1 j=1

Hence
n
X
||T || ||yj ||2 < 1.
j=1
P
It follows that I T is invertible with inverse i=0 T i . Since the uk form a basis and (I T )(uk ) = uk + yk ,
and invertible linear maps take bases to bases, we have that u1 + y1 , . . . , un + yn form a basis.

Spring 2010 #5. Let A, B be two n n complex matrices which have the same minimal polynomial M (t)
and the same characteristic polynomial P (t) = (t 1 )a1 (t k )ak , where the i are distinct. Prove that
if P (t)/M (t) = (t 1 ) (t k ), then these matrices are similar.

Using Jordan canonical form, there must be a Jordan block Jaii 1 and J1i for each i. Hence the two
matrices have the same Jordan canonical form, up to rearranging the Jordan blocks, hence they are similar.
 
4 4
Spring 2010 #6. Let A = .
1 0
(i) Find Jordan form J of A and a matrix P such that P 1 AP = J.
(ii) Compute A100 and J 100 .
(iii) Find a formula for an , when an+1 = 4an 4an1 and a0 = a, a1 = b.

(i) A has minimal polynomial and characteristic polynomial (t 2)2 . Thus its Jordan form is
 
2 1
J= .
0 2

Since J is the representation of A in the basis consisting of columns of P , we need A(v1 ) = 2v1 and
A(v2 ) = v1 + 2v2 . Picking v2 = (1 0)t , we obtain
 
2 1
P = (v1 v2 ) = .
1 0

2n n2n1
 
(ii) J n = . Higher entries get (nCk)nk .
0 2n

98
UCLA Basic Exam Problems and Solutions Brent Woodhouse

(iii) Straightforward.

Fall 2011 #7. Let f : R Mnn be a continuous function. Show that the function g(t) = rank(f (t)) is
lower semi-continuous, meaning that if a sequence tn converges to t then g(t) lim inf n g(tn ). Is g always
continuous?

Recall that if A Mn (R) \ {0} then rank(A) can be computed to be the largest k such that there exists
a k k submatrix B of A such that det(B) 6= 0.
Let f : R Mn (F) be a continuous function. Therefore, for each 1 k n and selection 1 i1 < i2 <
< ik n and 1 j1 < j2 < < jk n, the function fk,i1 ,...,ik ,j1 ,...,jk : R Mk (R) defined as the
k k submatrix of f (t) with rows i1 , . . . , ik and columns j1 , . . . , jk is a continuous function. Therefore the
composition hk,i1 ,...,ik ,j1 ,...,jk of the determinant with this function is continuous.
Fix t R and let (tn )n1 R be such that limn tn = t. If f (t) = 0 there is nothing to prove.
Otherwise, f (t) 6= 0. Then taking k = rank(f (t)) there exists a selection 1 i1 < i2 < < ik n and
1 j1 < j2 < < jk n such that hk,t1 ,...,tk ,j1 ,...,jk (t) 6= 0. Since h... is continuous, there exists an N N
such that h (tn ) 6= 0 for all n N . Hence

g(tn ) = rank(f (tn )) k = rank(f (t)) = g(t)

for all n N . Hence g(t) lim inf n g(tn ), as desired.


To see that g is not always continuous, take f (t) = tIn . Then for t 6= 0, g(t) = n, but g(0) = 0.

Fall 2011 #8. Assume that a complex matrix A satisfies ker((A I)) = ker((A I)2 ) for all C.
Show from first principles (i.e., without using the theory of canonical forms) that A must be diagonalizable.

Suppose for the sake of contradiction that A (t) has a multiple root . Then

0 = A (A) = (A I)2 (q(A))(v)

for all v. But since ker((A I)2 ) = ker((A I)), this means

(A I)(q(A))(v) = 0

for all v. But this contradicts the definition of the minimal polynomial. Hence the minimal polynomial of A
has no repeated roots. This implies that A is diagonalizable.

Fall 2011 #10. Let A be a 3 3 real matrix with A3 = I. Show that A is similar to a matrix of the form

1 0 0
0 cos() sin()
0 sin() cos()

for some (real) . What values of are possible?

Note eigenvalues satisfy 3 = 1. Thus = 1, e2/3 or e4/3 . And if is an eigenvalue, then so is . Also,
the product of the eigenvalues is det(A) = 1. Hence all the eigenvalues are 1 or 1 = 1, 2 = e2/3 , and
3 = e4/3 . In the first case, the characteristic polynomial is t3 1, so by the Cayley-Hamilton Theorem,
(A I)3 = 0. Using A3 = I, this implies A2 = A, so A = 0 or A = 1. Clearly A = 1, and this is in the
desired form for = 0.
In the second case, defining = 2/3, the second and third eigenvalues are ei and ei . Let v1 , v2 , v3
be eigenvectors of A corresponding to 1 , 2 , 3 . We can write

1 0 0
A = [v1 v2 v3 ] 0 2 0 [v1 v2 v3 ]1 .
0 0 3

99
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Now conjugate the lower two rows by  


1 1 i
U=
2 1 i
and note U V is real.

Fall 2011 #11. Suppose V, W , and U are finite dimensional vector spaces over R and that T : V W
and S : W U are linear operators. Suppose further that T is one-to-one, S is onto, and S T = 0. Prove
that ker(S) image(T ) and that
dim(V ) + dim(W ) dim(U ) = dim(ker(S)/image(T )).

From S T = 0, we obtain ker(S) image(T ). We know


dim(ker(S)/image(T )) = dim(ker(S)) dim(image(T )).
By the dimension theorem, since T is one-to-one
dim(V ) = dim(ker(T )) + dim(image(T )) = dim(image(T )).
By the dimension theorem, since S is onto,
dim(W ) = dim(ker(S)) + dim(image(S)) = dim(ker(S)) + dim(U ).
Putting these together directly yields the desired equation.

Spring 2011 #1. Let A be a 3 by 3 matrix with complex entries. Consider the set of such A that satisfy
tr(A) = 4, tr(A2 ) = 6, and tr(A3 ) = 10. For each similarity (i.e. conjugacy) class of such matrices, give one
member in Jordan normal form. The following identity may be helpful:
If b1 = a1 + a2 + a3 , b2 = a21 + a22 + a23 , and b3 = a31 + a32 + a33 , then
6a1 a2 a3 = b31 + 2b3 3b1 b2 .

We deduce from the algebra that the eigenvalues of A are 1, 1, and 2. Thus the Jordan normal form must
have a 1 1 block with eigenvalue 2 and a 2 2 block with diagonal entries 1 and either a 0 or 1 in the
upper right.

Spring 2011 #3. Show that for any Hermitian (i.e. self-adjoint) operator H on a finite dimensional inner
product space there exists a unitary operator U such that U HU is diagonal. (You may use a basis if you
need to!)

The spectral theorem gives an orthonormal basis such that H is diagonal with respect to that basis; the
change of basis matrix consists of the orthogonal basis vectors and hence is unitary.

Spring 2011 #4. Let A be an n n real matrix. Define an LU decomposition of A. State a necessary and
sufficient condition on A for the existence of such a decomposition. Suppose we normalize the decomposition
by requiring that the diagonal entries of L are 1. Show that in this case, if the LU decomposition exists,
then it is unique. Give the LU decomposition of the matrix
 
4 3
.
6 3

If the principal minors are invertible, the LU decomposition exists. Suppose A = L1 U1 = L2 U2 . Then
L1 1
2 L1 = U2 U1 , where the left hand side is unit lower triangular and the right hand side is upper triangular.
It follows that both sides are the identity, so L1 = L2 and U1 = U2 .

100
UCLA Basic Exam Problems and Solutions Brent Woodhouse

    
4 3 1 0 4 3
= 3 .
6 3 2 1 0 32

Fall 2012 #7. Let A be an invertible n n matrix with entries in C. Suppose that the set of powers An
of A, for n Z, is bounded. Show that A is diagonalizable.

Since Ak = k v for an eigenvector v, it is clear that if Ak is bounded for all integers k, then || = 1.
Suppose for the sake of contradiction that A is not diagonalizable. Then A has a Jordan normal form with a
block of size at least 2, so there exist unit vectors v1 , v2 such that Av1 = v1 and Av2 = v1 + v2 . It follows
that Ak v2 = kk1 v1 + k v2 , so ||Ak || |Ak v2 | k 1. Since k was arbitrary, the set of powers of A is not
bounded, a contradiction.

Fall 2012 #8. Let H be an n n Hermitian matrix with non-zero determinant. Use H to define an
Hermitian form [, ] by the formula: for x, y Cn (column vectors!), [x, y] = xt Hy. Let W be a complex
subspace of Cn such that [w1 , w2 ] = 0 for all w1 , w2 W . Show that dim(W ) n/2. Give also for each n
an example of an H for which dim(W ) = n/2 if n is even or dim(W ) = (n 1)/2 if n is odd.

Select an orthonormal basis e1 , . . . , en so that H is diagonal with respect to this basis. Then the diagonal
entries must be non-zero real numbers.
With respect to the orthonormal basis,

[w, w] = 1 |w1 |2 + 2 |w2 |2 + + n |wn |2 .

Pair up positive diagonal entries with negative to get the example H and W . We can always form such a
basis, and then nothing can be added to it.

Fall 2012 #10. Let A be a linear operator on a four dimensional complex vector space that satisfies the
polynomial equation P (A) = A4 + 2A3 2A I = 0. Let B = A + I, and suppose dim(range(B)) = 2.
Finally, suppose that |tr(A)| = 2. Give a Jordan canonical form of A.

Factor P (A) = (A I)(A + I)3 and note this must be the characteristic polynomial of A. Thus the
eigenvalues of A are 1 and 1. Since dim(range(B)) = 2, the dimension theorem implies dim(ker(A
(1)I)) = dim(ker(B)) = 2, thus the geometric multiplicity of 1 is 2. From the trace condition, 1 must
have multiplicity 3 and 1 must have multiplicity 1. Hence the Jordan form has a 2 2 block with 1 on the
diagonal, and 1 on the superdiagonal, as well as a 1 1 block with 1, and a 1 1 block with 1.

Spring 2012 #7. Let F be the finite field of p elements, let V be an n-dimensional vector space over F
and let 0 k n. Compute the number of invertible linear maps V V . It is acceptable if your solution
is a lengthy algebraic expression, as long as you explain why it is correct.

The first column can be anything but the zero vector, the second can be anything but multiples of the
first column, and generally the kth column can be any vector not in the span of the first k 1 columns. This
gives
(pn 1)(pn p) (pn pn1 )
invertible linear maps. Since the first k 1 columns are linearly independent, we are not overcounting the
number of possibilities for the kth column.

Spring 2012 #8. Let A be an n n complex matrix. Prove that there are two sequences of matrices
{Bi } and {Li }, such that Li are diagonal with distinct eigenvalues, and Bi Li Bi1 A as i . Here by
convergence of matrices we mean convergence in all entries.

101
UCLA Basic Exam Problems and Solutions Brent Woodhouse

First, write A as V U V 1 , where U is upper triangular using Jordan canonical form. Then by changing
the diagonal entries of U by less than 1/i, we obtain a matrix Ti with distinct entries along the diagonal.
Then Ti has distinct eigenvalues, so there exists some Ci with Ti = Ci Li Ci1 , where Li is diagonal. It follows
that
(V Ci )Li (V Ci )1 V U V 1 = A
as i .

Spring 2012 #9. Let a1 = 1, a2 = 4, an+2 = 4an+1 3an for all n 1. Find a 2 2 matrix A such that
   
n 1 an+1
A =
0 an

for all n 1. Compute the eigenvalues of A and use them to determine the limit

lim (an )1/n .


n

 
4 3
A= .
1 0
The eigenvalues of A are 1 and 3. Thus we can diagonalize A and limn (an )1/n = 3 since it picks out the
largest eigenvalue.

Spring 2012 #10. Let A be a complex n n matrix. State and prove under which conditions on A the
following identity holds:
det(eA ) = exp(tr(A)).
Here the matrix exponentiation is defined via the Taylor series. You can assume known that this sum
converges (entrywise) for all complex matrices A.

This holds for any matrix A. Using Jordan normal form, there exists an invertible matrix V such that
1
U = V AV 1 is upper triangular. It is straightforward to show eV AV = V eA V 1 using the definitions.
Then 1
det(eA ) = det(eV U V ) = det(V eU V 1 ) = det(eT )
and 1 1
etr(A) = etr(V U V )
= etr(U V V )
= etr(T ) .
Thus it suffices to show that det(eT ) = etr(T ) for upper triangular T . But this follows immediately since the
determinant is the product of the diagonal entries of eT , which are ei .

Spring 2012 #11. (a) Find a polynomial P (x) of degree 2, such that P (A) = 0, for
 
1 3
A= .
4 2

(b) Prove that such P (x) is unique, up to multiplication by a constant.

(i) P (x) = x2 3x 10.

(ii) Suppose there exist two monic quadratic polynomials P and Q with P (A) = 0 = Q(A). Then
(P Q)(A) = 0, and P Q is either 0 or a first degree polynomial. But clearly no first degree polynomial
evaluated at A is 0, hence P Q = 0, so P = Q.

102
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Spring 2012 #12. Recall that the quadratic forms Q1 (x, y) and Q2 (x0 , y 0 ) are said to be equivalent if
they are related by a non-singular change of coordinates (x, y) 7 (x0 , y 0 ). Decide whether Q1 = xy and
Q2 = x2 + y 2 are equivalent over C and whether they are equivalent over R. If not, give a proof. If yes, find
the matrix for change of coordinates.

Send x to x0 = x + iy and y to y 0 = x iy. Then x0 y 0 = x2 + y 2 . This is a non-singular change


of coordinates, so the quadratic forms are equivalent over C. They are not equivalent over R. Suppose
a, b, c, d R with (ax + by)(cx + dy) = acx2 + bdy 2 + (ad + bc)xy. If this equals x2 + y 2 , then ad + bc = 0, and
ac = bd = 1. Thus a and c are either both positive or both negative, and b and d are either both positive
or both negative. But this implies ad and bc are both positive or both negative, so they cannot sum to 0, a
contradiction.

Spring 2013 #6. (a) Prove that diagonalizable matrices are dense in the set of all n n matrices with
complex entries.
(b) Are diagonalizable matrices with real entries dense in the set of all n n matrices with real entries?

(a) See Spring 2012 #8.


 
0 1
(b) No. For example, , which has eigenvalues i cannot be approximated by diagonalizable
1 0
matrices. The roots of a polynomial are continuous functions of its coefficients, and the coefficients of the
characteristic polynomial are continuous functions of the entries of the matrix, thus the eigenvalues of a
matrix are continuous functions of its entries, so any far-enough-along approximating matrices cannot be
diagonalizable in the reals, because the diagonal matrix would need to have complex entries.

Spring 2013 #7. (a) Show that the series exp(A) = I + A + A2 /2! + converges to a limit in the usual
sense of convergence of matrices (converge entry by entry).
(b) Show that the series ln(I + A) = A A2 /2 + A3 /3 + + (1)n+1 An /n + converges if the operator
norm of A is less than one.
(c) Show that exp(ln(I + A)) = I + A if the operator norm of A is less than 1.
P 1
(a) Since ||AB|| ||A||||B||, we have ||Ak || ||A||k . By the ratio test, k=0 k! ||A||k converges, and the
A A
partial sums of e have norm less than the partial sums of this series, so e converges. This relies on the
completeness of the normed space of matrices.

(b) Note
X (1)k+1 X1 X
|| Ak || ||Ak || ||A||k < ,
k k
k1 k1 k1

where the final series is geometric since ||A|| < 1. Technically we should estimate the partial sums, then
conclude that the series converges.

(c) Too long to work out - not worth it.

Spring 2013 #8. Let T be a linear transformation from a finite-dimensional vector space V with an inner
product to a finite dimensional vector space W also with an inner product (the dimension of W can be
different from the dimension of V here).
(a) Define the adjoint T : W V .
(b) Show that if matrices are written relative to orthonormal bases of V and W , then the matrix of T
is the transpose of the matrix of T .

103
UCLA Basic Exam Problems and Solutions Brent Woodhouse

(a) Petersen takes a basis ei for V and sets


n
X
T y = (y|T (ej ))W ej .
i=1

Then it follows that


(Lx|y) = (x|L y).
Uniqueness: suppose (x|K1 y) = (x|K2 y) for all x, y. Then

0 = (x|K1 y K2 y).

Then taking x = K1 y K2 y, we get K1 y = K2 y.


(b) Let e1 , . . . , en be an orthonormal basis of V and f1 , .P
. . , fm be an orthonormal basis of W . Suppose
m
T with respect to these bases has entries aij . Thus T (ei ) = i=1 aij fi . It follows that
n
X n
X
T fj = (fi |T (ei ))ei = aij ei .
i=1 i=1

Hence the matrix representation of T with respect to these bases has entries aji , so the matrix of T is the
transpose of the matrix of T .

Spring 2013 #10. Denote by G the set of real 4 4 upper triangular matrices with 1s on the diagonal.
Fix
1 1 1 1
0 1 1 1
M = 0 0
.
1 1
0 0 0 1
Denote by C the set of matrices in G commuting with M .
(a) Prove that C is an affine subspace in the space R16 of all 4 4 real matrices. S is an affine subspace
of a vector space V if there is a vector w V such that S 0 = {v w : v S} is a subspace of V . The
dimension of S is defined to be the dimension of S 0 .
(b) Find the dimension of C.

By brute force, we check that matrices in C take the form



1 a b c
0 1 a b
,
0 0 1 a
0 0 0 1

where a, b, c are arbitrary. Taking w = I4 , C w is a subspace of R16 , so C is an affine subspace of R16 .

(b) Since a, b, c are arbitrary, C w has dimension 3, so C is an affine subspace of dimension 3.

Fall 2011 #9. Let V be a finite dimensional inner product space, and let L : V V be a self-adjoint linear
operator. Let and be given. Suppose there is a unit vector x V such that

||L(x) x|| .

Prove that L has an eigenvalue so that | | .

By the spectral theorem, there exists an orthonormal basis (ei ) consisting of eigenvectors of L with
eigenvalues i . Write
Xn
x= (x ei )ei .
i=1

104
UCLA Basic Exam Problems and Solutions Brent Woodhouse

Then X X
L(x) x = ((x ei )i (x ei ))ei = (i )(x ei )ei .
Now X X
|i ||x ei |2 = ||(i )(x ei )||2 = ||L(x) x||2 2 .

|x ei |2 = 1, hence there must exist i with


P
Since |x| = 1,

|i |2 2 .

Thus |i | .

Spring 2010 #2. Let A be an n n real symmetric matrix and let 1 . . . n be the eigenvalues of A.
Prove that
k = max min (Ax|x),
U :dim(U )=k xU :||x||=1

where (, ) denotes the usual scalar product in Rn and the maximum is taken over all k-dimensional subspaces
of Rn .

As in Spring 2008 #12, we can show minxU :||x||=1 (Ax|x) is an eigenvalue for any subspace U . In fact it
is the least eigenvalue with an eigenvector in U . Now any k-dimensional subspace contains eigenvectors for
k eigenvalues i . Hence the max over all U is k .
P
Spring 2010 #4. (i) Let A = (ai,j ) be an n n real symmetric matrix such that i,j ai,j xi xj 0 for
every vector (x1 , . . . , xn ) Rn . Prove that if tr(A) = 0, then A = 0.
(ii) Let T be a linear transformation in the complex finite dimensional vector space V with a positive
definite Hermitian inner product. Suppose that T T = 4T 3I, where I is the identity transformation.
Prove that T is positive definite Hermitian and find all possible eigenvalues of T .

(i) Suppose tr(A) = 0. Using x = ei , we see aii 0. Thus the diagonal entries are 0. Then using ei + ej
and ei ej for i 6= j, as well as the symmetry of A, we see aij 0 and aij 0, so A = 0.
(ii) ?

Spring 2011 #5. Let A be an n by n matrix with real entries, and let b be an n by 1 column vector with
real entries. Prove that there exists an n by 1 column vector solution x to the equation Ax = b if and only
if b is in the orthocomplement of the kernel of the transpose of A.

There exists x such that Ax = b holds if and only if b is in the column space of A, which holds if and
only if b is in the row space of AT . Now b is in the row space of AT if and only if every element in the kernel
of AT is orthogonal to b, which holds if and only if b is in the orthocomplement of the kernel of AT .

Winter 2006 #9. Let A M3 (R) be invertible and satisfy A = At and det A = 1. Prove that A has one
as an eigenvalue.

105

You might also like