
Some selected proof-based homework problems

Suppose A is an n × n matrix with only one eigenvalue and n linearly independent eigenvectors. Show that every vector in R^n is an eigenvector of A.

Solution: Say the only eigenvalue of A is λ, and consider the eigenspace for λ. We know that the eigenspace is a subspace of R^n. Because λ is the only eigenvalue, every eigenvector of A has eigenvalue λ. So since A has n linearly independent eigenvectors, the λ-eigenspace contains n linearly independent vectors. But the only subspace of R^n which can contain n linearly independent vectors is R^n itself (as every set of n linearly independent vectors spans R^n). Therefore the λ-eigenspace is all of R^n. But this means every vector in R^n is an eigenvector of A with eigenvalue λ.

Note: In fact, the argument tells us that A acts by multiplying every vector by λ. But this means that A = λI (where I is the identity matrix).

Suppose that A is an invertible matrix. Show that 0 is not an eigenvalue of A.

Solution 1: By definition, 0 is an eigenvalue precisely when there is a nonzero vector v such that Av = 0 (since 0v = 0). But if A is invertible, then Av = 0 only has the trivial solution v = 0. Reason: we have v = (A^(-1)A)v = A^(-1)(Av) = A^(-1)0 = 0. So, if A is invertible, then 0 is not an eigenvalue.

Solution 2: A is invertible precisely when det(A) ≠ 0. By definition, the characteristic polynomial is p(t) = det(A - tI). In particular, setting t = 0 shows that det(A) = p(0). Since they are equal, we see that det(A) ≠ 0 if and only if p(0) ≠ 0. But the roots of the characteristic polynomial are precisely the eigenvalues. So p(0) ≠ 0 is exactly the same as saying that 0 is not a root of p(t), and hence that 0 is not an eigenvalue.

Note: Both solutions work both ways, and show that if A is not invertible, then 0 is an eigenvalue.

Prove that if A and B are n × n matrices and there exists an invertible n × n matrix P with B = P^(-1)AP, then det(A) = det(B). Also show that if either A or B is invertible, then the other is invertible, and B^(-1) = P^(-1)A^(-1)P.

Solution:
Taking the determinant of both sides of B = P^(-1)AP yields det(B) = det(P^(-1)) det(A) det(P). Since determinants are scalars we can rearrange them to get det(B) = det(A) det(P^(-1)) det(P). Using multiplicativity of determinants again, we have det(B) = det(A) det(P^(-1)P) = det(A) det(I). Since the determinant of the identity matrix is 1, we obtain det(B) = det(A), as desired.

The equality of the determinants means that det(B) = 0 precisely when det(A) = 0. But since a determinant is nonzero exactly when that matrix is invertible, we see that A is invertible if and only if B is invertible. Thus, if either is invertible, the other is as well. Finally, if both are invertible, then we can write B^(-1) = (P^(-1)AP)^(-1) = P^(-1)A^(-1)(P^(-1))^(-1), since the inverse of a product is the product of the inverses in the opposite order. Since (P^(-1))^(-1) = P, we obtain B^(-1) = P^(-1)A^(-1)P, as desired.
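Both identities in this solution are easy to spot-check numerically. Here is a small NumPy sketch (the matrix size, random seed, and the particular random A and P are arbitrary choices for illustration, not part of the problem) that builds B = P^(-1)AP and compares the determinants and the inverses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# An arbitrary A and a random P; a random matrix is invertible with
# probability 1, but we check its determinant to be safe.
A = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))
assert abs(np.linalg.det(P)) > 1e-10

# B = P^(-1) A P is similar to A.
B = np.linalg.inv(P) @ A @ P

# Similar matrices have equal determinants (up to floating-point error).
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True

# And B^(-1) = P^(-1) A^(-1) P, matching the inverse computed directly.
B_inv = np.linalg.inv(P) @ np.linalg.inv(A) @ P
print(np.allclose(B_inv, np.linalg.inv(B)))  # True
```

Of course, a numerical check on one random example is not a proof; the algebra above is what establishes the identities in general.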

Prove that if λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n for every n > 1.

Solution 1: Saying λ is an eigenvalue of A means that there is a nonzero vector v such that Av = λv. If we multiply both sides by A then we have A(Av) = A(λv). Since λ is a scalar, A(λv) = λ(Av) = λ(λv) = λ^2 v, where in the middle we used the fact that Av = λv. Therefore we have A^2 v = λ^2 v. If we multiply both sides by A again, then we can do the same rearrangement to see that A^3 v = λ^3 v. By continuing this process, we will eventually end up with A^n v = λ^n v, for each n > 1. But this says precisely that v is an eigenvector of A^n with eigenvalue λ^n. In particular, λ^n is an eigenvalue of A^n.

Note: To be fully rigorous, one should set this up as an induction on n. The base case n = 1 is immediate, since A^1 v = λ^1 v. The inductive step is: given A^(n-1) v = λ^(n-1) v, multiply both sides by A to get A^n v = A(λ^(n-1) v) = λ^(n-1)(Av) = λ^(n-1)(λv) = λ^n v.

Solution 2: Saying λ is an eigenvalue of A means that det(A - λI) = 0. We want to see that det(A^n - λ^n I) = 0, since this is the same as saying that λ^n is an eigenvalue of A^n. We can factor A^n - λ^n I as (A - λI)(A^(n-1) + A^(n-2)(λI) + A^(n-3)(λI)^2 + ... + A(λI)^(n-2) + (λI)^(n-1)). If we write B = A^(n-1) + A^(n-2)(λI) + A^(n-3)(λI)^2 + ... + A(λI)^(n-2) + (λI)^(n-1), then A^n - λ^n I = (A - λI)B. Taking determinants says det(A^n - λ^n I) = det(A - λI) det(B). But since det(A - λI) = 0, we see that det(A^n - λ^n I) = 0 as well. Therefore, since det(A^n - λ^n I) = 0, we see that λ^n is an eigenvalue of A^n.

Suppose U and W are subspaces of a vector space V. Let S be the set of all vectors in V of the form u + w, where u is a vector in U and w is a vector in W. Prove that S is also a subspace of V.

Solution: We need to show three things: that S contains the zero vector, that S is closed under addition, and that S is closed under scalar multiplication.

[S1]: S contains 0. Because U and W are subspaces, we know that each of them contains the zero vector 0. Therefore, S contains 0 + 0 = 0, so S contains the zero vector.
[S2]: S is closed under addition. Suppose v1 and v2 are vectors in S. We want to show that v1 + v2 is also in S. By definition of S, we can write v1 = u1 + w1 and v2 = u2 + w2, where u1 and u2 are in U and w1 and w2 are in W. Then v1 + v2 = u1 + w1 + u2 + w2. We can rearrange this to read v1 + v2 = (u1 + u2) + (w1 + w2). But now since U is a subspace, u1 + u2 is also in U. Similarly, w1 + w2 is in W. So we have written v1 + v2 as the sum of a vector in U and a vector in W. Therefore, v1 + v2 is in S, by the definition of S.

[S3]: S is closed under scalar multiplication.
Suppose v is a vector in S, and α is a scalar. We want to show that αv is also in S. By definition of S, we can write v = u + w, where u is in U and w is in W. Then by the distributive law we have αv = α(u + w) = (αu) + (αw). But now since U is a subspace, αu is also in U. Similarly, αw is in W. So we have written αv as the sum of a vector in U and a vector in W. Therefore, αv is in S, by the definition of S.
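The closure properties [S2] and [S3] can be illustrated numerically. In the sketch below (the ambient dimension, the spanning vectors, and all coefficients are arbitrary choices for the example), U and W are column spans in R^5, S = U + W is spanned by both sets of columns together, and membership in S is tested by checking that appending a vector does not raise the rank of a spanning matrix for S:

```python
import numpy as np

rng = np.random.default_rng(1)

# U and W are the column spans of these illustrative matrices in R^5.
U_basis = rng.standard_normal((5, 2))
W_basis = rng.standard_normal((5, 2))
S_basis = np.hstack([U_basis, W_basis])  # S = U + W is spanned by both bases

def in_S(x):
    """x lies in S exactly when appending x to S's spanning set keeps the rank the same."""
    r = np.linalg.matrix_rank(S_basis)
    return np.linalg.matrix_rank(np.column_stack([S_basis, x])) == r

# Two elements of S: each is (a vector in U) + (a vector in W).
v1 = U_basis @ np.array([1.0, -2.0]) + W_basis @ np.array([0.5, 3.0])
v2 = U_basis @ np.array([2.0, 1.0]) + W_basis @ np.array([-1.0, 0.0])

print(in_S(v1 + v2))   # True: closed under addition
print(in_S(7.5 * v1))  # True: closed under scalar multiplication
```

The rank test is just a computational stand-in for the membership argument in the proof; the proof itself is what shows closure holds for every choice of vectors and scalars.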

If v1, ..., vk, vk+1 span a vector space V, and vk+1 is a linear combination of v1, ..., vk, show that v1, ..., vk span V.

Solution: The statement that v1, ..., vk, vk+1 span V says that any vector w in V can be written as a linear combination of v1, ..., vk, vk+1: say w = a1 v1 + a2 v2 + ... + ak vk + ak+1 vk+1. We are also told that vk+1 is a linear combination of v1, ..., vk: say as vk+1 = b1 v1 + b2 v2 + ... + bk vk. Now we can just substitute this expression for vk+1 into the expression for w: this gives w = a1 v1 + a2 v2 + ... + ak vk + ak+1 (b1 v1 + b2 v2 + ... + bk vk). If we expand out the product and collect terms, we obtain the equivalent expression w = (a1 + ak+1 b1) v1 + (a2 + ak+1 b2) v2 + ... + (ak + ak+1 bk) vk. This expresses w as a linear combination of v1, ..., vk. Since w was arbitrary, this says every vector in V is a linear combination of v1, ..., vk, which is to say, v1, ..., vk span V.
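The same fact can be spot-checked with ranks: dropping a redundant spanning vector does not shrink the span. In this sketch (the dimension, the random vectors, and the coefficients b are arbitrary choices for the example), vk+1 is built deliberately as a linear combination of v1, ..., vk, and the column spaces with and without it have the same dimension:

```python
import numpy as np

rng = np.random.default_rng(2)

# v1, ..., vk are random vectors in R^4; vk+1 is deliberately a linear
# combination of them, with illustrative coefficients b = (2, -1, 3).
k = 3
V = rng.standard_normal((4, k))   # columns are v1, ..., vk
b = np.array([2.0, -1.0, 3.0])
vk1 = V @ b                       # vk+1 = b1 v1 + b2 v2 + b3 v3

with_vk1 = np.column_stack([V, vk1])

# The span is unchanged: both matrices have the same column-space dimension.
print(np.linalg.matrix_rank(V) == np.linalg.matrix_rank(with_vk1))  # True
```

Equal rank says the two spanning sets span the same subspace here, which is exactly what the substitution argument proves in general.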

Well, you're at the end of my handout. Hope it was helpful. Copyright notice: This material is copyright Evan Dummit, 2012. You may not reproduce or distribute this material without my express permission.
