Vikas Bist
Department of Mathematics
Panjab University, Chandigarh-160014
email: bistvikas@gmail.com
Last revised on March 5, 2018
This text is based on the lectures delivered to the B.Tech students of IIT Bhilai.
These lecture notes cover the basics of Linear Algebra and can be treated as a first
course in Linear Algebra. Students are encouraged to go through the examples
carefully and attempt the exercises appearing in the text.
CONTENTS
1 Similarity of matrices 1
5 Unitary similarity 14
6 Bilinear forms 16
1 SIMILARITY OF MATRICES
Also

[(S ◦ T )vj ]B′′ = B′′[S ◦ T ]B [vj ]B .

Hence, on equating these last two equations, we have the following:
REMARK 1.1. Note that B′[I]B is the matrix whose j-th column consists of the
scalars in K obtained when the j-th element of B′ is expressed as a linear
combination of the elements of B.
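As an illustration (not part of the original notes), the change-of-basis matrix can be computed numerically by solving a linear system for each basis element; the two bases of R² below are hypothetical, and numpy is assumed to be available:

```python
import numpy as np

# Columns of B_mat are the elements of a basis B of R^2,
# columns of Bp_mat are the elements of a second basis B'.
B_mat = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
Bp_mat = np.array([[2.0, 0.0],
                   [1.0, 1.0]])

# The j-th column of M holds the scalars expressing the j-th
# element of B' as a linear combination of the elements of B.
M = np.linalg.solve(B_mat, Bp_mat)
```

Each column of M can be checked directly: multiplying B_mat by M reproduces Bp_mat.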
PROPOSITION 1.3. Similar matrices have the same trace and the same
determinant.
¹ Recall that a linear operator on V is a linear transformation from V to V.
2 EIGENVALUES AND EIGENVECTORS
Thus we can define the determinant and the trace of a linear operator T on V
by:
detT := det[T ]B and trT := tr[T ]B ,
where B is any ordered basis of V.
(A − λIn )x = 0
has a non-zero solution. This happens if and only if rank(A − λIn ) < n, that is,
A − λIn is not invertible which is equivalent to det(A − λIn ) = 0. This means
that eigenvalues of A are precisely those values λ for which the matrix (A − λIn )
has the zero determinant. In other words eigenvalues are the roots of the following
monic polynomial:
cA (x) := det(xIn − A).
This polynomial is called the characteristic polynomial of A.
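For a quick numerical check of this definition, numpy can compute the coefficients of cA(x) and its roots; the 2 × 2 matrix here is only an illustration, not part of the notes:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of det(xI - A), highest power first:
# here x^2 - 4x + 3 = (x - 1)(x - 3).
coeffs = np.poly(A)

# The eigenvalues are precisely the roots of the characteristic polynomial.
eigs = np.roots(coeffs)
```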
Note that eigenvectors corresponding to an eigenvalue are not unique. In fact,
if u is an eigenvector corresponding to λ, then αu is also an eigenvector
corresponding to λ for any non-zero α ∈ K.
EXAMPLE 2.3. Find the eigenvalues and the corresponding eigenvectors for the matrix

A = [3 1 1]
    [1 3 1]
    [1 1 3] .

The characteristic equation of A is:

det(xI3 − A) = det [x − 3    −1     −1 ]
                   [ −1    x − 3    −1 ]   = (x − 5)(x − 2)² .
                   [ −1     −1    x − 3]
For the eigenvalue 5, row reducing A − 5I3 gives a row echelon form of rank 2,
and so the nullity is 1. Thus there is one fundamental solution, given by a
solution of

x1 + x2 − 2x3 = 0,   x1 − x3 = 0.
Hence the solution is x1 = x2 = x3, that is, u(1, 1, 1)^t, where u ∈ K. Thus we
can take the eigenvector corresponding to 5 as (1, 1, 1)^t.
An eigenvector corresponding to 2 is a solution of the system (A − 2I3)x = 0. Now

A − 2I3 = [1 1 1]
          [1 1 1]
          [1 1 1]

and the row echelon form of this matrix is

[1 1 1]
[0 0 0]
[0 0 0] ,

a matrix of rank 1 and so of nullity 2. This means that there are 2 fundamental
solutions, that is, 2 linearly independent eigenvectors corresponding to the
eigenvalue 2. Thus an eigenvector corresponding to 2 satisfies the equation
x1 + x2 + x3 = 0, or x3 = −x1 − x2. Thus any solution of the system is

(x1, x2, −x1 − x2)^t = x1 (1, 0, −1)^t + x2 (0, 1, −1)^t .

The two linearly independent eigenvectors can be taken as (1, 0, −1)^t and
(0, 1, −1)^t.
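The eigenvalues and eigenvectors found in this example are easy to verify numerically; the following is a sketch assuming numpy is available:

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 3.0]])

# Eigenvector (1, 1, 1)^t for the eigenvalue 5:
v5 = np.array([1.0, 1.0, 1.0])

# Eigenvectors (1, 0, -1)^t and (0, 1, -1)^t for the eigenvalue 2:
v2a = np.array([1.0, 0.0, -1.0])
v2b = np.array([0.0, 1.0, -1.0])

# Each satisfies A v = lambda v.
```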
EXAMPLE 2.4. Consider the matrix

A = [2 1 −1]
    [0 2  1]
    [0 1  2] .

The characteristic polynomial is (x − 1)(x − 2)(x − 3), so the eigenvalues are 1, 2, 3.

Eigenvector corresponding to 1: obtained by solving the system (A − I3)x = 0, that is,

[1 1 −1] [x1]   [0]
[0 1  1] [x2] = [0] .
[0 1  1] [x3]   [0]

Thus x1 + x2 − x3 = 0 and x2 + x3 = 0, so x2 = −x3 and x1 = 2x3. An eigenvector
corresponding to 1 is (2, −1, 1)^t.

Eigenvector corresponding to 2: obtained by solving the system (A − 2I3)x = 0, that is,

[0 1 −1] [x1]   [0]
[0 0  1] [x2] = [0] .
[0 1  0] [x3]   [0]

Thus we have x2 = x3 = 0. Hence an eigenvector corresponding to 2 is (1, 0, 0)^t.

Eigenvector corresponding to 3: obtained by solving the system (A − 3I3)x = 0, that is,

[−1  1 −1] [x1]   [0]
[ 0 −1  1] [x2] = [0] .
[ 0  1 −1] [x3]   [0]

Thus we have x2 = x3 and x1 = 0. Hence an eigenvector corresponding to 3 is
(0, 1, 1)^t. •
The reader can verify that the eigenvectors in Example 2.4 are actually linearly
independent.
Let A ∈ K^{n×n} and let λ be an eigenvalue of A. Assume that u1, . . . , uk ∈ K^n are
linearly independent eigenvectors corresponding to λ. Extend {u1, . . . , uk} to
a basis of K^n: {u1, u2, . . . , uk, uk+1, . . . , un}. Let P ∈ K^{n×n} be the matrix
with j-th column uj. Then P is invertible. Consider the matrix P⁻¹AP. For
j = 1, . . . , k, the j-th column is

(P⁻¹AP)ej = (P⁻¹A)P ej = P⁻¹Auj = P⁻¹(λuj) = λP⁻¹uj = λP⁻¹P ej = λej .

Thus

P⁻¹AP = [λIk  X]
        [ 0   Y] .

It follows that if K^n has a basis consisting of eigenvectors of A, then P⁻¹AP is
a diagonal matrix.
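To see this concretely, the eigenvectors of the matrix from Example 2.4 can be assembled into P; the check below is a sketch assuming numpy is available:

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [0.0, 2.0,  1.0],
              [0.0, 1.0,  2.0]])

# Columns: the eigenvectors for eigenvalues 1, 2, 3 found in Example 2.4.
P = np.array([[ 2.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [ 1.0, 0.0, 1.0]])

# P^{-1} A P is diagonal with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P
```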
A matrix A that is similar to a diagonal matrix is called a diagonalizable matrix.
a0 In + a1 A + · · · + a_{n−1} A^{n−1} + A^n = 0.

Hence cA(A) = 0. This proves the following important result, called the
Cayley-Hamilton Theorem.
The Cayley Hamilton Theorem can be used for finding the inverse of a matrix.
The following example illustrates this fact.
EXAMPLE 2.9. Let

A = [ 4  2  2]
    [ 3  3  2]
    [−3 −1  0] .

Then cA(x) = x³ − 7x² + 14x − 8. Since the constant term of cA(x) is non-zero,
A is invertible. By the Cayley-Hamilton Theorem, A³ − 7A² + 14A − 8I3 = 0. Now
multiplying by A⁻¹, we have A² − 7A + 14I3 − 8A⁻¹ = 0. Hence

A⁻¹ = (1/8)(A² − 7A + 14I3).
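The computation in Example 2.9 can be checked numerically; this is a sketch assuming numpy is available:

```python
import numpy as np

A = np.array([[ 4.0,  2.0, 2.0],
              [ 3.0,  3.0, 2.0],
              [-3.0, -1.0, 0.0]])

# A^{-1} = (1/8)(A^2 - 7A + 14 I_3), via the Cayley-Hamilton Theorem.
A_inv = (A @ A - 7 * A + 14 * np.eye(3)) / 8
```

The result agrees with the inverse computed directly by numpy.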
3 INNER PRODUCT AND ORTHOGONALITY
REMARK 3.5. We normally deal here with the standard inner product space
R^n. Whenever we say the inner product space F^n without mentioning the inner
product, we mean the standard inner product space.

A vector x such that ‖x‖ = 1 is called a unit vector. Vectors x, y in an inner
product space are called orthogonal if (x, y) = 0.
EXERCISE 3.6. In R³ find all vectors orthogonal to (1, 1, 1)^t.
Our next result shows that every finite dimensional inner product space has an
orthonormal basis.
Then (y_{l+1}, ui) = 0 for all i = 1, . . . , l. Now let u_{l+1} = (1/‖y_{l+1}‖) y_{l+1}.
Then clearly {u1, . . . , u_{l+1}} is an orthonormal set.

From the above equation it follows that x_{l+1} ∈ ⟨y_{l+1}, u1, . . . , ul⟩ =
⟨u_{l+1}, u1, . . . , ul⟩, and so ⟨x1, . . . , x_{l+1}⟩ ⊆ ⟨u1, . . . , u_{l+1}⟩.

Also from the above equation, it follows that

u_{l+1} = (1/‖y_{l+1}‖) ( x_{l+1} − ∑_{i=1}^{l} (x_{l+1}, ui) ui ) .
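The step above translates directly into code; this sketch (assuming numpy, with a hypothetical helper name `gram_schmidt`) orthonormalizes a list of linearly independent vectors:

```python
import numpy as np

def gram_schmidt(xs):
    """Return an orthonormal list spanning the same subspace as xs."""
    us = []
    for x in xs:
        # y = x minus its components along the previously built u_i.
        y = x - sum(np.dot(x, u) * u for u in us)
        us.append(y / np.linalg.norm(y))
    return us

# The basis of Example 3.10 below serves as a test input.
basis = [np.array([0.0, 1.0, 1.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([1.0, 1.0, 0.0])]
U = np.column_stack(gram_schmidt(basis))
# U has orthonormal columns, so U^t U = I.
```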
EXAMPLE 3.10. Find an orthonormal basis for R³ from the given basis
(0, 1, 1)^t, (1, 0, 1)^t, (1, 1, 0)^t.

Let x1 = (0, 1, 1)^t, x2 = (1, 0, 1)^t, x3 = (1, 1, 0)^t. Then

u1 = (1/√2)(0, 1, 1)^t .

y2 = x2 − (x2, u1)u1 = (1, 0, 1)^t − (1/2)(0, 1, 1)^t = (1, −1/2, 1/2)^t ,
u2 = √(2/3) (1, −1/2, 1/2)^t .

y3 = x3 − (x3, u1)u1 − (x3, u2)u2
   = (1, 1, 0)^t − (1/2)(0, 1, 1)^t − (1/3)(1, −1/2, 1/2)^t = (2/3)(1, 1, −1)^t ,
u3 = (1/√3)(1, 1, −1)^t .

Hence an orthonormal basis is

{ (1/√2)(0, 1, 1)^t,  √(2/3)(1, −1/2, 1/2)^t,  (1/√3)(1, 1, −1)^t } .
EXERCISE 3.13. Let V be an inner product space and let X, Y ⊆ V. Prove that:
(i) If X ⊆ Y, then Y⊥ ⊆ X⊥.
(ii) X⊥ = ⟨X⟩⊥.
(iii) If X is a basis of a subspace W of V, then X⊥ = W⊥.
EXAMPLE 3.15. Let

W = { (x1, x2, x3, x4)^t : x1 + x2 + x3 + x4 = 0,  2x1 − x3 − x4 = 0 }

be a subspace of R⁴. Find W⊥. Also find the orthogonal projection of e1 along W.

Observe that W = ⟨(1, 1, 1, 1)^t, (2, 0, −1, −1)^t⟩⊥. Thus
W⊥ = ⟨y1 = (1, 1, 1, 1)^t, y2 = (2, 0, −1, −1)^t⟩.
These two vectors are already orthogonal. Thus an orthonormal basis for W⊥ is
{(1/2)y1, (1/√6)y2}. Now the projection of e1 along W is (1/4)y1 + (1/3)y2.
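The projection computed in Example 3.15 can be verified numerically; a sketch assuming numpy is available:

```python
import numpy as np

y1 = np.array([1.0, 1.0, 1.0, 1.0])
y2 = np.array([2.0, 0.0, -1.0, -1.0])
e1 = np.array([1.0, 0.0, 0.0, 0.0])

# Orthonormal basis of W^perp and the projection of e1 onto W^perp.
u1, u2 = y1 / 2.0, y2 / np.sqrt(6.0)
proj = np.dot(e1, u1) * u1 + np.dot(e1, u2) * u2

# proj equals (1/4) y1 + (1/3) y2, and e1 - proj lies in W.
```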
4 ADJOINT OF A LINEAR TRANSFORMATION
Exercise 3.3 shows that if V is an inner product space, then for fixed y ∈ V, the
map v ↦ (v, y) is a linear functional. The next result states that the converse
also holds.
The (i, j)-th entry of V [T ∗ ]W is the conjugate of the (j, i)-th entry of W [T ]V .
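In coordinates this says the matrix of T∗ is the conjugate transpose. A quick numerical check with numpy (the matrix and vectors are hypothetical illustrations):

```python
import numpy as np

# Matrix of T with respect to orthonormal bases.
T = np.array([[1.0 + 1.0j, 2.0],
              [0.0,        3.0 - 1.0j]])

# Matrix of the adjoint T*: conjugate of the transposed entries.
T_star = T.conj().T

# Defining property (Tx, y) = (x, T*y) for the standard inner product.
rng = np.random.default_rng(0)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
lhs = np.vdot(y, T @ x)        # (Tx, y), since np.vdot conjugates its first argument
rhs = np.vdot(T_star @ y, x)   # (x, T*y)
```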
EXERCISE 4.6. Let T : C³ → C² be given by

T (x, y, z)^t = ( x + ιy, (1 + ι)y + 3z )^t .

Find T∗ (1 + ι, 1 − ι)^t.
DEFINITION 4.7. A linear operator T on an inner product space V is called
self-adjoint if T∗ = T. Thus T is self-adjoint if and only if the matrix of T
with respect to an orthonormal basis is Hermitian.
5 UNITARY SIMILARITY
Let us assume from here onwards that all matrices have entries in C and Cn
has the standard inner product.
EXERCISE 5.2. Show that the inverse of a unitary matrix is unitary, and that the
product of two unitary matrices is unitary. Show by example that the sum of two
unitary matrices need not be unitary.
whose first column is u. Then the first column of U1∗AU1 is λe1, and so

U1∗AU1 = [λ  ∗ ]
         [0  A1] .

Here ∗ denotes those entries which are of no interest to us. Now by the induction
hypothesis there is a unitary matrix V such that V∗A1V is upper triangular. Now if

U2 = [1  0]
     [0  V] ,

then U = U1U2 is unitary and

U∗AU = [λ    ∗   ]
       [0  V∗A1V] ,

an upper triangular matrix.
Note that the diagonal entries of an upper triangular matrix are its eigenvalues.
Hence we have the following statement.

Let A ∈ C^{n×n}. Then det(A) is the product of the eigenvalues of A, and the
trace of A is the sum of its eigenvalues.
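This statement is easy to test numerically on any matrix; a sketch assuming numpy, with an arbitrary illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

eigs = np.linalg.eigvals(A)

# det(A) is the product of the eigenvalues; trace(A) is their sum.
prod_of_eigs = np.prod(eigs)
sum_of_eigs = np.sum(eigs)
```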
Proof. (i) ‖Au‖² = (Au)∗(Au) = u∗(A∗A)u = u∗(AA∗)u = (A∗u)∗(A∗u) = ‖A∗u‖².
Hence ‖Au‖ = ‖A∗u‖.
(ii) A − λIn is also normal. Thus using (i): 0 = ‖(A − λIn)u‖ = ‖(A − λIn)∗u‖ =
‖(A∗ − λ̄In)u‖. Hence A∗u = λ̄u.
(iii) Let Au = λu and Av = µv, λ ≠ µ. Then λv∗u = v∗(Au) = (A∗v)∗u =
(µ̄v)∗u = µv∗u. Hence v∗u = 0.
EXAMPLE 5.10. The matrix

A = [2 1 1]
    [1 2 1]
    [1 1 2]

is Hermitian, and so unitarily diagonalizable. The eigenvalues of this matrix are
1 and 4. Eigenvectors corresponding to 1 are (1, −1, 0)^t and (1, 0, −1)^t, and an
eigenvector corresponding to 4 is (1, 1, 1)^t. Now by Proposition 5.9, we need to
find orthonormal eigenvectors corresponding to 1. These can be taken as
(1/√2)(1, −1, 0)^t and (1/√6)(1, 1, −2)^t. Hence

U = [ 1/√2   1/√6   1/√3]
    [−1/√2   1/√6   1/√3]
    [  0    −2/√6   1/√3] .
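One can check numerically that this U is unitary and diagonalizes the matrix; a sketch assuming numpy is available:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

s2, s6, s3 = np.sqrt(2.0), np.sqrt(6.0), np.sqrt(3.0)
U = np.array([[ 1/s2,  1/s6, 1/s3],
              [-1/s2,  1/s6, 1/s3],
              [ 0.0,  -2/s6, 1/s3]])

# U is unitary (here real orthogonal) and U* A U = diag(1, 1, 4).
D = U.T @ A @ U
```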
6 BILINEAR FORMS
where A is the n × n matrix whose (i, j)-th entry is f(ui, uj). The matrix A is
called the matrix of the bilinear form f with respect to the ordered basis B,
denoted by [f ]B.
f(x, y) = ∑_{i,j=1}^{n} αi βj φ(ui, uj),

[φ]B = P^t [φ]C P,
7 SYMMETRIC BILINEAR FORMS
Proof. Define g(x, y) = (1/2)(f(x, y) + f(y, x)) and h(x, y) = (1/2)(f(x, y) − f(y, x)).
Then verify that g is a symmetric bilinear form on V and h is a skew-symmetric
bilinear form on V such that f = g + h.
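In matrix terms, the same decomposition splits the matrix of f into symmetric and skew-symmetric parts; a sketch assuming numpy, with an arbitrary illustrative matrix:

```python
import numpy as np

# Matrix of a bilinear form f with respect to some ordered basis.
F = np.array([[1.0, 4.0],
              [2.0, 3.0]])

G = (F + F.T) / 2   # matrix of the symmetric part g
H = (F - F.T) / 2   # matrix of the skew-symmetric part h

# G + H = F, G is symmetric, and H is skew-symmetric.
```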
The triplet (p, q, s) is called the index of f , the number p + q is the rank of f ,
and the number p − q is called the signature of f.
EXAMPLE 7.1. Let

A = [1 1 2]
    [1 2 1]
    [2 1 3] .

We find a matrix Q so that Q^tAQ is diagonal. The method is as follows: we start
with A and I3; each column operation applied to A is followed by the corresponding
row operation on A, while only the column operation is applied to I3. When A has
been transformed to diagonal form, I3 has been transformed to Q.

Write A and I3 side by side:

[1 1 2]   [1 0 0]
[1 2 1]   [0 1 0]
[2 1 3]   [0 0 1]

Add −1 times the first column to the second column, and do the same on rows:

[1  0  2]   [1 −1 0]
[0  1 −1]   [0  1 0]
[2 −1  3]   [0  0 1]

Add −2 times the first column to the third column, and do the same on rows:

[1  0  0]   [1 −1 −2]
[0  1 −1]   [0  1  0]
[0 −1 −1]   [0  0  1]

Add the second column to the third column, and do the same on rows:

[1 0  0]   [1 −1 −3]
[0 1  0]   [0  1  1]
[0 0 −2]   [0  0  1]

Hence

Q = [1 −1 −3]             [1 0  0]
    [0  1  1]   and   Q^tAQ = [0 1  0]
    [0  0  1]             [0 0 −2] .
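The result of Example 7.1 can be verified numerically; a sketch assuming numpy is available:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])

Q = np.array([[1.0, -1.0, -3.0],
              [0.0,  1.0,  1.0],
              [0.0,  0.0,  1.0]])

# Congruence by Q diagonalizes the symmetric matrix A: Q^t A Q = diag(1, 1, -2).
D = Q.T @ A @ Q
```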