Matrix (matrices)
DEFINITION
An m × n matrix is a rectangular array of numbers arranged in m rows and n columns. The entry in row i and column j is written aij:

             Column 1  Column 2  Column 3  Column 4
    Row 1  [   a11       a12       a13       a14   ]
    Row 2  [   a21       a22       a23       a24   ]
    Row 3  [   a31       a32       a33       a34   ]
     ...
    Row m  [   am1       am2       am3       am4   ]
Examples: state the dimension (rows × columns) of each matrix.

1.)  [ 2  5  10 ]
     [ 6  7   8 ]        2 × 3

2.)  [ 3  0 ]
     [ 0  3 ]            2 × 2

3.)  [ 1  2  3 ]
     [ 0  1  8 ]
     [ 0  0  1 ]         3 × 3

4.)  [ 3  4 ]            1 × 2

5.)  [ 5 ]
     [ 1 ]               2 × 1

6.)  [ 3 ]               1 × 1

7.)  [ 5  1 ]
     [ 4  4 ]
     [ 0  3 ]            3 × 2
Matrix Addition:
Two matrices of the same dimension are added entry by entry.

Example:

    [ 2  1  3 ]   [ 0  0  0 ]   [ 2  1  3 ]
    [ 1  0  1 ] + [ 0  0  0 ] = [ 1  0  1 ]

When a zero matrix is added to another
matrix of the same dimension, that same
matrix is obtained.
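The zero-matrix rule above can be checked numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Matrices of the same dimension are added entry by entry.
A = np.array([[2, 1, 3],
              [1, 0, 1]])
Z = np.zeros((2, 3), dtype=int)  # the 2 x 3 zero matrix

S = A + Z  # adding the zero matrix leaves A unchanged
print(S)
```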
PRACTICE PROBLEMS:

1.)  [ 4  1 ]   [ 6  -5 ]   [ -2  6 ]
     [ 6  3 ] - [-7   3 ] = [ 13  0 ]

2.)  [ 2  1  5 ]   [ 1  4  7 ]   [ 1  -3  -2 ]
     [ 6  4  3 ] - [ 2  4  8 ] = [ 4   0  -5 ]
Scalar Multiplication:
Each entry of the matrix is multiplied by the scalar k:

    k · [ 1  2  3 ]   [ 1k  2k  3k ]
        [ 4  5  6 ] = [ 4k  5k  6k ]
Examples:

1.)  3 · [ 3  0 ]   [  9   0 ]
         [ 4  5 ] = [ 12  15 ]

2.)  5 · [ 1   2x  ]   [  5   10x   ]
         [ 4  y-1  ] = [ 20  5y-5   ]
         [ 0  5-x  ]   [  0  25-5x  ]
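Example 1 can be reproduced directly, since scalar multiplication is entrywise; a quick check, assuming NumPy:

```python
import numpy as np

A = np.array([[3, 0],
              [4, 5]])
k = 3

# Each entry is multiplied by the scalar k.
print(k * A)  # [[ 9  0]
              #  [12 15]]
```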
Math for CS
Lecture 2

A system of n linear equations in n unknowns,

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn

can be written in matrix form as

    A x = b                  (1)

Multiplying both sides by A-1,

    A-1 A x = A-1 b          (2)

Due to the definition of A-1:

    A-1 A x = I x = x
    x = A-1 b
Consistency (Solvability)
A-1 does not exist for every A.
The linear system of equations Ax=b has a
solution, or is said to be consistent, if
Rank{A}=Rank{A|b}.
A system is inconsistent when
Rank{A}<Rank{A|b}.
Rank{A} is the maximum number of linearly independent
columns or rows of A. Rank can be found by using ERO
(Elementary Row Operations) or ECO (Elementary Column
Operations).
An inconsistent example:

    [ 1  2 ] [ x1 ]   [ 4 ]
    [ 2  4 ] [ x2 ] = [ 5 ]

ERO on the augmented matrix gives

    [ 1  2 |  4 ]
    [ 0  0 | -3 ]

so Rank{A}=1 but Rank{A|b}=2: the system is inconsistent.
Geometric interpretation: the two equations describe parallel lines that never intersect.
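The rank test above can be carried out with `numpy.linalg.matrix_rank`; a sketch of the inconsistent example, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([[4.0],
              [5.0]])
Ab = np.hstack([A, b])  # augmented matrix [A|b]

rank_A = np.linalg.matrix_rank(A)    # 1
rank_Ab = np.linalg.matrix_rank(Ab)  # 2
print(rank_A < rank_Ab)              # True -> inconsistent, no solution
```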
Uniqueness of solutions
The system has a unique solution if
Rank{A}=Rank{A|b}= n,
where n is the order of the system.
If Rank{A}=n:
Det{A} ≠ 0, A-1 exists, and a unique solution exists.

Example:

    [ 1  2 ] [ x1 ]   [ 4 ]
    [ 1  1 ] [ x2 ] = [ 2 ]
If Rank{A}=m<n:
Det{A} = 0, so A is singular and not invertible;
there are infinitely many solutions (n-m free variables):
an under-determined system.

Example:

    [ 1  2 ] [ x1 ]   [ 4 ]
    [ 2  4 ] [ x2 ] = [ 8 ]

Rank{A}=Rank{A|b}=1
Consistent, so solvable.
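The two cases can be illustrated with NumPy (assumed): `np.linalg.solve` succeeds exactly when Rank{A} = n, while the under-determined example has rank 1:

```python
import numpy as np

# Unique solution: Rank{A} = Rank{A|b} = n = 2, Det{A} != 0
A = np.array([[1.0, 2.0],
              [1.0, 1.0]])
b = np.array([4.0, 2.0])
x = np.linalg.solve(A, b)  # x = A^-1 b

# Under-determined: Rank{A} = Rank{A|b} = 1 < n, infinitely many solutions
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
c = np.array([[4.0],
              [8.0]])
rank_B = np.linalg.matrix_rank(B)
rank_Bc = np.linalg.matrix_rank(np.hstack([B, c]))
print(rank_B, rank_Bc)  # 1 1 -> consistent, n - m = 1 free variable
```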
Example:

Show that x = [  2 ]  is an eigenvector for A = [ 2  4 ]
              [ -1 ]                            [ 3  6 ]

Solution:

    Ax = [ 2  4 ] [  2 ]   [ 0 ]
         [ 3  6 ] [ -1 ] = [ 0 ]

But for λ = 0,

    λx = 0 · [  2 ]   [ 0 ]
             [ -1 ] = [ 0 ]

so Ax = λx. Thus, x is an eigenvector of A, and λ = 0 is an eigenvalue.
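This check is one line in NumPy (assumed): multiplying A by x gives the zero vector, i.e. Ax = 0·x:

```python
import numpy as np

A = np.array([[2, 4],
              [3, 6]])
x = np.array([2, -1])

print(A @ x)  # [0 0]  ->  Ax = 0 * x, so lambda = 0 is an eigenvalue
```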
Eigenvalues

Ax = λx, or

    (A − λI)x = 0

If we define a new matrix B = A − λI, then

    Bx = 0

If B has an inverse then x = B-1·0 = 0. But an eigenvector cannot be zero.
Thus, it follows that x will be an eigenvector of A if and only if B does
not have an inverse, or equivalently det(B)=0, or

    det(A − λI) = 0

This is called the characteristic equation of A. Its roots determine the
eigenvalues of A.
Eigenvalues: examples

Example 1: Find the eigenvalues of

    A = [ -2  -12 ]
        [  1    5 ]

    λI − A = [ λ+2   12  ]
             [ -1    λ-5 ]

    det(λI − A) = (λ+2)(λ-5) + 12 = λ² − 3λ + 2 = (λ-1)(λ-2)

two eigenvalues: λ = 1, λ = 2

Note: The roots of the characteristic equation can be repeated. That is, λ1 = λ2 = ... = λk.
If that happens, the eigenvalue is said to be of multiplicity k.
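The same eigenvalues fall out of `np.linalg.eigvals` (NumPy assumed), which solves the characteristic equation numerically:

```python
import numpy as np

A = np.array([[-2.0, -12.0],
              [ 1.0,   5.0]])

w = np.sort(np.linalg.eigvals(A))
print(w)  # approximately [1. 2.]
```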
Example 2: Find the eigenvalues of

    A = [ 2  1  0 ]
        [ 0  2  0 ]
        [ 0  0  2 ]

    det(λI − A) = (λ-2)³ = 0

λ = 2 is an eigenvalue of multiplicity 3.
Eigenvectors

To each distinct eigenvalue of a matrix A there corresponds at least one eigenvector,
which can be found by solving the appropriate set of homogeneous equations: if λi is
an eigenvalue, then the corresponding eigenvector xi is the solution of

    (A − λiI)xi = 0

Example 1 (cont.):

λ1 = 1:

    (1)I − A = [  3  12 ]
               [ -1  -4 ]

    x1 + 4x2 = 0   =>   x1 = -4t, x2 = t

    x1 = [ x1 ]  =  t [ -4 ],   t ≠ 0
         [ x2 ]       [  1 ]

λ2 = 2:

    (2)I − A = [  4  12 ]
               [ -1  -3 ]

    x1 + 3x2 = 0   =>   x2 = s [ -3 ],   s ≠ 0
                               [  1 ]
Eigenvectors

Example 2 (cont.): Find the eigenvectors of

    A = [ 2  1  0 ]
        [ 0  2  0 ]
        [ 0  0  2 ]

    (2I − A)x = 0:   [ 0  -1  0 ] [ x1 ]   [ 0 ]
                     [ 0   0  0 ] [ x2 ] = [ 0 ]
                     [ 0   0  0 ] [ x3 ]   [ 0 ]

so x2 = 0. Let x1 = s, x3 = t. The eigenvectors of λ = 2 are of the form

    x = [ x1 ]   [ s ]       [ 1 ]       [ 0 ]
        [ x2 ] = [ 0 ]  =  s [ 0 ]  +  t [ 0 ]
        [ x3 ]   [ t ]       [ 0 ]       [ 1 ]
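Any vector of the form [s, 0, t] can be verified as an eigenvector for λ = 2; a quick check, assuming NumPy:

```python
import numpy as np

A = np.array([[2, 1, 0],
              [0, 2, 0],
              [0, 0, 2]])

# A few vectors of the form [s, 0, t]:
for v in (np.array([1, 0, 0]), np.array([0, 0, 1]), np.array([3, 0, -5])):
    print((A @ v == 2 * v).all())  # True each time
```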
LINEAR INDEPENDENCE

Definition: A set of vectors {v1, ..., vp} in Rn
is said to be linearly independent if the vector
equation

    x1 v1 + x2 v2 + ... + xp vp = 0

has only the trivial solution. The set {v1, ..., vp} is
said to be linearly dependent if there exist
weights c1, ..., cp, not all zero, such that

    c1 v1 + c2 v2 + ... + cp vp = 0
Example: Let

    v1 = [ 1 ],   v2 = [ 4 ],   v3 = [ 2 ]
         [ 2 ]         [ 5 ]         [ 1 ]
         [ 3 ]         [ 6 ]         [ 0 ]

Determine whether {v1, v2, v3} is linearly dependent by solving x1 v1 + x2 v2 + x3 v3 = 0.

Row reduce the augmented matrix:

    [ 1  4  2  0 ]     [ 1  4  2  0 ]     [ 1  0  -2  0 ]
    [ 2  5  1  0 ]  ~  [ 0 -3 -3  0 ]  ~  [ 0  1   1  0 ]
    [ 3  6  0  0 ]     [ 0  0  0  0 ]     [ 0  0   0  0 ]

    x1 - 2x3 = 0
    x2 +  x3 = 0

Thus x1 = 2x3, x2 = -x3, and x3 is free.
Choose any nonzero value for x3, say x3 = 5. Then x1 = 10 and x2 = -5:

    10 v1 - 5 v2 + 5 v3 = 0

so the set is linearly dependent.
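Both the dependence relation and the rank deficiency can be checked numerically (NumPy assumed):

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([4, 5, 6])
v3 = np.array([2, 1, 0])

print(10 * v1 - 5 * v2 + 5 * v3)  # [0 0 0] -> the nontrivial relation found above

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))   # 2 < 3, so the set is linearly dependent
```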
CAYLEY-HAMILTON THEOREM

Every square matrix satisfies its own characteristic equation.

Let A = [aij]n×n be a square matrix,

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [ ...  ...  ...  ... ]
        [ an1  an2  ...  ann ]

Its characteristic equation is |A − λI| = 0:

    | a11-λ   a12    ...   a1n   |
    | a21     a22-λ  ...   a2n   |  =  0
    | ...     ...    ...   ...   |
    | an1     an2    ...   ann-λ |

which expands to

    p0 λ^n + p1 λ^(n-1) + p2 λ^(n-2) + ... + pn = 0

By the theorem, A satisfies this equation, so

    A-1 = -(1/pn) [ p0 A^(n-1) + p1 A^(n-2) + p2 A^(n-3) + ... + pn-1 I ]

Note 1:- This result gives the inverse of A in terms of (n-1) powers of A and is considered a practical
method for computing the inverse of large matrices.
Note 2:- If m is a positive integer such that m > n,
then any positive integral power Am of A is linearly
expressible in terms of powers of lower degree.
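The inverse formula can be sketched generically; `np.poly` returns the characteristic-polynomial coefficients p0, ..., pn, from which the powers of A are accumulated by Horner's scheme (NumPy assumed; a sketch, not a production implementation):

```python
import numpy as np

def inverse_via_cayley_hamilton(A):
    """Invert A using A^-1 = -(1/pn)(p0 A^(n-1) + p1 A^(n-2) + ... + pn-1 I)."""
    p = np.poly(A)        # coefficients p0..pn of det(lambda*I - A), p0 = 1
    n = A.shape[0]
    acc = np.zeros((n, n))
    for coeff in p[:-1]:  # Horner: acc ends as p0 A^(n-1) + ... + pn-1 I
        acc = acc @ A + coeff * np.eye(n)
    return -acc / p[-1]

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(np.allclose(inverse_via_cayley_hamilton(A), np.linalg.inv(A)))  # True
```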
Example 1: Verify the Cayley-Hamilton theorem for

    A = [  2  -1   1 ]
        [ -1   2  -1 ]
        [  1  -1   2 ]

and use it to find A-1.

The characteristic equation |A − λI| = 0 gives (on simplification)

    λ³ − 6λ² + 9λ − 4 = 0

Now

    A² = A·A = [  6  -5   5 ]
               [ -5   6  -5 ]
               [  5  -5   6 ]

    A³ = A²·A = [  22  -21   21 ]
                [ -21   22  -21 ]
                [  21  -21   22 ]

so

    A³ − 6A² + 9A − 4I

    = [  22  -21   21 ]       [  6  -5   5 ]       [  2  -1   1 ]       [ 1  0  0 ]   [ 0  0  0 ]
      [ -21   22  -21 ] - 6 · [ -5   6  -5 ] + 9 · [ -1   2  -1 ] - 4 · [ 0  1  0 ] = [ 0  0  0 ]
      [  21  -21   22 ]       [  5  -5   6 ]       [  1  -1   2 ]       [ 0  0  1 ]   [ 0  0  0 ]

which verifies the theorem. Multiplying A³ − 6A² + 9A − 4I = 0 by A-1,

    A-1 = (1/4)(A² − 6A + 9I) = (1/4) [  3   1  -1 ]
                                      [  1   3   1 ]
                                      [ -1   1   3 ]
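The verification in Example 1 can be repeated numerically (NumPy assumed):

```python
import numpy as np

A = np.array([[ 2, -1,  1],
              [-1,  2, -1],
              [ 1, -1,  2]])
I = np.eye(3, dtype=int)

A2 = A @ A
A3 = A2 @ A
print(A3 - 6 * A2 + 9 * A - 4 * I)  # the 3 x 3 zero matrix

Ainv = (A2 - 6 * A + 9 * I) / 4
print(np.allclose(A @ Ainv, np.eye(3)))  # True
```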
Example 2: Given

    A = [  1  2  1 ]
        [  0  1  1 ]
        [ -3  1  1 ]

verify the Cayley-Hamilton theorem and use it to find A-1 and Adj A.

The characteristic equation (on simplification) is

    λ³ − 3λ² + 5λ + 3 = 0

so by the Cayley-Hamilton theorem,

    A³ − 3A² + 5A + 3I = 0        ...(1)

Multiplying (1) by A-1,

    A-1 = -(1/3)(A² − 3A + 5I)

Now

    A² = [ -2   5   4 ]
         [ -3   2   2 ]
         [ -6  -4  -1 ]

From (1),

    A-1 = -(1/3) ( [ -2   5   4 ]       [  1  2  1 ]       [ 1  0  0 ] )
                 ( [ -3   2   2 ] - 3 · [  0  1  1 ] + 5 · [ 0  1  0 ] )
                 ( [ -6  -4  -1 ]       [ -3  1  1 ]       [ 0  0  1 ] )

    A-1 = (1/3) [  0   1  -1 ]
                [  3  -4   1 ]
                [ -3   7  -1 ]

We know that A-1 = Adj A / |A|, so Adj A = A-1 · |A|.

Now |A| = -3, so

    Adj A = (-3) · A-1 = [  0  -1   1 ]
                         [ -3   4  -1 ]
                         [  3  -7   1 ]
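A numerical check of Example 2 (NumPy assumed; matrix entries as reconstructed above):

```python
import numpy as np

A = np.array([[ 1.0, 2.0, 1.0],
              [ 0.0, 1.0, 1.0],
              [-3.0, 1.0, 1.0]])
Ainv = np.array([[ 0.0,  1.0, -1.0],
                 [ 3.0, -4.0,  1.0],
                 [-3.0,  7.0, -1.0]]) / 3.0

print(np.allclose(A @ Ainv, np.eye(3)))   # True
print(np.allclose(np.linalg.det(A), -3))  # True, so Adj A = -3 * A^-1
```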
48
DIAGONALISATION OF A
MATRIX
Diagonalisation of a matrix A is the process of
reduction A to a diagonal form.
If A is related to D by a similarity transformation,
such that D = M-1AM then A is reduced to the
diagonal matrix D through modal matrix M. D is
also called spectral matrix of A.
49
REDUCTION OF A MATRIX TO
DIAGONAL FORM
If a square matrix A of order n has n linearly
independent eigenvectors, then a matrix B can
be found such that B-1AB is a diagonal matrix.
Note:- The matrix B which diagonalises A is called
the modal matrix of A and is obtained by
grouping the eigenvectors of A into a square
matrix.
Example: Reduce the matrix

    A = [ 1  -1   2 ]
        [ 0   2  -1 ]
        [ 0   0   3 ]

to diagonal form.

The characteristic equation |A − λI| = 0 gives

    (1-λ)(2-λ)(3-λ) = 0   =>   λ = 1, 2, 3

Hence the eigenvalues of A are 1, 2, 3.

Corresponding to λ = 1, let X1 = [ x1 ]  be the eigenvector; then
                                 [ x2 ]
                                 [ x3 ]

    (A − I)X1 = 0:

    [ 0  -1   2 ] [ x1 ]   [ 0 ]
    [ 0   1  -1 ] [ x2 ] = [ 0 ]
    [ 0   0   2 ] [ x3 ]   [ 0 ]

    -x2 + 2x3 = 0
     x2 -  x3 = 0
          2x3 = 0

    =>  x1 = k1, x2 = 0 = x3

    X1 = k1 [ 1 ]
            [ 0 ]
            [ 0 ]
Corresponding to λ = 2, let X2 = [ x1 ]  be the eigenvector; then
                                 [ x2 ]
                                 [ x3 ]

    (A − 2I)X2 = 0:

    [ -1  -1   2 ] [ x1 ]   [ 0 ]
    [  0   0  -1 ] [ x2 ] = [ 0 ]
    [  0   0   1 ] [ x3 ]   [ 0 ]

    -x1 - x2 + 2x3 = 0
                x3 = 0

    =>  x1 = k2, x2 = -k2, x3 = 0

    X2 = k2 [  1 ]
            [ -1 ]
            [  0 ]
Corresponding to λ = 3, let X3 = [ x1 ]  be the eigenvector; then
                                 [ x2 ]
                                 [ x3 ]

    (A − 3I)X3 = 0:

    [ -2  -1   2 ] [ x1 ]   [ 0 ]
    [  0  -1  -1 ] [ x2 ] = [ 0 ]
    [  0   0   0 ] [ x3 ]   [ 0 ]

    -2x1 - x2 + 2x3 = 0
          -x2 -  x3 = 0

    =>  x2 = -k3, x3 = k3, x1 = (3/2)k3

    X3 = (k3/2) [  3 ]
                [ -2 ]
                [  2 ]
The modal matrix is

    M = [ 1   1   3 ]
        [ 0  -1  -2 ]
        [ 0   0   2 ]

with |M| = -2 and

    Adj M = [ -2  -2   1 ]
            [  0   2   2 ]
            [  0   0  -1 ]

    M-1 = (1/|M|) Adj M = [ 1   1  -1/2 ]
                          [ 0  -1  -1   ]
                          [ 0   0   1/2 ]
    M-1AM = [ 1   1  -1/2 ] [ 1  -1   2 ] [ 1   1   3 ]   [ 1  0  0 ]
            [ 0  -1  -1   ] [ 0   2  -1 ] [ 0  -1  -2 ] = [ 0  2  0 ] = D
            [ 0   0   1/2 ] [ 0   0   3 ] [ 0   0   2 ]   [ 0  0  3 ]

Now, since

    D = M-1AM   =>   A = MDM-1

    A² = (MDM-1)(MDM-1) = MD²M-1     [since M-1M = I]
Similarly,

    A³ = MD³M-1

    A³ = [ 1   1   3 ] [ 1  0   0 ] [ 1   1  -1/2 ]   [ 1  -7   32 ]
         [ 0  -1  -2 ] [ 0  8   0 ] [ 0  -1  -1   ] = [ 0   8  -19 ]
         [ 0   0   2 ] [ 0  0  27 ] [ 0   0   1/2 ]   [ 0   0   27 ]
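The whole computation can be checked with NumPy (assumed), using A and the modal matrix M from above:

```python
import numpy as np

A = np.array([[1.0, -1.0,  2.0],
              [0.0,  2.0, -1.0],
              [0.0,  0.0,  3.0]])
M = np.array([[1.0,  1.0,  3.0],
              [0.0, -1.0, -2.0],
              [0.0,  0.0,  2.0]])
Minv = np.linalg.inv(M)

D = Minv @ A @ M
print(np.allclose(D, np.diag([1.0, 2.0, 3.0])))   # True: D = diag(1, 2, 3)

A3 = M @ np.diag(np.diag(D) ** 3) @ Minv          # A^3 = M D^3 M^-1
print(np.allclose(A3, np.linalg.matrix_power(A, 3)))  # True
```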
ORTHOGONAL TRANSFORMATION
OF A SYMMETRIC MATRIX TO
DIAGONAL FORM
A square matrix A with real elements is said to
be orthogonal if AA' = I = A'A.
Since AA-1 = I = A-1A, it follows that A is orthogonal if
A' = A-1.
Diagonalisation by orthogonal transformation is
possible only for a real symmetric matrix.
Example: Reduce the matrix

    A = [ 2  0  4 ]
        [ 0  6  0 ]
        [ 4  0  2 ]

to diagonal form by means of an
orthogonal transformation.

Solution: The characteristic equation of A is

    | 2-λ   0    4  |
    |  0   6-λ   0  |  =  0
    |  4    0   2-λ |

    (2-λ)(6-λ)(2-λ) − 16(6-λ) = 0
    (6-λ)[(2-λ)² − 16] = 0
    λ = -2, 6, 6
When λ = -2, let X1 = [ x1 ]  be the eigenvector; then
                      [ x2 ]
                      [ x3 ]

    (A + 2I)X1 = 0:

    [ 4  0  4 ] [ x1 ]   [ 0 ]
    [ 0  8  0 ] [ x2 ] = [ 0 ]
    [ 4  0  4 ] [ x3 ]   [ 0 ]

    4x1 + 4x3 = 0    ...(1)
          8x2 = 0    ...(2)
    4x1 + 4x3 = 0    ...(3)

    x1 = k1, x2 = 0, x3 = -k1

    X1 = k1 [  1 ]
            [  0 ]
            [ -1 ]
When λ = 6, let X2 = [ x1 ]  be the eigenvector; then
                     [ x2 ]
                     [ x3 ]

    (A − 6I)X2 = 0:

    [ -4  0   4 ] [ x1 ]   [ 0 ]
    [  0  0   0 ] [ x2 ] = [ 0 ]
    [  4  0  -4 ] [ x3 ]   [ 0 ]

    -4x1 + 4x3 = 0
     4x1 - 4x3 = 0

so x1 = x3 and x2 is arbitrary. Since λ = 6 is repeated, x2 must be so chosen
that X2 and X3 are orthogonal among themselves
and also each is orthogonal to X1. Take

    X2 = [ 0 ]
         [ 1 ]
         [ 0 ]

and let X3 = [ a ]
             [ b ]
             [ c ]

Since X3 is orthogonal to X1:  a - c = 0    ...(4)
X3 is orthogonal to X2:        b = 0        ...(5)

Together with x1 = x3 this gives

    X3 = [ 1 ]
         [ 0 ]
         [ 1 ]
Normalising each eigenvector, the normalised modal matrix is

    N = [  1/√2   0   1/√2 ]
        [   0     1    0   ]
        [ -1/√2   0   1/√2 ]

Then

    D = N'AN = [ 1/√2   0  -1/√2 ] [ 2  0  4 ] [  1/√2   0   1/√2 ]
               [  0     1    0   ] [ 0  6  0 ] [   0     1    0   ]
               [ 1/√2   0   1/√2 ] [ 4  0  2 ] [ -1/√2   0   1/√2 ]

    D = [ -2  0  0 ]
        [  0  6  0 ]
        [  0  0  6 ]

which is the required diagonal matrix.
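As a final check (NumPy assumed), N is orthogonal and N'AN reproduces the diagonal matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0, 4.0],
              [0.0, 6.0, 0.0],
              [4.0, 0.0, 2.0]])
s = 1.0 / np.sqrt(2.0)
N = np.array([[  s, 0.0,   s],
              [0.0, 1.0, 0.0],
              [ -s, 0.0,   s]])

print(np.allclose(N.T @ N, np.eye(3)))                      # True: N is orthogonal
print(np.allclose(N.T @ A @ N, np.diag([-2.0, 6.0, 6.0])))  # True: D = diag(-2, 6, 6)
```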