
CHAPTER 3

Solving Systems of Equations


3.1 Systems of Linear Equations

Consider a system of n linear algebraic equations in n unknowns:

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

where the a_ij (i, j = 1, 2, ..., n) are the known coefficients, the b_i (i = 1, 2, ..., n) are the known values and the x_i (i = 1, 2, ..., n) are the unknowns to be determined. In matrix form the above system can be written as

A x = b ... (*)

where A = [a_ij] is the coefficient matrix, b = [b_i] is a known column vector and x = [x_i] is the unknown column vector.

The methods of solution of linear systems are classified into two groups.

1. Direct Methods: direct methods give the exact solution after a finite number of steps, assuming that no round-off errors occur.
2. Indirect or Iterative Methods: iterative methods give a sequence of approximate solutions which converges to the exact solution as the number of steps tends to infinity.

3.1.1 Direct Methods

The system of equations A x = b (*) can be directly solved in the following cases.

1) A = D, where D is a diagonal matrix. Equation (*) becomes d_ii x_i = b_i, i = 1, 2, ..., n, and the solution is given by x_i = b_i / d_ii.

2) A = L, where L is a lower triangular matrix. Equation (*) can be written as

l11 x1 = b1
l21 x1 + l22 x2 = b2
...
ln1 x1 + ln2 x2 + ... + lnn xn = bn

Since the unknowns are solved for in the order x1, x2, ..., xn, this method is called the forward substitution method.

3) A = U, where U is an upper triangular matrix. Equation (*) can be written as

u11 x1 + u12 x2 + ... + u1n xn = b1
u22 x2 + ... + u2n xn = b2
...
unn xn = bn

The unknowns are solved for in the reverse order xn, x(n-1), ..., x1 by back substitution, and this method is called the back substitution method.

By: Teshome B.
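The two substitution procedures can be sketched in Python as follows (an illustrative sketch, not from these notes; the function names are mine):

```python
def forward_substitution(L, b):
    # Solve L y = b for a lower triangular L with nonzero diagonal,
    # computing y1, y2, ..., yn in order.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def back_substitution(U, b):
    # Solve U x = b for an upper triangular U with nonzero diagonal,
    # computing xn, x(n-1), ..., x1 in reverse order.
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x
```

For example, forward_substitution([[2, 0], [1, 3]], [4, 7]) gives y1 = 2 and then y2 = (7 - 1*2)/3.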

3.1.1.1 Cramer's Rule

Cramer's Rule is an analytical method for solving the given system of equations A x = b when det(A) is non-zero, and the solution takes the form

x_i = det(A_i) / det(A), i = 1, 2, ..., n,

where A_i is the matrix obtained by replacing the i-th column of A by the vector b.
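Cramer's rule can be sketched directly from this formula (an illustrative sketch with my own function names and a sample system, practical only for small n):

```python
def det(M):
    # Determinant by cofactor expansion along the first row (fine for small n).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += ((-1) ** j) * M[0][j] * det(minor)
    return total

def cramer(A, b):
    # x_i = det(A_i) / det(A), where A_i has column i replaced by b.
    d = det(A)
    n = len(b)
    x = []
    for i in range(n):
        Ai = [row[:] for row in A]
        for r in range(n):
            Ai[r][i] = b[r]
        x.append(det(Ai) / d)
    return x
```

For the sample system x + 3y + 2z = 17, x + 2y + 3z = 16, 2x - y + 4z = 13, det(A) = 7 and the rule yields (x, y, z) = (4, 3, 2).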

3.1.2 Matrix Inversion Method

Consider the linear system in matrix form A x = b. If the matrix A is non-singular, that is, if det(A) is not equal to zero, then A^(-1) exists. Multiplying both sides by A^(-1), we get the solution of the linear system:

x = A^(-1) b.
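As an illustrative sketch (the function names and the pivoting choice are mine, not from these notes), A^(-1) can be obtained by Gauss-Jordan elimination on the augmented matrix [A | I], after which x = A^(-1) b is a matrix-vector product:

```python
def inverse(A):
    # Invert A by Gauss-Jordan elimination on [A | I], with partial pivoting.
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))  # pivot row
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [row[n:] for row in M]

def solve_by_inversion(A, b):
    # x = A^(-1) b
    Ainv = inverse(A)
    n = len(b)
    return [sum(Ainv[i][j] * b[j] for j in range(n)) for i in range(n)]
```

For A = [[2, 1], [1, 1]], the inverse is [[1, -1], [-1, 2]], so b = (3, 2) gives x = (1, 1). In practice the inverse is rarely formed explicitly; elimination on [A | b] is cheaper.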

3.1.3 Gaussian Elimination Method

Here, the unknowns are eliminated by combining equations so that the n equations in n unknowns are reduced to an equivalent upper triangular system, which is then solved by the back substitution method. Consider the 3 x 3 linear system

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

In the first stage of elimination, multiply the first row by m21 = a21/a11 and by m31 = a31/a11 and subtract it from the second and third rows respectively. We get

a11 x1 + a12 x2 + a13 x3 = b1
a22' x2 + a23' x3 = b2'
a32' x2 + a33' x3 = b3'

where a_ij' = a_ij - m_i1 a_1j and b_i' = b_i - m_i1 b1 for i, j = 2, 3. In the second stage of elimination, multiply the second row by m32 = a32'/a22' and subtract it from the third row. We get

a11 x1 + a12 x2 + a13 x3 = b1
a22' x2 + a23' x3 = b2'
a33'' x3 = b3''

where a33'' = a33' - m32 a23' and b3'' = b3' - m32 b2'. This is an upper triangular system and can be solved by the back substitution method. Therefore, the Gaussian elimination method gives

x3 = b3''/a33'', x2 = (b2' - a23' x3)/a22', x1 = (b1 - a12 x2 - a13 x3)/a11.
Example: Solve the linear system

x1 + 2x2 + x3 = 0
2x1 + 2x2 + 3x3 = 3
x1 + 3x2 = -2

by using Gaussian elimination.

Solution: The augmented matrix is

[ 1  2  1 :  0 ]
[ 2  2  3 :  3 ]
[ 1  3  0 : -2 ]

1st Step: Multiply the first row by m21 = 2 and by m31 = 1 and subtract it from rows two and three respectively. We get

[ 1   2   1 :  0 ]
[ 0  -2   1 :  3 ]
[ 0   1  -1 : -2 ]

2nd Step: Multiply the second row by m32 = 1/(-2) = -1/2 and subtract it from row three. We get

[ 1   2     1  :   0  ]
[ 0  -2     1  :   3  ]
[ 0   0  -1/2  : -1/2 ]

Writing this in linear system form, we have

x1 + 2x2 + x3 = 0
-2x2 + x3 = 3
-(1/2) x3 = -1/2

By back substitution, x3 = 1, x2 = (3 - x3)/(-2) = -1 and x1 = -2x2 - x3 = 1.
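The elimination-plus-back-substitution procedure can be sketched in Python (an illustrative sketch with my own function name; no pivoting, so nonzero pivots are assumed). Applied to the system x1 + 2x2 + x3 = 0, 2x1 + 2x2 + 3x3 = 3, x1 + 3x2 = -2, it returns (1, -1, 1):

```python
def gauss_eliminate(A, b):
    # Forward elimination to upper triangular form, then back substitution.
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier m_ik
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```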

Exercise: Solve the following systems of equations by using the Gaussian elimination method.

3.1.4. Gauss-Jordan Method

The Gauss-Jordan method is an extension of the Gauss elimination method. The set of equations A x = b is reduced to a diagonal set I x = b', where I is the unit (identity) matrix. This is equivalent to x = b', so the solution vector is obtained directly from b'. The Gauss-Jordan method implements the same series of row operations as the Gauss elimination process. The main difference is that it applies these operations below as well as above the diagonal, so that all off-diagonal elements of the matrix are reduced to zero; hence back substitution is not required. The Gauss-Jordan method also provides the inverse of the coefficient matrix A along with the solution vector x. It is widely used because of its stability and direct procedure, although it requires more computational effort than the Gauss elimination process.
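The reduction of [A | b] to [I | b'] can be sketched in Python (an illustrative sketch with my own function name; partial pivoting is added for safety):

```python
def gauss_jordan(A, b):
    # Reduce the augmented matrix [A | b] to [I | x]; no back substitution needed.
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]          # scale pivot row to 1
        for r in range(n):                          # clear the column above AND below
            if r != col:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]
```

Applied to x + 3y + 2z = 17, x + 2y + 3z = 16, 2x - y + 4z = 13, the last column of the reduced matrix is (4, 3, 2).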
Example 1: Solve the following equations by the Gauss-Jordan method.

x + 3y + 2z = 17
x + 2y + 3z = 16
2x - y + 4z = 13

Solution: Reducing the augmented matrix [A : b] to the form [I : b'] gives x = 4, y = 3 and z = 2.

Example 2: Solve the following system of equations using the Gauss-Jordan method.

x + 2y = 4
5y + z = 9
4x - 3z = 10

Solution: Reducing the augmented matrix [A : b] to the form [I : b'], the last matrix represents the system with x = -2, y = 3 and z = -6.

3.1.5. LU Decomposition Method

This method is also known as the triangularization or factorization method. In this method, the coefficient matrix A of the system of equations is decomposed or factored into the product of a lower triangular matrix L and an upper triangular matrix U. We write the matrix A as

A = L U

where, for the 3 x 3 case,

L = [ l11   0    0  ]        U = [ u11  u12  u13 ]
    [ l21  l22   0  ]            [  0   u22  u23 ]
    [ l31  l32  l33 ]            [  0    0   u33 ]

By multiplying the matrices L and U and comparing with the coefficient matrix A, we get n^2 equations in n^2 + n unknowns. To produce a unique solution it is convenient to choose either l_ii = 1 or u_ii = 1:

i. when we choose l_ii = 1, the method is called Doolittle's Method;
ii. when we choose u_ii = 1, the method is called Crout's Method.

Having determined the matrices L and U, the above system of equations becomes L U x = b. We can write this system of equations as two systems of equations:

(i) U x = y
(ii) L y = b

The unknowns y_i in (ii) are solved by the forward substitution method and the unknowns x_i in (i) are then obtained by the backward substitution method. Alternatively, we can find L^(-1) and U^(-1) to get y = L^(-1) b and x = U^(-1) y. The inverse of A can be determined from A^(-1) = U^(-1) L^(-1).
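Doolittle's factorization and the two-stage solve can be sketched in Python (an illustrative sketch with my own function names; nonzero pivots are assumed):

```python
def lu_doolittle(A):
    # Doolittle factorization: A = L U with unit diagonal on L (l_ii = 1).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for r in range(i + 1, n):      # column i of L
            L[r][i] = (A[r][i] - sum(L[r][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(A, b):
    # Solve L y = b by forward substitution, then U x = y by back substitution.
    L, U = lu_doolittle(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

For A = [[2, 1], [4, 3]] this gives L = [[1, 0], [2, 1]] and U = [[2, 1], [0, 1]], and with b = (3, 7) the solution is (1, 1).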
Example 1. Solve the following equations by the LU decomposition method.

Solution: By using Doolittle's method we write A = L U with l_ii = 1. Multiplying the two matrices on the left-hand side and equating with the corresponding entries on the right-hand side gives the elements of L and U, and the given system can be written as the two systems L y = b and U x = y. Using (ii) and the forward substitution method we get y, and from (i), by the backward substitution method, we get x.

Example 2: Solve the following set of equations by using Crout's method.

Solution: Let A = L U with u_ii = 1 by Crout's method. Equating with the corresponding coefficients on both sides gives L and U, and the given system can be written as two systems of equations. From the first system, by the forward substitution method, we get y; from the second system, by the backward substitution method, we get x.
Exercise
Solve the following systems of linear equations using the LU decomposition method.

3.1.6 Cholesky Method

This method is also known as the square root method. If the coefficient matrix A is symmetric and positive definite, then it can be decomposed as

A = L L^T

where L is a lower triangular matrix. Alternatively, A may be decomposed as A = U^T U, where U is an upper triangular matrix. Cholesky's decomposition method is faster than the LU decomposition, and there is no need for pivoting. If the decomposition fails, the matrix is not positive definite. The linear system A x = b becomes

L L^T x = b

which may be written as the pair of triangular systems

(I) L y = b
(II) L^T x = y

The solutions y_i from (I) are obtained by the forward substitution method and the solutions x_i from (II) are determined by the backward substitution method.
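The factorization itself can be sketched in Python (an illustrative sketch with my own function name; math.sqrt raises an error when A is not positive definite, which matches the remark above about the decomposition failing):

```python
import math

def cholesky(A):
    # A = L L^T for a symmetric positive definite A; L is lower triangular.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entry: square root, hence the name "square root method".
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```

For A = [[4, 2], [2, 3]] this gives L = [[2, 0], [1, sqrt(2)]]; the system is then finished with the forward and back substitutions of (I) and (II).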
Example 1: Solve the system of equations A x = b using the Cholesky method.

Solution: Since A is symmetric and positive definite, we can use the Cholesky method. We write A = L L^T; multiplying out the right-hand side and comparing the corresponding entries on both sides gives the elements of L. We then write the system of equations as L y = b and L^T x = y. From L y = b we obtain y by forward substitution, and from L^T x = y we obtain x by backward substitution.

Exercise
1. Solve the system of equations by the Cholesky method.
2. Find the inverse of the matrix by using the Cholesky method.

3.2. Indirect (Iterative) Methods

Convergence condition for iterative methods
An iterative method for the linear system A x = b converges if the coefficient square matrix A is diagonally dominant. The square matrix A is diagonally dominant if the magnitude of each diagonal element is at least the sum of the magnitudes of the other elements in its row,

|a_ii| >= Σ_(j ≠ i) |a_ij|, i = 1, 2, ..., n,

with strict inequality for at least one i.

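Before applying either iteration below, the dominance condition can be checked programmatically. This sketch (my own function name) tests the strict form of the condition, which is sufficient for convergence:

```python
def is_strictly_diagonally_dominant(A):
    # |a_ii| > sum of |a_ij| over j != i, for every row i.
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))
```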

3.2.1 Gauss Jacobi Iteration Method

Let us consider the system of equations

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

Let the coefficient matrix be diagonally dominant, and solve for x1, x2 and x3 from equations one, two and three respectively. We get

x1 = (1/a11)(b1 - a12 x2 - a13 x3)
x2 = (1/a22)(b2 - a21 x1 - a23 x3)
x3 = (1/a33)(b3 - a31 x1 - a32 x2)

Let x1^(0), x2^(0), x3^(0) be the initial approximations of the unknowns x1, x2 and x3. Then, the first approximations are given by

x1^(1) = (1/a11)(b1 - a12 x2^(0) - a13 x3^(0))
x2^(1) = (1/a22)(b2 - a21 x1^(0) - a23 x3^(0))
x3^(1) = (1/a33)(b3 - a31 x1^(0) - a32 x2^(0))

Similarly, the second approximations are given by

x1^(2) = (1/a11)(b1 - a12 x2^(1) - a13 x3^(1))
x2^(2) = (1/a22)(b2 - a21 x1^(1) - a23 x3^(1))
x3^(2) = (1/a33)(b3 - a31 x1^(1) - a32 x2^(1))

Proceeding in the same way, if x1^(n), x2^(n), x3^(n) are the nth iterates, then

x1^(n+1) = (1/a11)(b1 - a12 x2^(n) - a13 x3^(n))
x2^(n+1) = (1/a22)(b2 - a21 x1^(n) - a23 x3^(n))
x3^(n+1) = (1/a33)(b3 - a31 x1^(n) - a32 x2^(n))

The process is continued until convergence is secured.

Note: In the absence of any better estimates, the initial approximations are taken as the zero vector; that is, x1^(0) = x2^(0) = x3^(0) = 0.
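The iteration above can be sketched in Python (an illustrative sketch with my own function name; every new component is computed from the previous iterate only):

```python
def jacobi(A, b, x0=None, tol=1e-8, max_iter=200):
    # Jacobi iteration for a diagonally dominant A.
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        # All components of the new iterate use only the old iterate x.
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x
```

For the diagonally dominant sample system 4x1 + x2 = 5, 2x1 + 5x2 = 7, the iterates converge to (1, 1).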


Example 1: Solve the following system of equations using Jacobi's method. Start with the solution x1 = x2 = x3 = 0.

Solution: Since the coefficient matrix is diagonally dominant, the method converges. The system of equations is written in the iteration form above and, taking x1^(0) = x2^(0) = x3^(0) = 0 as the initial approximation, the first, second, third, fourth and fifth approximations are computed in turn. After a few further approximations the values repeat, and the approximate solution is obtained correct to 3 decimal places.
Exercise: Use the Jacobi iterative scheme to obtain the solutions of the system of equations
correct to three decimal places.

3.2.2 Gauss-Seidel Method

Let us consider the system of equations

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

Let the coefficient matrix be diagonally dominant, and solve for x1, x2 and x3 from equations one, two and three respectively. We get

x1 = (1/a11)(b1 - a12 x2 - a13 x3)
x2 = (1/a22)(b2 - a21 x1 - a23 x3)
x3 = (1/a33)(b3 - a31 x1 - a32 x2)

Let x1^(0), x2^(0), x3^(0) be the initial approximations of the unknowns x1, x2 and x3. Then, the first approximations are given by

x1^(1) = (1/a11)(b1 - a12 x2^(0) - a13 x3^(0))
x2^(1) = (1/a22)(b2 - a21 x1^(1) - a23 x3^(0))
x3^(1) = (1/a33)(b3 - a31 x1^(1) - a32 x2^(1))

Similarly, the second approximations are given by

x1^(2) = (1/a11)(b1 - a12 x2^(1) - a13 x3^(1))
x2^(2) = (1/a22)(b2 - a21 x1^(2) - a23 x3^(1))
x3^(2) = (1/a33)(b3 - a31 x1^(2) - a32 x2^(2))

Proceeding in the same way, if x1^(n), x2^(n), x3^(n) are the nth iterates, then

x1^(n+1) = (1/a11)(b1 - a12 x2^(n) - a13 x3^(n))
x2^(n+1) = (1/a22)(b2 - a21 x1^(n+1) - a23 x3^(n))
x3^(n+1) = (1/a33)(b3 - a31 x1^(n+1) - a32 x2^(n+1))

The above iteration process is continued until the values of x1, x2 and x3 are obtained to a pre-assigned or desired degree of accuracy.

Example 1: Solve the following equations by the Gauss-Seidel method.

Solution: In the above equations, the magnitude of each diagonal coefficient exceeds the sum of the magnitudes of the other coefficients in its row, so the conditions of convergence are satisfied and we can apply the Gauss-Seidel method. We rewrite the given equations in the iteration form above, let the initial approximations be x2^(0) = 0 and x3^(0) = 0, and carry out successive iterations, each time using the most recently computed values. After six iterations the values repeat to the required accuracy; these values are the roots.
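The only change from the Jacobi sketch is that each newly computed component is used immediately within the same sweep (again an illustrative sketch, function name mine):

```python
def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=200):
    # Gauss-Seidel iteration: updates are used as soon as they are available.
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        biggest = 0.0
        for i in range(n):
            # x[0..i-1] already hold the NEW values of this sweep.
            new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            biggest = max(biggest, abs(new - x[i]))
            x[i] = new
        if biggest < tol:
            break
    return x
```

On the same sample system 4x1 + x2 = 5, 2x1 + 5x2 = 7, it reaches (1, 1) in fewer sweeps than the Jacobi iteration.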


Exercise:
1. Using the Gauss-Seidel method, solve the system of equations correct to three decimal places.
2. Solve by the Gauss-Seidel iteration method the system of equations correct to four decimal places.

3.3 Eigenvalue Problem

Definition 1: Let A = (a_ij) be any square matrix of order n. If there exists a non-zero column vector x and a scalar λ such that A x = λ x, then λ is called an eigenvalue of the matrix A and x is called the eigenvector of A corresponding to the eigenvalue λ.

Let λ be an eigenvalue of A and x be the corresponding eigenvector. Then, by definition, A x = λ I x, where I is the unit matrix of order n. It follows that (A - λI) x = 0, which is a system of n homogeneous linear equations in the n unknowns x1, x2, ..., xn. Since x is to be a non-zero vector, to find a non-zero solution we require det(A - λI) = 0. If we evaluate this determinant we get a polynomial of degree n as a function of λ. This polynomial is called the characteristic equation (or polynomial), and the roots of the polynomial are called the eigenvalues.
Example 1: Find the eigenvalues and eigenvectors of a given 3 x 3 matrix A.

Solution: Let λ be an eigenvalue of A. It follows that (A - λI)x = 0, and setting det(A - λI) = 0 gives the characteristic equation, whose roots are the eigenvalues of A.

Case I. For the first eigenvalue, we have three unknowns with two independent equations. Solving by the rule of cross-multiplication gives the eigenvector corresponding to this eigenvalue.

Cases II and III. For the remaining two eigenvalues, the eigenvectors are obtained in the same way by the rule of cross-multiplication.

Exercise
Find the eigenvalues and eigenvectors of the following matrices.

3.3.1 Power Method

Definition: Let λ1, λ2, ..., λn be the eigenvalues of an n x n matrix A. λ1 is called the dominant eigenvalue of A if |λ1| > |λi| for i = 2, 3, ..., n. The eigenvector corresponding to λ1 is called the dominant eigenvector of A.

The power method is used to determine the largest eigenvalue in magnitude and the corresponding eigenvector of the system A x = λ x. Let x^(0) be a non-zero initial vector, and compute

y^(k+1) = A x^(k),  x^(k+1) = y^(k+1) / m_(k+1),  k = 0, 1, 2, ...

where m_(k+1) is the largest element in magnitude of y^(k+1). It may be noted that as k → ∞, m_(k+1) gives the dominant eigenvalue in magnitude, which is λ1, and x^(k+1) is the required eigenvector.

Note: If the initial vector is not given, then we take the unit vector as the initial vector.
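The scale-and-multiply loop can be sketched in Python (an illustrative sketch with my own function name; the iteration stops when the scale factor settles):

```python
def power_method(A, x0, tol=1e-10, max_iter=500):
    # Returns (dominant eigenvalue estimate, scaled eigenvector).
    n = len(x0)
    x = list(x0)
    m = 0.0
    for _ in range(max_iter):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m_new = max(y, key=abs)          # largest element in magnitude of y
        x = [v / m_new for v in y]       # scale so the largest entry is 1
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new, x
```

For A = [[4, 1], [2, 3]] (eigenvalues 5 and 2) and x^(0) = (1, 0), the scale factors converge to 5 and the scaled vector to (1, 1).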

Example 1: Find the dominant eigenvalue and eigenvector of a given matrix A.

Solution: Using the power method and taking an initial vector x^(0), we repeatedly form y^(k+1) = A x^(k) and scale by the largest element in magnitude, m_(k+1). When successive values of m_k agree to the required accuracy, m_k is the largest eigenvalue in magnitude and the final scaled vector is the corresponding eigenvector.

Example 2: Perform seven iterations of the power method to find the dominant eigenvalue and corresponding eigenvector of a given matrix A, using the given x^(0) as the initial approximation.

Solution: One iteration of the power method produces y^(1) = A x^(0), which after scaling gives x^(1) and the first estimate m_1. A second iteration yields x^(2) and m_2. Continuing this process for seven iterations, we obtain a sequence of approximations; the dominant eigenvalue is the limiting scale factor and the final scaled vector is the corresponding eigenvector.

Exercise: Determine the numerically largest eigenvalue and the corresponding eigenvector of the following matrices, using the power method correct to two decimal places.

3.3.2 Inverse Power Method

This method is used to find the smallest eigenvalue in magnitude and its corresponding eigenvector of the system A x = λ x, where λ is an eigenvalue of the matrix A and x is a non-zero vector. Multiplying both sides by A^(-1), we get

A^(-1) x = (1/λ) x.

This shows that 1/λ is an eigenvalue of A^(-1) and x is its corresponding eigenvector. Applying the power method to A^(-1), we get the largest eigenvalue of A^(-1) in magnitude, whose reciprocal is the smallest eigenvalue of A in magnitude.
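One way to sketch this in Python (an illustrative sketch, function names mine): rather than forming A^(-1) explicitly, each power-method step computes y = A^(-1) x by solving the linear system A y = x.

```python
def solve(A, b):
    # Small Gaussian-elimination solver used in place of multiplying by A^(-1).
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def inverse_power_method(A, x0, tol=1e-10, max_iter=500):
    # Power method applied to A^(-1); returns the smallest eigenvalue of A
    # in magnitude and the corresponding scaled eigenvector.
    x, m = list(x0), 0.0
    for _ in range(max_iter):
        y = solve(A, x)                  # y = A^(-1) x
        m_new = max(y, key=abs)          # dominant eigenvalue estimate of A^(-1)
        x = [v / m_new for v in y]
        if abs(m_new - m) < tol:
            m = m_new
            break
        m = m_new
    return 1.0 / m, x                    # 1/m is the smallest eigenvalue of A
```

For A = [[4, 1], [2, 3]] (eigenvalues 5 and 2), the scale factors converge to 1/2, so the smallest eigenvalue of A is 2.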
Example 1: Find the smallest eigenvalue in magnitude of a given matrix A, correct to two decimal places, by using the inverse power method.

Solution: We first compute the inverse of A and take an initial vector x^(0). Applying the power method to A^(-1) gives the largest eigenvalue of A^(-1) in magnitude, correct to two decimal places; its reciprocal is the smallest eigenvalue of A, and the final scaled vector is the corresponding eigenvector.


Exercise
Find the smallest eigenvalue, correct to two decimal places, of the given matrix and obtain the corresponding eigenvector.

3.4 System of Non-Linear Equations

We now extend the methods derived for the single equation f(x) = 0 to a system of non-linear equations. We first consider a system of two non-linear equations in two unknowns:

f(x, y) = 0
g(x, y) = 0

3.4.1 General Iteration Method

Consider the solution of the following system of equations:

f(x, y) = 0
g(x, y) = 0

We may write this system in an equivalent form as

x = F(x, y)
y = G(x, y)

Let (α, β) be a solution of this system. Therefore, (α, β) satisfies the equations

α = F(α, β),  β = G(α, β).

Let (x0, y0) be a suitable approximation to (α, β). Then, we write a general iteration method for the solution as

x_(k+1) = F(x_k, y_k)
y_(k+1) = G(x_k, y_k),  k = 0, 1, 2, ...

If the method converges, then x_k → α and y_k → β.

Convergence condition for the General Iteration Method

A sufficient condition for convergence is that, for each k, ||J|| < 1, where ||·|| is a norm and

J = [ ∂F/∂x  ∂F/∂y ]
    [ ∂G/∂x  ∂G/∂y ]

is called the Jacobian matrix. If the norm of the Jacobian matrix is obtained by using the maximum absolute row sum norm, we get the conditions

|F_x(x, y)| + |F_y(x, y)| < 1
|G_x(x, y)| + |G_y(x, y)| < 1.

The necessary and sufficient condition for convergence is that, for each k, ρ(J) < 1, where ρ(J) is the spectral radius of the matrix J.

Note: The spectral radius is the largest eigenvalue in magnitude.

The method can be generalized to a system of n equations in n unknowns.
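The iteration can be sketched in Python (an illustrative sketch; the function name and the stand-in test system are mine, not the example from these notes). The system 10x = x^2 + y^2 + 8, 10y = x y^2 + x + 8 has the root (1, 1), and both rearrangements F = (x^2 + y^2 + 8)/10, G = (x y^2 + x + 8)/10 are contractions near it:

```python
def fixed_point_system(F, G, x0, y0, tol=1e-10, max_iter=200):
    # x_{k+1} = F(x_k, y_k), y_{k+1} = G(x_k, y_k)
    x, y = x0, y0
    for _ in range(max_iter):
        x_new, y_new = F(x, y), G(x, y)
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

# Stand-in example: root (1, 1); row sums of |J| are 0.4 < 1 near the root.
F = lambda x, y: (x * x + y * y + 8.0) / 10.0
G = lambda x, y: (x * y * y + x + 8.0) / 10.0
```

Starting from (0, 0), the iterates 0.8, 0.928, ... increase monotonically toward the root (1, 1).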
Example: A given system of equations f(x, y) = 0, g(x, y) = 0 has the solution (α, β). Determine iteration functions F(x, y) and G(x, y) so that the sequence of iterates obtained from x_(k+1) = F(x_k, y_k), y_(k+1) = G(x_k, y_k) converges to the root. Perform five iterations.

Solution: We write the given system of equations in an equivalent form as

x = F(x, y) = x + a f(x, y)
y = G(x, y) = y + b g(x, y)

where a and b are arbitrary parameters to be determined. If we use the maximum absolute row sum norm, we require that

|F_x| + |F_y| < 1 and |G_x| + |G_y| < 1.

Evaluating the partial derivatives at the exact solution (α, β), the condition of convergence restricts the admissible values of a and b; any values of a and b which satisfy the condition can be used, and here both turn out to be negative. Taking suitable values of a and b, we obtain the iteration method, and starting with the given (x0, y0) we compute five iterates, which approach the root.

3.4.2 Newton Raphson Method

Consider the following system of two non-linear equations:

f(x, y) = 0
g(x, y) = 0

Let (x0, y0) be a suitable approximation to the root (α, β). Let h and k be the increments in x0 and y0 respectively, so that (x0 + h, y0 + k) is the exact solution. That is,

f(x0 + h, y0 + k) = 0
g(x0 + h, y0 + k) = 0

Expanding in Taylor series about (x0, y0), we get

f(x0, y0) + h f_x(x0, y0) + k f_y(x0, y0) + ... = 0
g(x0, y0) + h g_x(x0, y0) + k g_y(x0, y0) + ... = 0

Neglecting the second and higher powers of h and k, we obtain

h f_x(x0, y0) + k f_y(x0, y0) = -f(x0, y0)
h g_x(x0, y0) + k g_y(x0, y0) = -g(x0, y0)

Writing this in matrix form, we get

[ f_x  f_y ] [ h ]   [ -f ]
[ g_x  g_y ] [ k ] = [ -g ]

with all functions evaluated at (x0, y0). The solution of this system is

[ h ]            [ f ]                 [ f_x  f_y ]
[ k ] = -J^(-1)  [ g ],   where  J  =  [ g_x  g_y ]

is the Jacobian matrix. Therefore, we can write the next approximation as

[ x1 ]   [ x0 ]           [ f(x0, y0) ]
[ y1 ] = [ y0 ]  - J^(-1) [ g(x0, y0) ]

This method can be easily generalized for solving a system of n equations in n unknowns

f_i(x1, x2, ..., xn) = 0,  i = 1, 2, ..., n.

If X^(0) = [x1^(0), x2^(0), ..., xn^(0)]^T is an initial approximation to the solution vector, then we can write the method as

X^(k+1) = X^(k) - J(X^(k))^(-1) F(X^(k)),  k = 0, 1, 2, ...

where F = [f1, f2, ..., fn]^T and J = [∂f_i/∂x_j] is the Jacobian matrix.
Example 1: Solve the system by Newton Raphson, correct to two decimal places:

x1^2 + x2^2 + x3^2 - 1 = 0
x1^2 + x3^2 - 1/4 = 0
x1^2 + x2^2 - 4 x3 = 0

Take the initial approximation X^(0) = (1, 1, 1)^T.

Solution: We have

f1(x1, x2, x3) = x1^2 + x2^2 + x3^2 - 1
f2(x1, x2, x3) = x1^2 + x3^2 - 1/4
f3(x1, x2, x3) = x1^2 + x2^2 - 4 x3

so that the Jacobian is

         [ 2x1  2x2  2x3 ]
J(X)  =  [ 2x1   0   2x3 ]
         [ 2x1  2x2   -4 ]

The Newton Raphson formula is X^(k+1) = X^(k) - J(X^(k))^(-1) F(X^(k)). At X^(0) = (1, 1, 1)^T,

F(X^(0)) = (2, 1.75, -2)^T,   J0 = J(X^(0)) = [ 2  2   2 ]
                                              [ 2  0   2 ]
                                              [ 2  2  -4 ]

X^(1) = X^(0) - J0^(-1) F(X^(0)) = (0.79167, 0.87500, 0.33333)^T,
F(X^(1)) = (0.50348, 0.48785, 0.05905)^T,

J1 = J(X^(1)) = [ 1.58334  1.75000  0.66666 ]
                [ 1.58334  0        0.66666 ]
                [ 1.58334  1.75000  -4      ]

X^(2) = X^(1) - J1^(-1) F(X^(1)) = (0.52365, 0.86607, 0.23810)^T.

Continuing, X^(3) = (0.44733, 0.86603, 0.23607)^T and X^(4) = (0.44081, 0.86603, 0.23607)^T, so the solution correct to two decimal places is x1 = 0.44, x2 = 0.87, x3 = 0.24.
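The vector Newton iteration can be sketched in Python (an illustrative sketch; the function names are mine, and the correction d is obtained by solving J d = F rather than inverting J). The F and J below encode the system x1^2 + x2^2 + x3^2 = 1, x1^2 + x3^2 = 1/4, x1^2 + x2^2 = 4 x3 of Example 1:

```python
def solve(A, b):
    # Small Gaussian-elimination solver for the Newton correction J d = F.
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    # X^(k+1) = X^(k) - d, where d solves J(X^(k)) d = F(X^(k)).
    x = list(x0)
    for _ in range(max_iter):
        d = solve(J(x), F(x))
        x = [xi - di for xi, di in zip(x, d)]
        if max(abs(di) for di in d) < tol:
            break
    return x

def F(x):
    x1, x2, x3 = x
    return [x1 * x1 + x2 * x2 + x3 * x3 - 1.0,
            x1 * x1 + x3 * x3 - 0.25,
            x1 * x1 + x2 * x2 - 4.0 * x3]

def J(x):
    x1, x2, x3 = x
    return [[2.0 * x1, 2.0 * x2, 2.0 * x3],
            [2.0 * x1, 0.0,      2.0 * x3],
            [2.0 * x1, 2.0 * x2, -4.0]]
```

Starting from (1, 1, 1), the iterates reproduce the sequence of the worked example and converge quadratically to approximately (0.4408, 0.8660, 0.2361).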

Exercise
Solve the following systems by the Newton Raphson method, correct to two decimal places, taking the given initial approximations.
