Unit I (Matrices)
1. The characteristic equation of a matrix A is
a) λ² − S₁λ + S₂ = 0 if A is a 2 × 2 matrix,
b) λ³ − S₁λ² + S₂λ − S₃ = 0 if A is a 3 × 3 matrix,
where S₁ = sum of the main diagonal elements, S₂ = sum of the minors of the main
diagonal elements and S₃ = |A|.
3. Properties of eigenvalues:
Let A be any square matrix. Then
a) Sum of the eigenvalues = sum of the main diagonal elements.
b) Product of the eigenvalues = |A|.
5. Matrix of the Q.F:
    |    coeff(x₁²)     ½ coeff(x₁x₂)   ½ coeff(x₁x₃) |
    |  ½ coeff(x₂x₁)      coeff(x₂²)    ½ coeff(x₂x₃) |
    |  ½ coeff(x₃x₁)    ½ coeff(x₃x₂)     coeff(x₃²)  |
6. Index = p = Number of positive eigenvalues
Rank = r = Number of non-zero rows
Signature = s = 2p-r
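As a quick sanity check of the eigenvalue properties and the index/rank/signature definitions above, the eigenvalues of a hypothetical 2 × 2 symmetric matrix (the matrix is only an illustration, not from the text) can be computed directly from the characteristic equation λ² − S₁λ + S₂ = 0:

```python
import math

# Hypothetical 2x2 matrix A = [[4, 1], [1, 2]] (illustration only)
a11, a12, a21, a22 = 4.0, 1.0, 1.0, 2.0

S1 = a11 + a22              # sum of main diagonal elements
S2 = a11 * a22 - a12 * a21  # |A| (determinant)

# Characteristic equation: lambda^2 - S1*lambda + S2 = 0
disc = math.sqrt(S1 ** 2 - 4 * S2)
lam1, lam2 = (S1 + disc) / 2, (S1 - disc) / 2

# Property (a): sum of eigenvalues = trace; (b): product of eigenvalues = |A|
assert abs((lam1 + lam2) - S1) < 1e-12
assert abs(lam1 * lam2 - S2) < 1e-12

# Index p, rank r, signature s = 2p - r for the quadratic form of A
eigs = [lam1, lam2]
p = sum(1 for l in eigs if l > 0)
r = sum(1 for l in eigs if abs(l) > 1e-12)
s = 2 * p - r
```

Both eigenvalues here are positive, so p = r = 2 and s = 2, i.e. the form is positive definite.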
7. Diagonalisation of a matrix by orthogonal transformation (or) orthogonal
reduction:
Working Rules:
Let A be any square matrix of order n.
Step 1: Find the characteristic equation of A.
Step 2: Solve it to get the eigenvalues of A.
Step 3: Find the eigenvectors corresponding to each eigenvalue.
Step 4: Form a normalized modal matrix N, such that the eigenvectors are orthogonal.
Step 5: Find Nᵀ.
Step 6: Calculate D = NᵀAN.
Note:
We can apply the orthogonal transformation to symmetric matrices only.
If any two eigenvalues are equal, then we must use the a, b, c method for the third eigenvector.
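The working rules above can be sketched numerically for a small symmetric matrix. This is a minimal illustration with a hypothetical 2 × 2 matrix (eigenvectors found from the standard component relation, not the a, b, c method):

```python
import math

# Symmetric example matrix (illustration): A = [[2, 1], [1, 2]]
A = [[2.0, 1.0], [1.0, 2.0]]

# Steps 1-2: eigenvalues from lambda^2 - S1*lambda + S2 = 0
S1 = A[0][0] + A[1][1]
S2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
d = math.sqrt(S1 ** 2 - 4 * S2)
lam = [(S1 + d) / 2, (S1 - d) / 2]

# Step 3-4: for eigenvalue l, (b, l - a) is an eigenvector of [[a, b], [b, a']]
# (valid here since the off-diagonal entry is non-zero); normalize and build N
cols = []
for l in lam:
    v = (A[0][1], l - A[0][0])
    n = math.hypot(*v)
    cols.append((v[0] / n, v[1] / n))
N = [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Steps 5-6: D = N^T A N should be the diagonal matrix of eigenvalues
NT = [[N[j][i] for j in range(2)] for i in range(2)]
D = matmul(matmul(NT, A), N)
```

For this matrix the eigenvalues are 3 and 1, and D comes out as diag(3, 1) with the off-diagonal entries numerically zero.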
4. Equation of a circle:
The curve of intersection of a sphere and a plane is a circle. So a circle can
be represented by two equations, one being the equation of a sphere and the
other that of a plane. Thus the equations x² + y² + z² + 2ux + 2vy + 2wz + d = 0,
lx + my + nz = p taken together represent a circle.
5. Tangent plane:
The equation of the tangent plane to the sphere at the point (x₁, y₁, z₁) is
xx₁ + yy₁ + zz₁ + u(x + x₁) + v(y + y₁) + w(z + z₁) + d = 0.
6. Condition for the plane lx + my + nz = p to be a tangent plane to the sphere:
(lu + mv + nw + p)² = (l² + m² + n²)(u² + v² + w² − d).
y2
3. Radius of curvature if y1 ,
1 x 2
1
2
, where x1
dx
x2 dy
f
3
2
f 2 2
x y
4. Radius of curvature in implicit form
f xx f 2 f xy f x f y f yy f x2
y
2
xy xy
6. Centre of curvature is x , y .
7. Circle of curvature is x x y y 2 .
2 2
where x x
y1 1 y12 , y y
1 y 2
1
y2 y2
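The curvature formulas above can be checked by hand on a simple curve. A worked illustration (the curve y = x² is my example, not from the text): at the origin y₁ = 2x = 0 and y₂ = 2, so ρ = 1/2 and the centre of curvature is (0, 1/2).

```python
# Radius and centre of curvature of y = x^2 at the point (0, 0):
# y1 = dy/dx = 2x = 0 and y2 = d2y/dx2 = 2 at that point.
x, y = 0.0, 0.0
y1, y2 = 0.0, 2.0

rho = (1 + y1 ** 2) ** 1.5 / y2        # radius of curvature = 0.5
xbar = x - y1 * (1 + y1 ** 2) / y2     # centre of curvature x-coordinate
ybar = y + (1 + y1 ** 2) / y2          # centre of curvature y-coordinate
```

This matches the geometric picture: the osculating circle at the vertex of the parabola has radius 1/2 and sits directly above the vertex.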
8. Evolute: The locus of the centre of curvature of the given curve is called the
evolute of the curve. It is obtained from
x̄ = x − y₁(1 + y₁²)/y₂, ȳ = y + (1 + y₁²)/y₂.
9. Envelope: if the given equation is quadratic in the parameter, say At² + Bt + C = 0,
then the envelope is given by B² − 4AC = 0.
10. Evolute as the envelope of normals:
Curve                        Equation of normal
y² = 4ax                     y = −tx + 2at + at³
x² = 4ay                     x = −ty + 2at + at³
x²/a² + y²/b² = 1            (ax/cosθ) − (by/sinθ) = a² − b²
x²/a² − y²/b² = 1            (ax/secθ) + (by/tanθ) = a² + b²
xy = c²                      y = xt² − ct³ + c/t
1. Euler's theorem: if f is a homogeneous function of degree n, then
(i) x ∂f/∂x + y ∂f/∂y = n f   (first order)
(ii) x² ∂²f/∂x² + 2xy ∂²f/∂x∂y + y² ∂²f/∂y² = n(n − 1) f   (second order)
2. If u = f(x, y, z), x = g₁(t), y = g₂(t), z = g₃(t), then
du/dt = (∂u/∂x)(dx/dt) + (∂u/∂y)(dy/dt) + (∂u/∂z)(dz/dt).
3. If u = f(x, y), x = g₁(r, θ), y = g₂(r, θ), then
(i) ∂u/∂r = (∂u/∂x)(∂x/∂r) + (∂u/∂y)(∂y/∂r)
(ii) ∂u/∂θ = (∂u/∂x)(∂x/∂θ) + (∂u/∂y)(∂y/∂θ)
then find the values of x, y, z. Next we can discuss the maxima and minima.
6. Jacobian:
Jacobian in two dimensions: J = ∂(u, v)/∂(x, y) = determinant of
| ∂u/∂x   ∂u/∂y |
| ∂v/∂x   ∂v/∂y |
7. The functions u and v are called functionally dependent if ∂(u, v)/∂(x, y) = 0.
8. ∂(u, v)/∂(x, y) × ∂(x, y)/∂(u, v) = 1.
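The Jacobian definition and the reciprocal property in item 8 can be verified on the polar coordinate map (my choice of example): for x = r cosθ, y = r sinθ the Jacobian ∂(x, y)/∂(r, θ) equals r.

```python
import math

# Jacobian of the polar map x = r cos(t), y = r sin(t) at a sample point
r, t = 2.0, 0.7

# Partial derivatives of (x, y) with respect to (r, t)
dx_dr, dx_dt = math.cos(t), -r * math.sin(t)
dy_dr, dy_dt = math.sin(t), r * math.cos(t)

# 2x2 determinant: cos^2 + sin^2 = 1 times r
J = dx_dr * dy_dt - dx_dt * dy_dr   # equals r

# Property 8: J(x,y;r,t) * J(r,t;x,y) = 1, so the inverse Jacobian is 1/r
assert abs(J - r) < 1e-12
assert abs(J * (1.0 / J) - 1.0) < 1e-12
```

The determinant collapses to r cos²t + r sin²t = r, which is why dx dy = r dr dθ in the change to polar coordinates further below.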
9. Taylor's expansion:
f(x, y) = f(a, b) + (1/1!)[h fx(a, b) + k fy(a, b)]
        + (1/2!)[h² fxx(a, b) + 2hk fxy(a, b) + k² fyy(a, b)]
        + (1/3!)[h³ fxxx(a, b) + 3h²k fxxy(a, b) + 3hk² fxyy(a, b) + k³ fyyy(a, b)] + ...
where h = x − a and k = y − b.
To change to polar coordinates, put x = r cosθ, y = r sinθ; then dx dy = r dr dθ.
4. Volume V = ∭ dx dy dz (or) ∭ dz dy dx.
GENERAL:
1. ∫ dx/√(a² − x²) = sin⁻¹(x/a)   (or)   ∫ dx/√(1 − x²) = sin⁻¹ x
2. ∫ dx/√(a² + x²) = log[x + √(a² + x²)]   (or)   ∫ dx/√(1 + x²) = log[x + √(1 + x²)]
3. ∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a)   (or)   ∫ dx/(1 + x²) = tan⁻¹ x
4. ∫ √(a² − x²) dx = (x/2)√(a² − x²) + (a²/2) sin⁻¹(x/a)
5. ∫₀^(π/2) sinⁿx dx = ∫₀^(π/2) cosⁿx dx = ((n − 1)/n) · ((n − 3)/(n − 2)) ··· (2/3) · 1,
if n is odd and n ≥ 3
6. ∫₀^(π/2) sinⁿx dx = ∫₀^(π/2) cosⁿx dx = ((n − 1)/n) · ((n − 3)/(n − 2)) ··· (1/2) · (π/2),
if n is even
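The two reduction formulas above can be folded into one small routine and cross-checked against a direct numerical integral (a minimal sketch; the midpoint integrator is only for verification):

```python
import math

def wallis(n):
    """Integral of sin^n x over [0, pi/2] via the reduction rule above."""
    val = 1.0 if n % 2 == 1 else math.pi / 2   # trailing factor: 1 (odd) or pi/2 (even)
    k = n
    while k >= 2:
        val *= (k - 1) / k
        k -= 2
    return val

def numeric(n, steps=200000):
    """Midpoint-rule approximation of the same integral, for comparison."""
    h = (math.pi / 2) / steps
    return sum(math.sin((i + 0.5) * h) ** n for i in range(steps)) * h

# n = 5 (odd): (4/5)(2/3)(1) = 8/15; n = 6 (even): (5/6)(3/4)(1/2)(pi/2)
assert abs(wallis(5) - 8 / 15) < 1e-12
assert abs(wallis(6) - numeric(6)) < 1e-6
```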
Complementary functions:
Particular Integral:
Type-I
If f(x) = 0, then P.I = 0.
Type-II
If f(x) = e^(ax), then P.I = [1/φ(D)] e^(ax).
Type-III
Use the following formulas: sin²x = (1 − cos 2x)/2, cos²x = (1 + cos 2x)/2,
sin³x = (3/4) sin x − (1/4) sin 3x, cos³x = (3/4) cos x + (1/4) cos 3x,
and separate P.I₁ & P.I₂.
Case iii: If f(x) = sin A cos B (or) cos A sin B (or) cos A cos B (or) sin A sin B,
use the following formulas:
(i) sin A cos B = (1/2)[sin(A + B) + sin(A − B)]
(ii) cos A sin B = (1/2)[sin(A + B) − sin(A − B)]
(iii) cos A cos B = (1/2)[cos(A + B) + cos(A − B)]
(iv) sin A sin B = (1/2)[cos(A − B) − cos(A + B)]
Type-IV
If f(x) = x^m, then
P.I = [1/φ(D)] x^m = [1/(1 ± g(D))] x^m = [1 ± g(D)]⁻¹ x^m
i) (1 − x)⁻¹ = 1 + x + x² + x³ + ...
ii) (1 + x)⁻¹ = 1 − x + x² − x³ + ...
iii) (1 − x)⁻² = 1 + 2x + 3x² + 4x³ + ...
iv) (1 + x)⁻² = 1 − 2x + 3x² − 4x³ + ...
v) (1 − x)⁻³ = 1 + 3x + 6x² + 10x³ + ...
vi) (1 + x)⁻³ = 1 − 3x + 6x² − 10x³ + ...
Type-V
If f(x) = e^(ax) V, where V = sin ax, cos ax or x^m, then
P.I = [1/φ(D)] e^(ax) V = e^(ax) [1/φ(D + a)] V
Type-VI
If f(x) = xⁿ V, where V = sin ax or cos ax.
Note: [1/(D − a)] f(x) = e^(ax) ∫ e^(−ax) f(x) dx
The equation is of the form x² d²y/dx² + x dy/dx + y = f(x),
which implies (x²D² + xD + 1) y = f(x).
To convert the variable coefficients into constant coefficients,
put z = log x, which implies x = e^z. Then
xD = D′, x²D² = D′(D′ − 1), x³D³ = D′(D′ − 1)(D′ − 2),
where D = d/dx and D′ = d/dz.
Similarly
(ax + b)D = aD′, (ax + b)²D² = a²D′(D′ − 1), (ax + b)³D³ = a³D′(D′ − 1)(D′ − 2),
where D = d/dx and D′ = d/dz.
The equation is of the form a d²y/dx² + b dy/dx + cy = f(x),
C.F = Ay₁ + By₂, and (by variation of parameters)
P.I = −y₁ ∫ (y₂ f / W) dx + y₂ ∫ (y₁ f / W) dx, where W = y₁y₂′ − y₂y₁′.
2. Gradient of φ: ∇φ = i ∂φ/∂x + j ∂φ/∂y + k ∂φ/∂z
3. Divergence of F: ∇ · F
4. Curl of F: ∇ × F = determinant of
| i       j       k    |
| ∂/∂x    ∂/∂y    ∂/∂z |
| F₁      F₂      F₃   |
9. Directional derivative of φ in the direction of a: ∇φ · a / |a|
10. Angle between two normals to the surfaces: cosθ = n₁ · n₂ / (|n₁| |n₂|),
where n₁ = ∇φ₁ at (x₁, y₁, z₁) & n₂ = ∇φ₂ at (x₂, y₂, z₂)
11. Unit normal vector: n̂ = ∇φ / |∇φ|
Green's theorem: If R is the region bounded by the simple closed curve C, then
∮_C (u dx + v dy) = ∬_R (∂v/∂x − ∂u/∂y) dx dy.
Gauss divergence theorem: Let F be a vector point function; over the closed
surface S enclosing the volume V, ∬_S F · n̂ dS = ∭_V ∇ · F dV.
2. Polar form of the Cauchy–Riemann equations: ∂u/∂r = (1/r) ∂v/∂θ & ∂v/∂r = −(1/r) ∂u/∂θ
3. Condition for a harmonic function: ∂²u/∂x² + ∂²u/∂y² = 0
4. If the function is harmonic, then it should be either the real or the imaginary part of an
analytic function.
5. Milne–Thomson method (to find the analytic function f(z)):
i) If u is given, f(z) = ∫ [uₓ(z, 0) − i u_y(z, 0)] dz
6. If u ± v is given: we have (u − v) + i(u + v) = (1 + i) f(z),
so F(z) = U + iV, where U = u − v, V = u + v & F(z) = (1 + i) f(z).
Here we can apply the Milne–Thomson method to F(z).
7. Bilinear transformation:
(w − w₁)(w₂ − w₃) / [(w₁ − w₂)(w₃ − w)] = (z − z₁)(z₂ − z₃) / [(z₁ − z₂)(z₃ − z)]
Unit IV (Complex Integration)
1. Cauchy's integral theorem:
If f(z) is analytic and f′(z) is continuous inside and on a simple closed curve C,
then ∮_C f(z) dz = 0.
2. Cauchy's integral formula:
If f(z) is analytic inside and on a simple closed curve C and a is any point inside
C, then ∮_C f(z)/(z − a) dz = 2πi f(a).
Similarly f″(a) = (2!/2πi) ∮_C f(z)/(z − a)³ dz; in general,
f⁽ⁿ⁾(a) = (n!/2πi) ∮_C f(z)/(z − a)ⁿ⁺¹ dz.
5. Critical point:
The point at which the mapping w = f(z) is not conformal, i.e. f′(z) = 0, is called
a critical point of the mapping.
6. Fixed points (or) invariant points:
The fixed points of the transformation w = (az + b)/(cz + d) are obtained by putting
w = z in the above transformation; a point z = a satisfying this is called a fixed point.
7. Res{f(z)} = lim_(z→a) (z − a) f(z)   (simple pole)
8. Res{f(z)} = [1/(m − 1)!] lim_(z→a) d^(m−1)/dz^(m−1) [(z − a)^m f(z)]   (pole of order m)
9. Res{f(z)} = lim_(z→a) P(z)/Q′(z)   (when f(z) = P(z)/Q(z))
10. Taylor's series:
f(z) = f(a) + [(z − a)/1!] f′(a) + [(z − a)²/2!] f″(a) + [(z − a)³/3!] f‴(a) + ...
     + [(z − a)ⁿ/n!] f⁽ⁿ⁾(a) + ...
Maclaurin's series:
Taking a = 0, Taylor's series reduces to
f(z) = f(0) + (z/1!) f′(0) + (z²/2!) f″(0) + (z³/3!) f‴(0) + ...
11. Laurent's series: f(z) = Σ_(n=0)^∞ aₙ(z − a)ⁿ + Σ_(n=1)^∞ bₙ/(z − a)ⁿ,
where aₙ = (1/2πi) ∮_(C₁) f(z)/(z − a)^(n+1) dz & bₙ = (1/2πi) ∮_(C₂) f(z)/(z − a)^(1−n) dz,
the integrals being taken anticlockwise.
12. Isolated singularity:
A point z = z₀ is said to be an isolated singularity of f(z) if f(z) is not analytic
at z = z₀ but is analytic in some deleted neighbourhood of z₀.
Example: f(z) = 1/z. This function is analytic everywhere except at z = 0.
∴ z = 0 is an isolated singularity.
13. Removable singularity:
A singular point z = z₀ is called a removable singularity of f(z) if lim_(z→z₀) f(z)
exists finitely.
Example: lim_(z→0) (sin z)/z = 1 (finite) ⇒ z = 0 is a removable singularity.
14. Essential singularity:
If the principal part contains an infinite number of non-zero terms, then z = z₀ is
an essential singularity.
Example: f(z) = e^(1/z) = 1 + (1/z)/1! + (1/z)²/2! + ... has z = 0 as an essential
singularity.
CONTOUR INTEGRATION:
15. Type I:
Integrals of the form ∫₀^(2π) f(cosθ, sinθ) dθ. Here we choose the contour as the
unit circle C: |z| = 1, i.e. z = e^(iθ), 0 ≤ θ ≤ 2π. On this contour,
cosθ = (z² + 1)/(2z), sinθ = (z² − 1)/(2iz) and dθ = dz/(iz).
16. Type II:
Improper integrals of the form ∫ P(x)/Q(x) dx, where P(x) and Q(x) are polynomials
in x such that the degree of Q exceeds that of P by at least two and Q(x) does not
vanish for any x. Here ∮_C f(z) dz = ∫_(−R)^R f(x) dx + ∫_Γ f(z) dz; as R → ∞,
∫_Γ f(z) dz → 0, since f(x) → 0 as x → ∞.
2. Table of Laplace transforms:
Sl.No   f(t)         L[f(t)]
1.      1            1/s
2.      tⁿ           n!/s^(n+1) = Γ(n + 1)/s^(n+1)
3.      e^(at)       1/(s − a)
4.      e^(−at)      1/(s + a)
5.      sin at       a/(s² + a²)
6.      cos at       s/(s² + a²)
7.      sinh at      a/(s² − a²)
8.      cosh at      s/(s² − a²)
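One table entry can be verified numerically from the definition L[f(t)] = ∫₀^∞ e^(−st) f(t) dt. This is a minimal sketch (the truncation length T and step count are my choices): for f(t) = e^(−at), the transform should equal 1/(s + a).

```python
import math

# Truncated Laplace integral of f over [0, T] by the midpoint rule
def laplace_numeric(f, s, T=50.0, steps=200000):
    h = T / steps
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
               for i in range(steps)) * h

a, s = 2.0, 3.0
approx = laplace_numeric(lambda t: math.exp(-a * t), s)
exact = 1.0 / (s + a)   # table entry 4: L[e^(-at)] = 1/(s + a)
assert abs(approx - exact) < 1e-6
```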
3. Linear property: L[a f(t) + b g(t)] = a L[f(t)] + b L[g(t)]
4. First shifting property:
i) L[e^(at) f(t)] = F(s − a)
ii) L[e^(−at) f(t)] = F(s + a)
6. Change of scale:
If L[f(t)] = F(s), then L[f(at)] = (1/a) F(s/a).
7. Multiplication by t (derivative of the transform):
If L[f(t)] = F(s), then L[t f(t)] = −(d/ds) F(s), L[t² f(t)] = (d²/ds²) F(s),
and in general L[tⁿ f(t)] = (−1)ⁿ (dⁿ/dsⁿ) F(s).
8. Division by t:
If L[f(t)] = F(s), then L[f(t)/t] = ∫ₛ^∞ F(s) ds.
5. L⁻¹[1/(s² + a²)] = (1/a) sin at
6. L⁻¹[s/(s² − a²)] = cosh at
7. L⁻¹[1/(s² − a²)] = (1/a) sinh at
8. L⁻¹[1/sⁿ] = t^(n−1)/(n − 1)!
Convolution theorem: L⁻¹[F(s)G(s)] = L⁻¹[F(s)] * L⁻¹[G(s)]
16. Solving second order differential equations using the Laplace transform:
i) L[y′(t)] = s L[y(t)] − y(0)
ii) L[y″(t)] = s² L[y(t)] − s y(0) − y′(0)
17. Solving integral equations: L[∫₀ᵗ y(t) dt] = (1/s) L[y(t)]
18. Inverse Laplace transform by the contour integral method:
L⁻¹[F(s)] = (1/2πi) ∫_C F(s) e^(st) ds
Periodic function (period T): L[f(t)] = [1/(1 − e^(−sT))] ∫₀ᵀ e^(−st) f(t) dt
If the function is neither odd nor even, then you should find
a₀, aₙ and bₙ by using the following formulas:
a₀ = (1/π) ∫ f(x) dx, aₙ = (1/π) ∫ f(x) cos nx dx, bₙ = (1/π) ∫ f(x) sin nx dx,
the integrals being taken over one full period.
Change of interval (0, 2l):
f(x) = a₀/2 + Σ aₙ cos(nπx/l) + Σ bₙ sin(nπx/l),
where a₀ = (1/l) ∫₀^(2l) f(x) dx, aₙ = (1/l) ∫₀^(2l) f(x) cos(nπx/l) dx,
bₙ = (1/l) ∫₀^(2l) f(x) sin(nπx/l) dx.
Half range series in (0, l):
a₀ = (2/l) ∫₀^l f(x) dx, aₙ = (2/l) ∫₀^l f(x) cos(nπx/l) dx,
bₙ = (2/l) ∫₀^l f(x) sin(nπx/l) dx.
In this interval, you have to verify whether the function is odd or even.
If it is an even function, then find only a₀ and aₙ (bₙ = 0). If it is an odd
function, then find only bₙ (a₀ = aₙ = 0).
Half range sine series:
bₙ = (2/l) ∫₀^l f(x) sin(nπx/l) dx
15) Parseval's identity for the half range sine series in the interval (0, l):
(2/l) ∫₀^l [f(x)]² dx = Σ_(n=1)^∞ bₙ²
Harmonic analysis:
f(x) = a₀/2 + Σ_(n=1)^∞ aₙ cos nx + Σ_(n=1)^∞ bₙ sin nx,
where
a₀ = (2/n) Σy, a₁ = (2/n) Σy cos x, a₂ = (2/n) Σy cos 2x, a₃ = (2/n) Σy cos 3x, ...
b₁ = (2/n) Σy sin x, b₂ = (2/n) Σy sin 2x, b₃ = (2/n) Σy sin 3x, ...
When the values of x are given as numbers, θ is calculated by θ = 2πx/T,
where T is the period and n is the number of values given. If the first and last y
values are the same, we can omit one of them.
2) Convolution theorem: F[f(x) * g(x)] = F[f(x)] · F[g(x)]
b) Fs[e^(−ax)] = √(2/π) · s/(a² + s²)
10) Properties:
a) Fs[x f(x)] = −(d/ds) Fc[f(x)]
b) Fc[x f(x)] = (d/ds) Fs[f(x)]
11) Parseval's identity:
a) ∫ |F(s)|² ds = ∫ |f(x)|² dx
The equation is of the form a ∂²z/∂x² + b ∂²z/∂x∂y + c ∂²z/∂y² = f(x, y).
The above equation can be written as
(aD² + bDD′ + cD′²) z = f(x, y) ... (1)
where D = ∂/∂x, D² = ∂²/∂x², D′ = ∂/∂y and D′² = ∂²/∂y².
The solution of the above equation is z = C.F + P.I.
Complementary Function (C.F):
Type 2: If f(x, y) = e^(ax+by),
P.I = [1/φ(D, D′)] e^(ax+by)   (replace D by a and D′ by b)
Type 3: If f(x, y) = sin(ax + by) (or) cos(ax + by),
P.I = [1/φ(D, D′)] sin(ax + by) (or) cos(ax + by).
Here replace D² by −a², D′² by −b² and DD′ by −ab. Do not
replace D and D′ alone. If the denominator equals zero, then
apply the same procedure as in Type 2.
Type 4: If f(x, y) = x^m y^n,
P.I = [1/φ(D, D′)] x^m y^n = [1/(1 ± g(D, D′))] x^m y^n = [1 ± g(D, D′)]⁻¹ x^m y^n
i) (1 − x)⁻¹ = 1 + x + x² + x³ + ...
ii) (1 + x)⁻¹ = 1 − x + x² − x³ + ...
iii) (1 − x)⁻² = 1 + 2x + 3x² + 4x³ + ...
iv) (1 + x)⁻² = 1 − 2x + 3x² − 4x³ + ...
v) (1 − x)⁻³ = 1 + 3x + 6x² + 10x³ + ...
vi) (1 + x)⁻³ = 1 − 3x + 6x² − 10x³ + ...
Type 5: If f(x, y) = e^(ax+by) V, where
V = sin(ax + by) (or) cos(ax + by) (or) x^m y^n,
P.I = [1/φ(D, D′)] e^(ax+by) V.
First operate on e^(ax+by) by replacing D by D + a and D′ by D′ + b:
P.I = e^(ax+by) [1/φ(D + a, D′ + b)] V. Now this will be either Type 3 or Type 4.
Type 6:
P.I = [1/φ(D, D′)] y sin ax
    = [1/((D − m₁D′)(D − m₂D′))] y sin ax
    = [1/(D − m₁D′)] ∫ (c − m₂x) sin ax dx, putting y = c − m₂x
      (apply Bernoulli's method).
In this type put u = x + ay; then p = dz/du and q = a dz/du.
The possible solutions of the one dimensional heat equation are
i) u(x, t) = (A e^(px) + B e^(−px)) C e^(α²p²t)
ii) u(x, t) = (A cos px + B sin px) C e^(−α²p²t)
iii) u(x, t) = (Ax + B) C
But the correct solution is ii): u(x, t) = (A cos px + B sin px) C e^(−α²p²t).
Unit V (Z - Transform)
1) Definition of the Z-transform:
Let f(n) be a sequence defined for all positive integers n; then
Z[f(n)] = Σ_(n=0)^∞ f(n) z^(−n)
2) Table of Z-transforms:
Sl.No   f(n)           Z[f(n)] = F[z]
1.      1              z/(z − 1)
2.      (−1)ⁿ          z/(z + 1)
3.      aⁿ             z/(z − a)
4.      n              z/(z − 1)²
5.      n + 1          z²/(z − 1)²
6.      1/n            log[z/(z − 1)]
7.      sin(nπ/2)      z/(z² + 1)
8.      cos(nπ/2)      z²/(z² + 1)
3) Statement of the initial value theorem:
If Z[f(n)] = F[z], then lim_(z→∞) F[z] = lim_(n→0) f(n).
5) Z[aⁿ f(n)] = {Z[f(n)]} with z replaced by z/a
6) Z[n f(n)] = −z (d/dz) Z[f(n)]
7) Inverse Z-transforms:
Sl.No   F(z)              Z⁻¹[F(z)] = f(n)
1.      z/(z − 1)         1
2.      z/(z + 1)         (−1)ⁿ
3.      z/(z − a)         aⁿ
4.      z/(z + a)         (−a)ⁿ
5.      z/(z − 1)²        n
6.      z/(z − a)²        n a^(n−1)
7.      az/(z − a)²       n aⁿ
8.      z²/(z² + 1)       cos(nπ/2)
9.      z²/(z² + a²)      aⁿ cos(nπ/2)
10.     z/(z² + 1)        sin(nπ/2)
11.     z/(z² + a²)       a^(n−1) sin(nπ/2)
9) a) Z[y(n)] = F(z)
b) Z[y(n + 1)] = z F(z) − z y(0)
Prepared by C.Ganesan, M.Sc., M.Phil., (Ph: 9841168917)
Engineering Mathematics 2013
UNIT 1
1. Regula Falsi method: x₁ = [a f(b) − b f(a)] / [f(b) − f(a)]
2. Newton's method (or) Newton–Raphson method: x_(n+1) = xₙ − f(xₙ)/f′(xₙ)
3. Fixed point iteration (or) iterative formula (or) simple iteration method:
x_(n+1) = g(xₙ), from the rearrangement x = g(x)
4. The rate of convergence of the N–R method is of order 2.
5. Condition for convergence of the N–R method: |f(x) f″(x)| < |f′(x)|²
6. Condition for convergence of the iteration method: |g′(x)| < 1
7. Gauss elimination & Gauss–Jordan are direct methods; Gauss–Seidel and
Gauss–Jacobi are iterative methods.
8. Power method (to find the numerically largest eigenvalue): Y_(n+1) = A Xₙ
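Item 2 can be sketched directly in code. A minimal illustration (my choice of example equation): f(x) = x² − 2, whose root is √2; the iteration converges quadratically, as item 4 states.

```python
# Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
assert abs(root - 2 ** 0.5) < 1e-10
```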
UNIT 2
1. Lagrange's interpolation formula:
y = f(x) = [(x − x₁)(x − x₂)...(x − xₙ)] / [(x₀ − x₁)(x₀ − x₂)...(x₀ − xₙ)] · y₀
         + [(x − x₀)(x − x₂)...(x − xₙ)] / [(x₁ − x₀)(x₁ − x₂)...(x₁ − xₙ)] · y₁ + ...
         + [(x − x₀)(x − x₁)...(x − x_(n−1))] / [(xₙ − x₀)(xₙ − x₁)...(xₙ − x_(n−1))] · yₙ
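The formula above translates directly into a short routine. A minimal sketch with hypothetical sample points taken on y = x² (so the quadratic interpolant must reproduce the curve exactly):

```python
# Lagrange's interpolation formula for arbitrary sample points
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # basis polynomial factor
        total += term
    return total

# Points on y = x^2 (illustration); interpolating at x = 2 must give 4
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
assert abs(lagrange(xs, ys, 2.0) - 4.0) < 1e-12
```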
3. Newton's forward formula to find the derivatives at x = x₀:
dy/dx = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − ...]
d²y/dx² = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − ...]
d³y/dx³ = (1/h³)[Δ³y₀ − (3/2)Δ⁴y₀ + ...]
At a non-tabular point: d³y/dx³ = (1/h³)[∇³yₙ + ((12v + 18)/12)∇⁴yₙ + ...],
where v = (x − xₙ)/h
4. Newton's backward formula to find the derivatives at x = xₙ:
dy/dx = (1/h)[∇yₙ + (1/2)∇²yₙ + (1/3)∇³yₙ + ...]
d²y/dx² = (1/h²)[∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + ...]
d³y/dx³ = (1/h³)[∇³yₙ + (3/2)∇⁴yₙ + ...]
5. Trapezoidal rule:
∫_(x₀)^(x₀+nh) f(x) dx = (h/2)[(y₀ + yₙ) + 2(y₁ + y₂ + y₃ + ...)]
6. Simpson's 1/3 rule:
∫_(x₀)^(x₀+nh) f(x) dx = (h/3)[(y₀ + yₙ) + 4(y₁ + y₃ + y₅ + ...) + 2(y₂ + y₄ + y₆ + ...)]
7. Simpson's 3/8 rule:
∫_(x₀)^(x₀+nh) f(x) dx = (3h/8)[(y₀ + yₙ) + 3(y₁ + y₂ + y₄ + y₅ + y₇ + ...) + 2(y₃ + y₆ + y₉ + ...)]
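The trapezoidal and Simpson's 1/3 rules above can be coded in a few lines and checked on a known integral (my choice: ∫₀^π sin x dx = 2):

```python
import math

def trapezoidal(f, a, b, n):
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return (h / 2) * ((y[0] + y[-1]) + 2 * sum(y[1:-1]))

def simpson13(f, a, b, n):          # n must be even
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    # 4x weight on odd ordinates, 2x weight on interior even ordinates
    return (h / 3) * ((y[0] + y[-1]) + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

assert abs(trapezoidal(math.sin, 0, math.pi, 100) - 2) < 1e-3
assert abs(simpson13(math.sin, 0, math.pi, 100) - 2) < 1e-7
```

The much tighter tolerance for Simpson's rule reflects its higher order of accuracy (O(h⁴) versus O(h²)).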
10. Two point Gaussian quadrature formula:
∫_(−1)^1 f(x) dx = f(1/√3) + f(−1/√3)
11. Three point Gaussian quadrature formula:
∫_(−1)^1 f(x) dx = (5/9)[f(√(3/5)) + f(−√(3/5))] + (8/9) f(0)
12. If the range is not (−1, 1), then the idea to solve the Gaussian quadrature
problem is x = [(b − a)/2] z + [(b + a)/2].
UNIT 4
Runge–Kutta method of order four.
First formula:
k₁ = h f(x₀, y₀)
k₂ = h f(x₀ + h/2, y₀ + k₁/2)
k₃ = h f(x₀ + h/2, y₀ + k₂/2)
k₄ = h f(x₀ + h, y₀ + k₃)
and Δy = (1/6)(k₁ + 2k₂ + 2k₃ + k₄) & y(x₀ + h) = y₀ + Δy
For the second formula, replace 0 by 1 in the above formula.
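One step of the formula above, as a minimal sketch on the test problem dy/dx = y, y(0) = 1 (my choice; the exact value after one step is e^h):

```python
import math

# One fourth-order Runge-Kutta step for dy/dx = f(x, y)
def rk4_step(f, x0, y0, h):
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

h = 0.1
y1 = rk4_step(lambda x, y: y, 0.0, 1.0, h)
assert abs(y1 - math.exp(h)) < 1e-7   # local truncation error is O(h^5)
```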
5. Milne's predictor formula: y_(n+1) = y_(n−3) + (4h/3)[2y′_(n−2) − y′_(n−1) + 2y′ₙ]
6. Milne's corrector formula: y_(n+1) = y_(n−1) + (h/3)[y′_(n−1) + 4y′ₙ + y′_(n+1)]
7. Adams–Bashforth method:
Adams predictor formula: y_(n+1) = yₙ + (h/24)[55y′ₙ − 59y′_(n−1) + 37y′_(n−2) − 9y′_(n−3)]
Adams corrector formula: y_(n+1) = yₙ + (h/24)[9y′_(n+1) + 19y′ₙ − 5y′_(n−1) + y′_(n−2)]
UNIT 5
1. Finite difference approximations:
y″ᵢ(x) = [y_(i+1) − 2yᵢ + y_(i−1)]/h² & y′ᵢ(x) = [y_(i+1) − y_(i−1)]/(2h)
2. Solving the one dimensional heat equation by Bender–Schmidt's method [explicit
method]:
The given equation is a ∂²u/∂x² = ∂u/∂t.
Then u_(i,j+1) = λ u_(i−1,j) + (1 − 2λ) u_(i,j) + λ u_(i+1,j), where λ = k/(a h²).
For λ = 1/2 the above equation becomes
u_(i,j+1) = (1/2)[u_(i−1,j) + u_(i+1,j)], where k = (a/2) h².
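One Bender–Schmidt time step with λ = 1/2 reduces to averaging the two neighbours, which is easy to sketch. The grid values below are hypothetical (boundaries held at 0):

```python
# One explicit Bender-Schmidt step with lambda = 1/2:
# u[i, j+1] = (u[i-1, j] + u[i+1, j]) / 2
u = [0.0, 2.0, 4.0, 2.0, 0.0]          # values at time level j (illustration)
nxt = u[:]                              # next time level j+1
for i in range(1, len(u) - 1):          # interior points only
    nxt[i] = (u[i - 1] + u[i + 1]) / 2
```

Starting from the peaked profile above, one step flattens it to [0, 2, 2, 2, 0], showing the smoothing behaviour of the heat equation.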
3. Solving the one dimensional heat equation by Crank–Nicolson's method [implicit
method]:
The given equation is a ∂²u/∂x² = ∂u/∂t.
Then λ[u_(i+1,j+1) + u_(i−1,j+1)] − 2(λ + 1) u_(i,j+1) = 2(λ − 1) u_(i,j) − λ[u_(i+1,j) + u_(i−1,j)],
where λ = k/(a h²).
For λ = 1 the above equation becomes
u_(i,j+1) = (1/4)[u_(i−1,j+1) + u_(i+1,j+1) + u_(i−1,j) + u_(i+1,j)], where k = a h².
4. Solving the one dimensional wave equation:
The given equation is ∂²u/∂t² = a² ∂²u/∂x².
Then u_(i,j+1) = 2(1 − λ²a²) u_(i,j) + λ²a² [u_(i−1,j) + u_(i+1,j)] − u_(i,j−1),
where λ = k/h.
For λ²a² = 1 the above equation becomes
u_(i,j+1) = u_(i−1,j) + u_(i+1,j) − u_(i,j−1), where k = h/a.
5. Two dimensional Laplace and Poisson equations:
The given equation is ∂²u/∂x² + ∂²u/∂y² = f(x, y) (or) ∇²u = f(x, y).
      Discrete                                        Continuous
2     F(x) = P(X ≤ x) = Σ_(xᵢ ≤ x) p(xᵢ)              F(x) = P(X ≤ x) = ∫ f(x) dx
3     Mean E[X] = Σᵢ xᵢ p(xᵢ)                         Mean E[X] = ∫ x f(x) dx
4     E[X²] = Σᵢ xᵢ² p(xᵢ)                            E[X²] = ∫ x² f(x) dx
5     Var(X) = E[X²] − (E[X])²                        Var(X) = E[X²] − (E[X])²
6     Moment = E[Xʳ] = Σᵢ xᵢʳ pᵢ                      Moment = E[Xʳ] = ∫ xʳ f(x) dx
7     M.G.F M_X(t) = E[e^(tX)] = Σₓ e^(tx) p(x)       M.G.F M_X(t) = E[e^(tX)] = ∫ e^(tx) f(x) dx
4) E[aX + b] = a E[X] + b
5) Var(aX + b) = a² Var(X)
6) Var(aX ± bY) = a² Var(X) + b² Var(Y)
7) Standard deviation = √Var(X)
8) f(x) = F′(x)
9) p(X > a) = 1 − p(X ≤ a)
10) p(A/B) = p(A ∩ B)/p(B), p(B) ≠ 0
11) If A and B are independent, then p(A ∩ B) = p(A) · p(B).
12) 1st moment about the origin = E[X] = [(d/dt) M_X(t)] at t = 0 (mean)
2nd moment about the origin = E[X²] = [(d²/dt²) M_X(t)] at t = 0
The coefficient of tʳ/r! = E[Xʳ] (rth moment about the origin)
13) Limitation of M.G.F:
i) A random variable X may have no moments although its m.g.f exists.
ii) A random variable X can have its m.g.f and some or all moments, yet the
m.g.f does not generate the moments.
iii) A random variable X can have all or some moments, but m.g.f does not
exist except perhaps at one point.
14) Properties of M.G.F:
i) If Y = aX + b, then MY t e bt M X at .
ii) M cX t M X ct , where c is constant.
iii) If X and Y are two independent random variables then
M X Y t M X t M Y t .
15) P.D.F, M.G.F, mean and variance of all the distributions:
Sl.No  Distribution        P.D.F (P(X = x))                             M.G.F                          Mean       Variance
1      Binomial            nCₓ pˣ q^(n−x)                               (q + pe^t)ⁿ                    np         npq
2      Poisson             e^(−λ) λˣ / x!                               e^(λ(e^t − 1))                 λ          λ
3      Geometric           q^(x−1) p (or) qˣ p                          pe^t/(1 − qe^t)                1/p        q/p²
4      Negative binomial   (x + k − 1)C_(k−1) pᵏ qˣ                     [p/(1 − qe^t)]ᵏ                kq/p       kq/p²
5      Uniform             f(x) = 1/(b − a), a < x < b; 0 otherwise     (e^(bt) − e^(at))/[(b − a)t]   (a + b)/2  (b − a)²/12
6      Exponential         f(x) = λe^(−λx), x > 0, λ > 0; 0 otherwise   λ/(λ − t)                      1/λ        1/λ²
7      Gamma               f(x) = e^(−x) x^(λ−1)/Γ(λ), 0 < x < ∞, λ > 0   1/(1 − t)^λ                  λ          λ
8      Weibull             f(x) = αβ x^(β−1) e^(−αx^β), x > 0, α, β > 0
1) Σᵢ Σⱼ pᵢⱼ = 1 (discrete random variables)
∬ f(x, y) dx dy = 1 (continuous random variables)
2) Conditional probability function of X given Y: P(X = xᵢ / Y = yⱼ) = P(x, y)/P(y).
Conditional probability function of Y given X: P(Y = yⱼ / X = xᵢ) = P(x, y)/P(x).
P(X ≤ a / Y ≤ b) = P(X ≤ a, Y ≤ b) / P(Y ≤ b)
3) Conditional density function of X given Y: f(x/y) = f(x, y)/f(y).
Conditional density function of Y given X: f(y/x) = f(x, y)/f(x).
4) P(X ≤ a, Y ≤ b) = ∫₀^b ∫₀^a f(x, y) dx dy
7) P(X + Y ≥ 1) = 1 − P(X + Y < 1)
8) Correlation coefficient (discrete): ρ(x, y) = Cov(X, Y)/(σ_X σ_Y),
Cov(X, Y) = (1/n) ΣXY − X̄ Ȳ, σ_X = √[(1/n) ΣX² − X̄²], σ_Y = √[(1/n) ΣY² − Ȳ²]
9) Correlation coefficient (continuous): ρ(x, y) = Cov(X, Y)/(σ_X σ_Y)
Regression line of Y on X: y − E(y) = b_yx [x − E(x)], where b_yx = r σ_y/σ_x
Regression curve of X on Y: x = E(x/y) = ∫ x f(x/y) dx
Regression curve of Y on X: y = E(y/x) = ∫ y f(y/x) dy
f_Y(y) = f_X(x) |dx/dy| (one dimensional random variable)
f_UV(u, v) = f_XY(x, y) |J|, where J is the determinant of
| ∂x/∂u   ∂x/∂v |
| ∂y/∂u   ∂y/∂v |   (two dimensional random variables)
15) Central limit theorem (Liapounoff's form):
If X₁, X₂, ..., Xₙ is a sequence of independent R.V.s with E[Xᵢ] = μᵢ and
Var(Xᵢ) = σᵢ², i = 1, 2, ..., n, and if Sₙ = X₁ + X₂ + ... + Xₙ, then under certain
general conditions Sₙ follows a normal distribution with mean μ = Σᵢ μᵢ and
variance σ² = Σᵢ σᵢ² as n → ∞.
16) Central limit theorem (Lindberg–Levy's form):
If X₁, X₂, ..., Xₙ is a sequence of independent identically distributed R.V.s with
E[Xᵢ] = μ and Var(Xᵢ) = σ², i = 1, 2, ..., n, and if Sₙ = X₁ + X₂ + ... + Xₙ, then
under certain general conditions Sₙ follows a normal distribution with mean nμ and
variance nσ² as n → ∞.
Note: z = (Sₙ − nμ)/(σ√n) (for n variables), z = (X̄ − μ)/(σ/√n) (for the sample mean)
UNIT-III (MARKOV PROCESSES AND MARKOV CHAINS)
1) Random Process:
A random process is a collection of random variables {X(s,t)} that are
functions of a real variable, namely time t where s S and t T.
4) Wide Sense Stationary (or) Weak Sense Stationary (or) Covariance Stationary:
A random process is said to be WSS or Covariance Stationary if it satisfies the
following conditions.
i) The mean of the process is constant (i.e) E X ( t ) constant .
ii) The autocorrelation function depends only on τ, i.e.
R_XX(τ) = E[X(t) · X(t + τ)].
5) Properties of the autocorrelation function:
(i) (E[X(t)])² = lim_(τ→∞) R_XX(τ)
(ii) E[X²(t)] = R_XX(0)
6) Markov process:
A random process in which the future value depends only on the present value
and not on the past values is called a Markov process. It is symbolically
represented by
P[X(t_(n+1)) ≤ x_(n+1) / X(tₙ) = xₙ, X(t_(n−1)) = x_(n−1), ..., X(t₀) = x₀]
= P[X(t_(n+1)) ≤ x_(n+1) / X(tₙ) = xₙ],
where t₀ ≤ t₁ ≤ t₂ ≤ ... ≤ tₙ ≤ t_(n+1).
7) Markov Chain:
If for all n, P[Xₙ = aₙ / X_(n−1) = a_(n−1), X_(n−2) = a_(n−2), ..., X₀ = a₀]
= P[Xₙ = aₙ / X_(n−1) = a_(n−1)], then the process {Xₙ}, n = 0, 1, 2, ... is called a
Markov chain, where a₀, a₁, a₂, ..., aₙ, ... are called the states of the Markov chain.
8) Transition Probability Matrix (tpm):
When the Markov Chain is homogenous, the one step transition probability is
denoted by Pij. The matrix P = {Pij} is called transition probability matrix.
9) Chapman–Kolmogorov theorem:
If P is the tpm of a homogeneous Markov chain, then the n step tpm P⁽ⁿ⁾ is
equal to Pⁿ, i.e. [Pᵢⱼ⁽ⁿ⁾] = [Pᵢⱼ]ⁿ.
10) Markov chain property: If π = (π₁, π₂, π₃), then πP = π and
π₁ + π₂ + π₃ = 1.
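The property πP = π with Σπᵢ = 1 can be illustrated on a hypothetical two-state chain (the tpm below is my example). For a 2-state chain the stationary distribution has the closed form π = (p₂₁, p₁₂)/(p₁₂ + p₂₁):

```python
# Hypothetical two-state transition probability matrix (rows sum to 1)
P = [[0.6, 0.4],
     [0.2, 0.8]]

# Closed-form stationary distribution for a 2-state chain
pi1 = P[1][0] / (P[0][1] + P[1][0])
pi2 = P[0][1] / (P[0][1] + P[1][0])

# Verify pi P = pi and pi1 + pi2 = 1
assert abs(pi1 * P[0][0] + pi2 * P[1][0] - pi1) < 1e-12
assert abs(pi1 * P[0][1] + pi2 * P[1][1] - pi2) < 1e-12
assert abs(pi1 + pi2 - 1.0) < 1e-12
```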
(i) P[1 occurrence in (t, t + Δt)] = λΔt + O(Δt)
(ii) P[0 occurrences in (t, t + Δt)] = 1 − λΔt + O(Δt)
(iii) P[2 or more occurrences in (t, t + Δt)] = O(Δt)
(iv) X(t) is independent of the number of occurrences of the event in any
interval.
Probability law of the Poisson process: P[X(t) = n] = e^(−λt)(λt)ⁿ/n!, n = 0, 1, 2, ...
3) Ls = ρ/(1 − ρ) = λ/(μ − λ)
4) Lq = ρ²/(1 − ρ) = λ²/[μ(μ − λ)]
5) Ws = 1/(μ − λ)
6) Wq = λ/[μ(μ − λ)]
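The four measures above are tied together by Little's formulas Ls = λWs and Lq = λWq, which is easy to verify numerically for hypothetical rates (λ and μ below are my illustration):

```python
# Single-server queue measures for hypothetical arrival/service rates
lam, mu = 3.0, 5.0
rho = lam / mu                  # traffic intensity, must be < 1

Ls = rho / (1 - rho)            # mean number in the system
Lq = rho ** 2 / (1 - rho)       # mean number in the queue
Ws = 1 / (mu - lam)             # mean time in the system
Wq = rho / (mu - lam)           # mean waiting time in the queue

# Little's formulas: Ls = lam * Ws and Lq = lam * Wq
assert abs(Ls - lam * Ws) < 1e-12
assert abs(Lq - lam * Wq) < 1e-12
```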
2) P₀ = [Σ_(n=0)^(s−1) (1/n!)(λ/μ)ⁿ + (1/s!)(λ/μ)ˢ · 1/(1 − ρ)]⁻¹, where ρ = λ/(sμ)
3) Lq = [(λ/μ)^(s+1) / (s · s!(1 − ρ)²)] P₀
4) Ls = Lq + λ/μ
5) Wq = Lq/λ
6) Ws = Ls/λ
8) The probability that an arrival enters the service without waiting = 1 − P(an
arrival has to wait) = 1 − P(N ≥ s)
9) P(w > t) = e^(−μt) [1 + (λ/μ)ˢ P₀ (1 − e^(−μt(s − 1 − λ/μ))) / (s!(1 − ρ)(s − 1 − λ/μ))]
4) Ls = ρ/(1 − ρ) − (k + 1)ρ^(k+1)/(1 − ρ^(k+1))
5) Lq = Ls − λ′/μ
6) Ws = Ls/λ′
7) Wq = Lq/λ′
8) P(a customer is turned away) = Pₖ = ρᵏ P₀
3) Pₙ = (1/n!)(λ/μ)ⁿ P₀ for n ≤ s,
   Pₙ = [1/(s! s^(n−s))](λ/μ)ⁿ P₀ for s ≤ n ≤ k
4) Effective arrival rate: λ′ = μ[s − Σ_(n=0)^(s−1) (s − n)Pₙ]
5) Lq = [(λ/μ)ˢ ρ / (s!(1 − ρ)²)] [1 − ρ^(k−s) − (k − s)ρ^(k−s)(1 − ρ)] P₀
6) Ls = Lq + λ′/μ
7) Wq = Lq/λ′
8) Ws = Ls/λ′
UNIT-V (NON-MARKOVIAN & QUEUEING NETWORKS)
1) Pollaczek–Khinchine formula:
Ls = λE(t) + λ²[Var(t) + (E(t))²] / [2(1 − λE(t))]
(or) Ls = ρ + (ρ² + λ²σ²)/[2(1 − ρ)], where ρ = λE(t) and σ² = Var(t)
2) Little's formulas:
Ls = ρ + (ρ² + λ²σ²)/[2(1 − ρ)]
Lq = Ls − ρ
Ws = Ls/λ
Wq = Lq/λ
3) Series queue (or) tandem queue:
The balance equations are
λP₀₀ = μ₂P₀₁
μ₁P₁₀ = λP₀₀ + μ₂P₁₁
(λ + μ₂)P₀₁ = μ₁P₁₀ + μ₂P_b1
(μ₁ + μ₂)P₁₁ = λP₀₁
μ₂P_b1 = μ₁P₁₁
4) Open Jackson network:
i) Jackson's flow balance equation: λⱼ = rⱼ + Σ_(i=1)^k λᵢPᵢⱼ
5) Closed Jackson network:
P(n₁, n₂, ..., nₖ) = C_N ρ₁^(n₁) ρ₂^(n₂) ... ρₖ^(nₖ),
where C_N⁻¹ = Σ_(n₁+n₂+...+nₖ=N) ρ₁^(n₁) ρ₂^(n₂) ... ρₖ^(nₖ)
(or) P(n₁, n₂, ..., nₖ) = C_N [ρ₁^(n₁)/a₁(n₁)] [ρ₂^(n₂)/a₂(n₂)] ... [ρₖ^(nₖ)/aₖ(nₖ)],
where C_N⁻¹ = Σ_(n₁+n₂+...+nₖ=N) [ρ₁^(n₁)/a₁(n₁)] ... [ρₖ^(nₖ)/aₖ(nₖ)]
and aᵢ(nᵢ) = nᵢ! for nᵢ ≤ sᵢ, aᵢ(nᵢ) = sᵢ! sᵢ^(nᵢ−sᵢ) for nᵢ ≥ sᵢ.
15) P.D.F, M.G.F, mean and variance of all the distributions:
Sl.No  Distribution   P.D.F (P(X = x))                             M.G.F                          Mean       Variance
1      Binomial       nCₓ pˣ q^(n−x)                               (q + pe^t)ⁿ                    np         npq
2      Poisson        e^(−λ) λˣ / x!                               e^(λ(e^t − 1))                 λ          λ
3      Geometric      q^(x−1) p (or) qˣ p                          pe^t/(1 − qe^t)                1/p        q/p²
4      Uniform        f(x) = 1/(b − a), a < x < b; 0 otherwise     (e^(bt) − e^(at))/[(b − a)t]   (a + b)/2  (b − a)²/12
5      Exponential    f(x) = λe^(−λx), x > 0, λ > 0; 0 otherwise   λ/(λ − t)                      1/λ        1/λ²
6      Gamma          f(x) = e^(−x) x^(λ−1)/Γ(λ), 0 < x < ∞, λ > 0   1/(1 − t)^λ                  λ          λ
7      Normal         f(x) = [1/(σ√(2π))] e^(−(x−μ)²/(2σ²))        e^(μt + σ²t²/2)                μ          σ²
6) Marginal density function of X: f(x) = f_X(x) = ∫ f(x, y) dy
UNIT-III (MARKOV PROCESSES AND MARKOV CHAINS)
1) Random Process:
A random process is a collection of random variables {X(s,t)} that are
functions of a real variable, namely time t where s S and t T.
Example: If Xₙ represents the outcome of the nth toss of a fair die, then {Xₙ : n ≥ 1} is a
discrete random sequence, since T = {1, 2, 3, ...} and S = {1, 2, 3, 4, 5, 6}.
4) Wide Sense Stationary (or) Weak Sense Stationary (or) Covariance Stationary:
A random process is said to be WSS or Covariance Stationary if it satisfies the
following conditions.
i) The mean of the process is constant (i.e) E X ( t ) constant .
ii) Auto correlation function depends only on (i.e)
RXX ( ) E X ( t ). X ( t )
5) Time average:
The time average of a random process X(t) is defined as
X̄_T = (1/2T) ∫_(−T)^T X(t) dt.
If the interval is (0, T), then the time average is X̄_T = (1/T) ∫₀^T X(t) dt.
6) Ergodic Process:
A random process X ( t ) is called ergodic if all its ensemble averages are
interchangeable with the corresponding time average X T .
7) Mean ergodic:
Let X(t) be a random process with mean E[X(t)] = μ and time average X̄_T;
then X(t) is said to be mean ergodic if X̄_T → μ as T → ∞, i.e.
E[X(t)] = lim_(T→∞) X̄_T.
Probability law of the Poisson process: P[X(t) = x] = e^(−λt)(λt)ˣ/x!, x = 0, 1, 2, ...
Power spectral density: S_XX(ω) = ∫ R_XX(τ) e^(−iωτ) dτ
Autocovariance: C_XX(τ) = R_XX(τ) − E[X(t)] E[X(t + τ)]
5) General formulas:
i) ∫ e^(ax) cos bx dx = [e^(ax)/(a² + b²)](a cos bx + b sin bx)
ii) ∫ e^(ax) sin bx dx = [e^(ax)/(a² + b²)](a sin bx − b cos bx)
iii) x² + ax = (x + a/2)² − a²/4
iv) sinθ = (e^(iθ) − e^(−iθ))/(2i)
v) cosθ = (e^(iθ) + e^(−iθ))/2
1) Linear system:
f is called a linear system if it satisfies
f[a₁X₁(t) + a₂X₂(t)] = a₁f[X₁(t)] + a₂f[X₂(t)]
2) Time invariant system:
If Y(t + h) = f[X(t + h)], then f is called a time invariant system.
3) Relation between input X(t) and output Y(t):
Y(t) = ∫ h(u) X(t − u) du
4) S_YY(ω) = S_XX(ω) |H(ω)|²
If H(ω) is not given, use the formula H(ω) = ∫ h(t) e^(−jωt) dt.
5) Contour integral:
∫ e^(imx)/(a² + x²) dx = (π/a) e^(−ma)   (one of the results)
6) F⁻¹[1/(a² + s²)] = [1/(2a)] e^(−a|x|)   (from the Fourier transform)
Applications of the χ² test:
i) To test the goodness of fit.
ii) To test the independence of attributes.
iii) To test the significance of discrepancy between experimental values and the
theoretical values.
Application of F test
i) To test whether there is any significant difference between two estimates of
population variance.
ii) To test if the two samples have come from the same population.
Application of t test
i) Test of Hypothesis about the population mean.
ii) Test of Hypothesis about the difference between two means.
iii) Test of Hypothesis about the difference between two means with dependent
samples.
iv) Test of Hypothesis about the observed sample correlation coefficient and sample
regression coefficient.
3. Define F variate.
F = greater variance / smaller variance,
where S₁² = Σ(x − x̄)²/(n₁ − 1) and S₂² = Σ(y − ȳ)²/(n₂ − 1). Here S₁² and S₂²
are variances and S₁ and S₂ are S.D.s.
4. Write down any two properties of the chi-square distribution.
i) The mean and variance of the chi-square distribution are n and 2n respectively.
ii) As n → ∞, the chi-square distribution approaches a normal distribution.
iii) The sum of independent chi-square variates is also a chi-square variate.
Type 2: F test
Type 3: χ² test
(a) Goodness of fit
(b) Independence of attributes
Student's t test:
Single mean (S.D. given directly): t = (x̄ − μ)/(σ/√n)
where x̄ = sample mean, μ = population mean, n = sample size, σ = standard deviation.
Degrees of freedom = n − 1.
Single mean (S.D. not given directly): t = (x̄ − μ)/(s/√(n − 1))
where degrees of freedom = n − 1.
Difference of means: t = (x̄₁ − x̄₂) / [s √(1/n₁ + 1/n₂)],
where s² = (n₁s₁² + n₂s₂²)/(n₁ + n₂ − 2),
degrees of freedom = n₁ + n₂ − 2.
F - test:
F = greater variance / smaller variance,
where S₁² = Σ(x − x̄)²/(n₁ − 1) and S₂² = Σ(y − ȳ)²/(n₂ − 1). Here S₁² and S₂² are
variances and S₁ and S₂ are S.D.s (capital S denotes population S.D. and small s
denotes sample S.D.).
χ² test for a 2 × 2 contingency table:
   a       b       a + b
   c       d       c + d
 a + c   b + d       N
χ² = N(ad − bc)² / [(a + b)(c + d)(a + c)(b + d)]
Degrees of freedom = (r − 1)(s − 1), where r = number of rows, s = number of columns.
Difference of means: z = (x̄₁ − x̄₂) / √(σ₁²/n₁ + σ₂²/n₂) (or) (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)
Single proportion: z = (p − P)/√(PQ/n),
where p = sample proportion, P = population proportion, Q = 1 − P
Difference of proportions: z = (p₁ − p₂) / √[pq(1/n₁ + 1/n₂)],
where p = (n₁p₁ + n₂p₂)/(n₁ + n₂) and q = 1 − p
Truth tables of the basic connectives:
p   q   p∧q   p∨q   p→q   q→p   p↔q
T   T    T     T     T     T     T
T   F    F     T     F     T     F
F   T    F     T     T     F     F
F   F    F     F     T     T     T
Negation:
p   ¬p
T    F
F    T
Laws of logical equivalence:
Idempotent law:       p ∨ p ≡ p,  p ∧ p ≡ p
Identity law:         p ∨ F ≡ p,  p ∧ T ≡ p
Dominant law:         p ∨ T ≡ T,  p ∧ F ≡ F
Complement law:       p ∨ ¬p ≡ T,  p ∧ ¬p ≡ F
Commutative law:      p ∨ q ≡ q ∨ p,  p ∧ q ≡ q ∧ p
Associative law:      (p ∨ q) ∨ r ≡ p ∨ (q ∨ r),  (p ∧ q) ∧ r ≡ p ∧ (q ∧ r)
Distributive law:     p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r),  p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r)
Absorption law:       p ∨ (p ∧ q) ≡ p,  p ∧ (p ∨ q) ≡ p
De Morgan's law:      ¬(p ∨ q) ≡ ¬p ∧ ¬q,  ¬(p ∧ q) ≡ ¬p ∨ ¬q
Double negation law:  ¬(¬p) ≡ p
Equivalences involving the conditional:
1. p → q ≡ ¬p ∨ q
2. p → q ≡ ¬q → ¬p
3. ¬(p → q) ≡ p ∧ ¬q
4. (p → q) ∧ (p → r) ≡ p → (q ∧ r)
5. (p → r) ∧ (q → r) ≡ (p ∨ q) → r
Equivalences involving the biconditional:
1. p ↔ q ≡ (p → q) ∧ (q → p)
2. p ↔ q ≡ ¬p ↔ ¬q
3. ¬(p ↔ q) ≡ p ↔ ¬q
4. p ↔ q ≡ (p ∧ q) ∨ (¬p ∧ ¬q)
6) Tautological implication:
A ⇒ B if and only if A → B is a tautology, i.e. to prove A ⇒ B, it is enough to
prove that A → B is a tautology.
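A tautological implication can be checked mechanically by enumerating every truth assignment, which is a direct translation of the definition above. A minimal sketch verifying modus ponens and one De Morgan law:

```python
from itertools import product

def implies(a, b):
    """Truth function of the conditional a -> b."""
    return (not a) or b

# Modus ponens as a tautological implication:
# ((p -> q) and p) -> q is true in every row of the truth table
assert all(implies(implies(p, q) and p, q)
           for p, q in product([True, False], repeat=2))

# De Morgan's law checked the same way: not(p or q) == (not p) and (not q)
assert all((not (p or q)) == ((not p) and (not q))
           for p, q in product([True, False], repeat=2))
```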
7) The Theory of Inferences:
The analysis of the validity of the formula from the given set of premises by using
derivation is called theory of inferences
8) Rules for inferences theory:
Rule P:
A given premise may be introduced at any stage in the derivation.
Rule T:
A formula S may be introduced in a derivation if S is tautologically implied by one or
more of the preceding formulae in the derivation.
Rule CP:
If we can derive S from R and a set of given premises, then we can derive R → S from
the set of premises alone. In such a case R is taken as an additional premise (assumed
premise). Rule CP is also called the deduction theorem.
9) Indirect Method of Derivation:
Whenever the assumed premise is used in the derivation, then the method of derivation
is called indirect method of derivation.
10) Table of logical implications:
Name of law              Implication
Simplification:          p ∧ q ⇒ p;  p ∧ q ⇒ q
Addition:                p ⇒ p ∨ q;  q ⇒ p ∨ q
Disjunctive syllogism:   ¬p, p ∨ q ⇒ q
Modus ponens:            p, p → q ⇒ q
Modus tollens:           ¬q, p → q ⇒ ¬p
Hypothetical syllogism:  p → q, q → r ⇒ p → r
Unit II (Combinatorics)
1) Principle of Mathematical Induction:
Let P(n) be a statement or proposition involving the positive integers n.
Step 1: Verify that P(1) is true.
Step 2: Assume that P(k) is true.
Step 3: Prove that P(k + 1) is true.
2) Principle of strong induction:
Let P(n) be a statement or proposition involving the positive integers n.
Step 1: Verify that P(1) is true.
Step 2: Assume that P(n) is true for all integers 1 ≤ n ≤ k.
Step 3: Prove that P(k + 1) is true.
3) The Pigeonhole Principle:
If n pigeons are assigned to m pigeonholes and m < n, then at least one pigeonhole
contains two or more pigeons.
4) The Extended Pigeonhole Principle:
If n pigeons are assigned to m pigeonholes, then one pigeonhole must contain at least
⌊(n − 1)/m⌋ + 1 pigeons.
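A quick Python illustration of both principles (the function name `min_crowded` is our own), with a brute-force check over every possible assignment of 6 pigeons to 3 holes:

```python
from collections import Counter
from itertools import product

def min_crowded(n, m):
    # Extended pigeonhole: some hole holds at least floor((n-1)/m) + 1 pigeons.
    return (n - 1) // m + 1

print(min_crowded(10, 3))   # 4

# Every assignment of 6 pigeons to 3 holes has a hole with
# at least min_crowded(6, 3) = 2 pigeons.
for holes in product(range(3), repeat=6):
    assert max(Counter(holes).values()) >= min_crowded(6, 3)
print("verified")
```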
5) Recurrence relation:
An equation that expresses a_n, the general term of the sequence {a_n}, in terms of
one or more of the previous terms a_0, a_1, …, a_{n−1}, for all integers n, is called
a recurrence relation for a_n, or a difference equation.
6) Working rule for solving homogeneous recurrence relation:
Step 1: Write the given recurrence relation (with constant coefficients) in the form
        C_0 a_n + C_1 a_{n−1} + … + C_k a_{n−k} = 0.
Step 2: Write the characteristic equation of the recurrence relation
        C_0 r^k + C_1 r^{k−1} + … + C_k = 0.
Step 3: Find all the roots of the characteristic equation, namely r_1, r_2, …, r_k.
Step 4:
Case (i): If all the roots are distinct, then the general solution is
        a_n = b_1 r_1^n + b_2 r_2^n + … + b_k r_k^n.
Case (ii): If all the roots are equal (say r), then the general solution is
        a_n = (b_1 + n b_2 + n² b_3 + … + n^{k−1} b_k) r^n.
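A worked example of this rule, with a recurrence chosen by us (not from the notes): a_n − 5a_{n−1} + 6a_{n−2} = 0, a_0 = 2, a_1 = 5, whose characteristic roots are 2 and 3. The closed form is checked in Python against direct iteration:

```python
# Characteristic equation r^2 - 5r + 6 = 0 has distinct roots 2 and 3,
# so a_n = b1*2^n + b2*3^n; the initial conditions give b1 = b2 = 1.
def closed_form(n):
    return 2 ** n + 3 ** n

def by_iteration(n):
    a, b = 2, 5                    # a_0, a_1
    for _ in range(n - 1):
        a, b = b, 5 * b - 6 * a    # a_n = 5 a_{n-1} - 6 a_{n-2}
    return a if n == 0 else b

print(all(closed_form(n) == by_iteration(n) for n in range(12)))  # True
```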
2) Monoid:
If a semi group ⟨G, *⟩ has an identity element with respect to the operation *, then
⟨G, *⟩ is called a monoid.
Example: If N is the set of natural numbers, then ⟨N, +⟩ and ⟨N, ×⟩ are monoids with
the identity elements 0 and 1 respectively. ⟨E, +⟩ and ⟨E, ×⟩ are semi groups but not
monoids, where E is the set of all positive even numbers.
3) Sub semi groups:
If ⟨G, *⟩ is a semi group and H ⊆ G is closed under the operation *, then ⟨H, *⟩ is
called a sub semi group of ⟨G, *⟩.
Example: If E is the set of all even non-negative integers, then ⟨E, +⟩ is a sub semi
group of ⟨N, +⟩.
8) Lagrange's theorem:
The order of each subgroup of a finite group is a divisor of the order of the group.
9) Cyclic group:
A group ⟨G, *⟩ is said to be cyclic if there exists an element a ∈ G such that every
element of G is generated by a, i.e., G = ⟨a⟩ = {a, a², …, aⁿ = e}.
Example: G = {1, −1, i, −i} is a cyclic group under multiplication. The generator is
i, because i¹ = i, i² = −1, i³ = −i, i⁴ = 1.
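The example can be confirmed in Python, where `1j` denotes i:

```python
# Powers of i cycle through all four elements, so i generates G
# under complex multiplication.
G = {1, -1, 1j, -1j}
powers = {1j ** k for k in range(1, 5)}   # i, i^2, i^3, i^4
print(powers == G)   # True
```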
10) Normal subgroup:
A subgroup H of the group G is said to be a normal subgroup under the operation *, if
for any a ∈ G, aH = Ha.
11) Kernel of a homomorphism:
If f is a group homomorphism from ⟨G, *⟩ to ⟨G′, ∘⟩, then the set of elements of G
which are mapped into e′, the identity element of G′, is called the kernel of the
homomorphism f and is denoted by ker f.
12) Fundamental theorem of homomorphism:
If f is a homomorphism of G onto G′ with kernel K, then G/K is isomorphic to G′.
13) Cayley's theorem:
Every finite group of order n is isomorphic to a permutation group of degree n .
14) Ring:
An algebraic system ⟨S, +, ·⟩ is called a ring if the binary operations + and · on S
satisfy the following properties.
(i) ⟨S, +⟩ is an abelian group
(ii) ⟨S, ·⟩ is a semi group
(iii) The operation · is distributive over +.
Example: The set of all integers Z and the set of all rational numbers Q are rings
under the usual addition and usual multiplication.
15) Commutative ring:
A ring ⟨S, +, ·⟩ in which the multiplication · is commutative is called a commutative
ring.
17) Field:
A commutative ring ⟨S, +, ·⟩ which has more than one element, such that every non-zero
element of S has a multiplicative inverse in S, is called a field.
Example: The ring of rational numbers ⟨Q, +, ·⟩ is a field, since it is a commutative
ring and each non-zero element is invertible.
4) General formulas:
i) glb{a, b} = a * b = a ∧ b
ii) lub{a, b} = a ⊕ b = a ∨ b
iii) a * b = a ⟺ a ≤ b
iv) a ⊕ b = b ⟺ a ≤ b
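These formulas can be checked concretely in the lattice of divisors of 12 under divisibility, where * is gcd and ⊕ is lcm (a standard example, not from the notes):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

L = [1, 2, 3, 4, 6, 12]            # divisors of 12
leq = lambda a, b: b % a == 0      # a <= b  means  "a divides b"

# glb = gcd, lub = lcm, and:  a <= b  iff  a*b = a  iff  a(+)b = b
print(all((leq(a, b)) == (gcd(a, b) == a) == (lcm(a, b) == b)
          for a in L for b in L))  # True
```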
5) Properties:
Name of Law             Primal form                         Dual form
Commutative law         a * b = b * a                       a ⊕ b = b ⊕ a
Associative law         (a * b) * c = a * (b * c)           (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c)
Distributive law        a * (b ⊕ c) = (a * b) ⊕ (a * c)     a ⊕ (b * c) = (a ⊕ b) * (a ⊕ c)
Absorption law          a * (a ⊕ b) = a                     a ⊕ (a * b) = a
Complement law          a * a′ = 0                          a ⊕ a′ = 1
De Morgan's law         (a * b)′ = a′ ⊕ b′                  (a ⊕ b)′ = a′ * b′
Double Negation law     (a′)′ = a
6) Complemented Lattices:
A lattice ⟨L, *, ⊕⟩ is said to be complemented if for every a ∈ L there exists
a′ ∈ L such that a * a′ = 0 and a ⊕ a′ = 1.
7) De Morgan's laws:
(a ⊕ b)′ = a′ * b′ and (a * b)′ = a′ ⊕ b′.
8) Complete Lattice:
A lattice ⟨L, *, ⊕⟩ is complete if every non-empty subset of L has a glb and a lub.
9) Lattice Homomorphism:
Let ⟨L, *, ⊕⟩ and ⟨S, ∧, ∨⟩ be two lattices. A mapping g : L → S is called a lattice
homomorphism if g(a * b) = g(a) ∧ g(b) and g(a ⊕ b) = g(a) ∨ g(b).
10) Modular Lattice:
A lattice ⟨L, *, ⊕⟩ is said to be modular if for any a, b, c ∈ L
i) a ≤ c ⟹ a ⊕ (b * c) = (a ⊕ b) * c
ii) a ≥ c ⟹ a * (b ⊕ c) = (a * b) ⊕ c
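A concrete check of condition (i) in the divisor lattice of 36, which is distributive and hence modular (example our own):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

L = [1, 2, 3, 4, 6, 9, 12, 18, 36]   # divisors of 36
# Modular identity: a <= c  implies  a (+) (b * c) = (a (+) b) * c,
# with * = gcd, (+) = lcm, and "a <= c" meaning "a divides c".
print(all(lcm(a, gcd(b, c)) == gcd(lcm(a, b), c)
          for a in L for b in L for c in L if c % a == 0))  # True
```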
11) Chain in Lattice:
A lattice ⟨L, ≤⟩ is a chain if any two of its elements are comparable, i.e., for all
a, b ∈ L either a ≤ b or b ≤ a.
12) Condition for the algebraic lattice:
A lattice ⟨L, *, ⊕⟩ is said to be algebraic if it satisfies the Commutative Law,
Associative Law, Absorption Law and Idempotent Law (a * a = a, a ⊕ a = a).
13) Isotone property:
Let ⟨L, *, ⊕⟩ be a lattice. The binary operations * and ⊕ are said to possess the
isotone property if b ≤ c ⟹ a * b ≤ a * c and a ⊕ b ≤ a ⊕ c.
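The isotone property can likewise be verified in a divisor lattice (example our own, again with * = gcd and ⊕ = lcm):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

L = [1, 2, 3, 4, 6, 12]          # divisors of 12
leq = lambda x, y: y % x == 0    # divisibility order
# b <= c  implies  a*b <= a*c  and  a(+)b <= a(+)c
print(all(leq(gcd(a, b), gcd(a, c)) and leq(lcm(a, b), lcm(a, c))
          for a in L for b in L for c in L if leq(b, c)))   # True
```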