
Signal Modeling & Estimation

Part 1: Random Process


M S Prasad



( This is part of a six-part series of Lecture Notes / Handouts for Post Graduate Students. )

White Noise

The term white noise was originally an engineering term, and there are subtle but important differences in the way it is defined in various econometric texts. Here we define white noise as a sequence of uncorrelated random variables with zero mean and constant variance σ² > 0. If it is necessary to make the stronger assumptions of independence or normality, this will be made clear in the context, and we will refer to independent white noise or normal (Gaussian) white noise. Be careful of the various definitions and of terms like weak, strong and strict white noise. The argument given earlier for the second-order stationarity of normal white noise carries over to white noise in general; however, white noise need not be strictly stationary.
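As a quick numerical illustration (a minimal NumPy sketch, not part of the original notes; the variance value and sample size are arbitrary assumptions), one can generate Gaussian white noise and check that the sample mean is near zero and the sample autocorrelation is near zero at non-zero lags:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 2.0                                          # assumed variance, sigma^2 > 0
e = rng.normal(0.0, np.sqrt(sigma2), size=100_000)    # Gaussian white noise samples

print("sample mean    :", e.mean())                   # close to 0
print("sample variance:", e.var())                    # close to sigma^2
# sample autocorrelation at a few non-zero lags should be close to 0
for k in (1, 2, 10):
    r_k = np.mean(e[:-k] * e[k:])
    print(f"r({k}) ~", r_k)
```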

Some important Definitions

Joint Moments / Correlations


The correlation of two random variables X and Y is defined as

r_xy = E{X Y*}

(some books write it as R_xy), where, in case the RVs are complex, Y* denotes the complex conjugate of Y. Another important parameter related to correlation is the covariance, defined as

c_xy = E{(X − m_x)(Y − m_y)*},

where m_x and m_y are the ensemble averages (means) of the two random variables X and Y. If X and Y have zero mean, it can be seen that the covariance equals the correlation. To make the covariance invariant to scaling of the data, it is frequently normalized as below.

Such a normalized covariance is known as the CORRELATION COEFFICIENT,

ρ_xy = c_xy / (σ_x σ_y)

(sometimes it is also written as S_xy). This coefficient has great relevance in data forecasting, data processing, and signal extraction in a noisy environment. For zero-mean variables it reduces to the normalized correlation, and it is bounded: |ρ_xy| ≤ 1.
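For instance, the correlation coefficient can be estimated from data as in the following sketch (not from the notes; the linear relation used to generate correlated samples is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 2.0 * x + rng.normal(scale=0.5, size=x.size)   # samples correlated with x (assumed model)

mx, my = x.mean(), y.mean()
c_xy = np.mean((x - mx) * (y - my))                # sample covariance
rho = c_xy / (x.std() * y.std())                   # correlation coefficient
print("rho_xy =", rho)                             # close to 1; always |rho| <= 1
```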

Important Deductions


For a discrete random process x(n), the autocorrelation is defined as

r_x(k, l) = E{x(k) x*(l)},

and the cross-correlation of two processes x(n) and y(n) as

r_xy(k, l) = E{x(k) y*(l)},

for all k and l.

Important Property


Independent Random Variables (Uncorrelated)


Two RVs are said to be statistically independent if their joint probability density is separable, i.e.

f_xy(x, y) = f_x(x) f_y(y).

It also amounts to saying that the mean of their product is separable, E{XY} = E{X} E{Y}. This implies that the covariance is zero; hence two independent RVs are uncorrelated. One important consequence is that the variance of the sum is the sum of the two variances, i.e.

Var(X + Y) = Var(X) + Var(Y).

A property related to zero covariance is orthogonality: two random variables are said to be orthogonal if their correlation is zero, i.e. E{XY*} = 0 (note that this differs from being uncorrelated, which means zero covariance).

Linear Mean Square Estimators

Let us consider the problem of estimating a random variable Y in terms of another random variable X. This situation arises when Y is not directly measurable, so we measure another quantity that bears some (linear) relation to Y and use it to estimate, or guess, Y. In the estimation process we define a cost criterion that is to be minimized in order to build confidence in our estimate of Y. For linear estimators we generally minimize the mean square error (MSE), where Y^ represents the estimate of Y:

ξ = E{ (Y − Y^)² }.


{ Note: later, in the discussion of the optimum filter, we will find that the optimum estimate is the conditional mean, Y^ = E(Y | X). }

Basic Concept: Linear Estimators

Let us assume that our estimator is of the form Y^ = aX + b (a linear relation). We have to find the a and b that minimize the mean square error

ξ = E{ (Y − aX − b)² }.

The minimum can be found by differentiating the above expression with respect to a and b and setting the derivatives to zero.

If we examine the first of these equations, we get E{ (Y − aX − b) X } = 0, which implies that the error is orthogonal to the data used to estimate Y, that is, to X. This is one of the fundamental properties of linear mean square estimators (the orthogonality principle). By solving the above equations we get the values of a and b as under.


Solving these equations gives

a = c_xy / σ_x² = ρ_xy σ_y / σ_x,

and hence we can further find a simplified expression for b,

b = m_y − a m_x.

Here we have taken m_x = E{X}, m_y = E{Y}, σ_x² = E{X²} − m_x², σ_y² = E{Y²} − m_y², and also ρ_xy = c_xy / (σ_x σ_y). Hence the estimate of Y can now be written as

Y^ = m_y + ρ_xy (σ_y / σ_x)(X − m_x),

and the minimum mean square error can be derived as

ξ_min = σ_y² (1 − ρ_xy²).
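A small numerical check of these formulas (a sketch, not part of the notes; the joint distribution of X and Y below is an arbitrary assumption) also illustrates the orthogonality of the error to the data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=2.0, size=200_000)
y = 0.7 * x + 3.0 + rng.normal(scale=1.0, size=x.size)   # assumed joint model for (X, Y)

mx, my = x.mean(), y.mean()
sx, sy = x.std(), y.std()
rho = np.mean((x - mx) * (y - my)) / (sx * sy)

a = rho * sy / sx                 # slope of the linear MMSE estimator
b = my - a * mx                   # intercept
y_hat = a * x + b

mse = np.mean((y - y_hat) ** 2)
print("a, b            :", a, b)                       # roughly 0.7 and 3.0
print("MSE             :", mse)
print("sy^2 (1 - rho^2):", sy**2 * (1 - rho**2))       # matches the MSE
# orthogonality: the estimation error is uncorrelated with the data X
print("E[(y - y_hat) x]:", np.mean((y - y_hat) * x))   # close to 0
```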

Important Deductions


This expression bounds the error attainable in the estimate of Y using a linear estimator.

Bias and Consistency

The bias of an estimate θ^ of a parameter θ is the difference B = θ − E{θ^}. If the bias is zero, then the expected value of the estimate is equal to the true value and the estimate is said to be unbiased. For consistency, the estimate must converge to the true value as the number of observations grows, i.e. the mean square error of the estimate must go to zero as n → ∞.

Example 1 (Harmonic Process): Consider the random process x(n) = A sin(nω0 + φ), where the amplitude A and frequency ω0 are fixed and the phase φ is a random variable uniformly distributed over [−π, π].


Here the mean is constant and the autocorrelation r(k, l) depends only on the difference of the time indices k and l. Hence we see that a harmonic process is a wide-sense stationary process.

Special Note (Matrix Formulation of the Autocorrelation)

If x = [x(0), x(1), ..., x(p)]^T is a vector of p + 1 values of the process x(n), then we can form the outer product x x^H (where ^H denotes the Hermitian, i.e. conjugate, transpose). Hence the autocorrelation matrix can be defined as

R_x = E{ x x^H },

whose (k, l) entry is r_x(k − l) for a WSS process. In a similar way we can define the covariance matrix

C_x = E{ (x − m_x)(x − m_x)^H }.


The autocorrelation matrix of a WSS process x(n) is a Hermitian Toeplitz matrix. It is non-negative (positive semi-) definite, i.e. R_x ≥ 0, and therefore the eigenvalues of the autocorrelation matrix of a WSS process are always real-valued and non-negative.

Example 2: Refer to Example 1. From the autocorrelation found there, the autocorrelation matrix of the harmonic process can be formed.
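The Toeplitz structure and the non-negative eigenvalues can be seen numerically; the following sketch is not the notes' Example 2, and the autocorrelation sequence r_x(k) = 0.8^|k| is an assumed stand-in:

```python
import numpy as np
from scipy.linalg import toeplitz

p = 4
r = 0.8 ** np.arange(p + 1)        # assumed autocorrelation values r_x(0), ..., r_x(p)
Rx = toeplitz(r)                   # Hermitian (here real symmetric) Toeplitz matrix

eigvals = np.linalg.eigvalsh(Rx)
print(Rx)
print("eigenvalues:", eigvals)     # all real and >= 0 for a valid WSS process
```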

Power Spectrum

The power spectrum of a WSS sequence is given by the discrete-time Fourier transform of its autocorrelation sequence, i.e.

P_x(e^{jω}) = Σ_k r_x(k) e^{−jkω}.
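Numerically, the power spectrum can be evaluated by summing this transform directly. The sketch below is not from the notes; the sequence r_x(k) = 0.8^|k| and its closed-form spectrum are assumed for illustration:

```python
import numpy as np

a = 0.8
k = np.arange(-50, 51)
r_x = a ** np.abs(k)                               # assumed autocorrelation sequence
w = np.linspace(-np.pi, np.pi, 512)

# P_x(e^{jw}) = sum_k r_x(k) exp(-j k w), truncated to |k| <= 50
P_x = np.real(r_x @ np.exp(-1j * np.outer(k, w)))
print("min of P_x:", P_x.min())                    # the power spectrum is real and non-negative

# closed form for this r_x: (1 - a^2) / |1 - a e^{-jw}|^2
P_closed = (1 - a**2) / np.abs(1 - a * np.exp(-1j * w))**2
print("max abs error vs closed form:", np.max(np.abs(P_x - P_closed)))
```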


Example 3: Refer to Examples 1 and 2.

Example 4
Determine the autocorrelation function of the process

where ε(n) is white noise with variance σ². Since it is white noise, its autocorrelation is r_ε(k) = σ² δ(k); hence:


Simple Concept: Filtering of a Random Process

Suppose a WSS process x(n) with mean m_x and autocorrelation r_x(k) is applied to a stable linear shift-invariant filter with unit sample response h(n), so that y(n) = Σ_k h(k) x(n − k). The mean of y(n) can be written as

m_y = m_x Σ_k h(k) = m_x H(e^{j0}),

and the cross-correlation between the output and the input as

r_yx(k) = h(k) * r_x(k).

The autocorrelation of the output is obtained from a double sum over the filter taps; setting n − l = m and collecting terms by lag, it reduces to the convolution relation below.


We have the relation

r_y(k) = r_x(k) * h(k) * h*(−k).

If we define r_h(k) as the deterministic correlation function of the unit sample response h(n), then we have

r_y(k) = r_x(k) * r_h(k),

where the deterministic autocorrelation of the filter is

r_h(k) = h(k) * h*(−k) = Σ_n h(n + k) h*(n),

and the variance (output power) is

σ_y² = r_y(0) = Σ_k r_x(k) r_h*(k).

If h(n) is real, then we have r_h(k) = Σ_n h(n + k) h(n), with r_h(−k) = r_h(k).
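These relations are easy to verify by simulation: for white-noise input, r_x(k) = σ² δ(k), so r_y(k) = σ² r_h(k) and σ_y² = σ² Σ_n h(n)². A minimal sketch (not from the notes; the filter taps are an arbitrary assumption):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
sigma2 = 1.5
h = np.array([1.0, 0.5, 0.25])                        # assumed FIR unit sample response
x = rng.normal(0.0, np.sqrt(sigma2), size=500_000)    # white-noise input
y = lfilter(h, [1.0], x)

# output variance: sigma^2 * sum h(n)^2  (= sigma^2 * r_h(0))
print("var theory  :", sigma2 * np.sum(h**2))
print("var measured:", y.var())

# r_y(1) should equal sigma^2 * r_h(1) = sigma^2 * sum_n h(n+1) h(n)
r_h1 = np.sum(h[1:] * h[:-1])
print("r_y(1) theory  :", sigma2 * r_h1)
print("r_y(1) measured:", np.mean(y[:-1] * y[1:]))
```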

Example 5: Consider a linear shift-invariant system whose system function is as below.


Note

Auto Regressive Moving Average Process (Random Process Formulations )


Any regular WSS process can be realized as the output of a causal and stable filter H(z) driven by white noise of variance σ². This is sometimes known as the innovations representation of the WSS process.

Block diagram: white noise input, filtered by Q(z), produces x(n).

The inverse of Q(z) is known as the whitening filter. Q(z), or H(z), can be written as

Assuming a(0) = b(0) = 1, the above equation can be expanded as a ratio of polynomials in z^-1, i.e.

H(z) = (1 + b(1) z^-1 + b(2) z^-2 + ... + b(q) z^-q) / (1 + a(1) z^-1 + ... + a(p) z^-p).

This formulation is a representation of an ARMA(p, q) process.

When q = 0, the process is generated by filtering white noise with an all-pole filter,

H(z) = 1 / (1 + a(1) z^-1 + ... + a(p) z^-p),

and it is known as an autoregressive, AR(p), process of order p.

When we have p = 0, we have a moving average, MA(q), process generated by the all-zero filter

H(z) = 1 + b(1) z^-1 + ... + b(q) z^-q.
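The innovations representation suggests a direct way to synthesize an ARMA(p, q) sequence: filter white noise with H(z) = B(z)/A(z). A minimal sketch (not from the notes; the coefficient values are arbitrary, chosen to give a stable filter):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
sigma = 1.0
b = [1.0, 0.4]                   # numerator 1 + b(1) z^-1                 (MA part, q = 1)
a = [1.0, -0.6, 0.2]             # denominator 1 + a(1) z^-1 + a(2) z^-2   (AR part, p = 2)

e = rng.normal(0.0, sigma, size=10_000)   # white-noise (innovations) input
x = lfilter(b, a, e)                      # ARMA(2, 1) process x(n)
print(x[:5])
```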


How do we find the coefficients a(k) and b(k)? Let us say we have to model a signal x(n) having p poles and q zeros. It is always possible to find filter coefficients such that h(n) = x(n) for p + q + 1 values of n, since the model has exactly p + q + 1 free coefficients and h(n) is related to them through a linear convolution. For simplicity, assume the signal is causal, i.e. x(n) = 0 for n < 0.

Hence we get

x(n) + Σ_{l=1..p} a(l) x(n − l) = b(n)   for n = 0, 1, ..., q,
x(n) + Σ_{l=1..p} a(l) x(n − l) = 0      for n = q + 1, ..., q + p.

In matrix form we can write this as a set of p + q + 1 linear equations in the unknown coefficients a(1), ..., a(p) and b(0), ..., b(q). The two-step process to solve this is as under: first, solve the last p equations (those with zero right-hand side) for the denominator coefficients a(k); then substitute these into the first q + 1 equations to obtain the numerator coefficients b(k) directly.
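A sketch of this two-step solution in NumPy follows (not from the notes; the function name pade_model and the test signal are hypothetical, introduced only for illustration):

```python
import numpy as np

def pade_model(x, p, q):
    """Match h(n) = x(n) for n = 0, ..., p+q with H(z) = B(z)/A(z), a(0) = 1.
    Illustrative sketch: assumes x is causal and the p x p data matrix is nonsingular."""
    x = np.asarray(x, dtype=float)
    # data matrix with entries X[n, l] = x(n - l), taking x(n) = 0 for n < 0
    X = np.zeros((p + q + 1, p + 1))
    for n in range(p + q + 1):
        for l in range(p + 1):
            if n - l >= 0:
                X[n, l] = x[n - l]
    # step 1: last p equations (zero right-hand side) give the a(k)
    Xq = X[q + 1:, :]
    a_tail = np.linalg.solve(Xq[:, 1:], -Xq[:, 0])
    a = np.concatenate(([1.0], a_tail))
    # step 2: first q+1 equations give the b(k) directly
    b = X[:q + 1, :] @ a
    return a, b

# usage: match the first p+q+1 = 4 samples of an assumed signal exactly
x = np.array([1.0, 1.5, 0.75, 0.375, 0.1875])
a, b = pade_model(x, p=2, q=1)
print("a:", a, "b:", b)
```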


Example 6

1. To find a second-order all-pole model (p = 2, q = 0), the equations to be solved are

in short form, which in the present case is

hence a(1) = −1.50 and a(2) = +1.50. Since a(0) and b(0) are known, we have

Case 2


Case 3

hence

Having found a(1),

A moving average process of order k is a process X_t that may be described by the equation


X_t = b_0 ε_t + b_1 ε_{t−1} + ... + b_k ε_{t−k},   t ≥ k.


The coefficients b_0, b_1, ..., b_k are real, and ε_t, t ∈ Z, is white noise with mean μ and standard deviation σ. The value X_t of the process at time t is a weighted sum of the k + 1 most recent values of the white noise process ε_t. The notation for this process is MA(k), where the acronym MA stands for moving average. Without loss of generality it may be assumed that b_0 is scaled so that b_0 = 1. By taking the expectation of both sides of the above equation we obtain for the mean value m(t) = E(X_t):

m(t) = (b_0 + b_1 + ... + b_k) μ,   t = k, k + 1, ....

Obviously m(t) = m is constant. Define the centered processes X̃_t = X_t − m and ε̃_t = ε_t − μ. By subtracting the above equations we get


X̃_t = b_0 ε̃_t + b_1 ε̃_{t−1} + ... + b_k ε̃_{t−k},   t = k, k + 1, ....

Squaring both sides of this equality yields:

Taking the expectation of both sides and using the fact that ε̃_t is white noise, we find that

var X_t = (b_0² + b_1² + ... + b_k²) σ².

It is inferred from this that the variance var X_t does not depend on t. Similarly, by taking the expectation of both sides of the product X̃_t X̃_s, for t > s we have

R(t, s) = E(X̃_t X̃_s) = σ² Σ_{i=0..k−(t−s)} b_i b_{i+(t−s)}   for 0 ≤ t − s ≤ k,   and   R(t, s) = 0   for t − s > k.

The above equation shows that R(t, s) depends only on the difference t − s of its arguments. The process X_t hence is wide-sense stationary, with covariance function R(τ) = σ² Σ_{i=0..k−|τ|} b_i b_{i+|τ|} for |τ| ≤ k and zero otherwise.
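These MA(k) moment formulas can be checked by simulation (a sketch, not from the notes; the coefficients and the noise standard deviation are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
b = np.array([1.0, 0.5, -0.3])         # b0, b1, b2  ->  MA(2)
mu, sigma = 0.0, 1.2                   # white-noise mean and standard deviation
n = 400_000

eps = rng.normal(mu, sigma, size=n)
x = np.convolve(eps, b, mode="valid")  # X_t = b0 e_t + b1 e_{t-1} + b2 e_{t-2}

print("var theory  :", sigma**2 * np.sum(b**2))
print("var measured:", x.var())
tau = 1
print("R(1) theory  :", sigma**2 * np.sum(b[:-tau] * b[tau:]))
print("R(1) measured:", np.mean((x[:-tau] - x.mean()) * (x[tau:] - x.mean())))
```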

Example 7

Autoregressive Process: A random process {X_t, t ∈ Z} is called autoregressive of order p ≥ 1 (AR(p)) if

X_t = a_1 X_{t−1} + a_2 X_{t−2} + ... + a_p X_{t−p} + ε_t,

where ε_t is white noise.


Case 1: Consider {X_t, t ∈ Z} an AR(1) process, i.e.

X_t = a X_{t−1} + ε_t,   with |a| < 1.


Let ε_t be zero-mean white noise with variance σ². If X_t is a (stationary) AR(1) process, then

E[X_t] = 0,

and the autocovariance is given as under:

R(k) = E[X_t X_{t+k}] = σ² a^{|k|} / (1 − a²),

which is independent of t; hence the AR(1) process is wide-sense stationary.
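The AR(1) result can be verified numerically (a sketch, not from the notes; the value a = 0.7 and unit noise variance are assumed):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)
a, sigma2 = 0.7, 1.0
n = 500_000

eps = rng.normal(0.0, np.sqrt(sigma2), size=n)
x = lfilter([1.0], [1.0, -a], eps)           # X_t = a X_{t-1} + eps_t

for k in (0, 1, 3):
    theory = sigma2 * a**k / (1 - a**2)
    measured = np.mean(x[:n - k] * x[k:]) if k > 0 else np.mean(x * x)
    print(f"R({k}): theory {theory:.3f}  measured {measured:.3f}")
```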

An AR(p) process can also be written compactly using the lag operator L as

(1 − a_1 L − a_2 L² − ... − a_p L^p) X_t = ε_t,

where L X_t = X_{t−1}.

Some Important Results (FIR type)


-------------------------------------------------------------------------------------------------

Advanced Problem Solutions


Example 2 (Linear Prediction in Noise)


Example (Multistep Prediction): In multistep prediction, x(n + α) is predicted in terms of a linear combination of p values x(n), x(n − 1), ..., x(n − p + 1).

By substituting k = 0, 1, 2, 3, 4, 5, 6, 7.


------------------------------------------------------------------------------------------------------------------------------------------

Causal Wiener Filter (IIR) (ref. Part 2)



References:
Statistical Signal Processing, S. M. Kay.
Statistical Signal Processing and Prediction, M. H. Hayes.
Open sources on the Web.
