
Estimation and Detection (ET 4386)

Exercises
Outline

- The Bayesian philosophy
- General Bayesian estimators
- Bayesian estimators
  - Exercise 1: the minimum mean square error (MMSE) and maximum a posteriori (MAP) estimators
  - Exercise 2: different kinds of Bayesian estimators
  - Exercise 3: nonlinear Bayesian estimators
- The linear MMSE (LMMSE) estimator
  - Exercise 4: the LMMSE for prediction
  - Exercise 5: the LMMSE for smoothing

The Bayesian philosophy

- The unknown parameter $\theta$ is viewed as a random variable, and we estimate its particular realization.
- Besides the observation $x$, we have additional information about $\theta$: its prior pdf $p(\theta)$.
- We look for the estimate $\hat{\theta}$ of $\theta$ which minimizes the Bayesian mean square error:

\[ \hat{\theta} = \arg\min_{\hat{\theta}} E\big[(\theta - \hat{\theta})^2\big] = \arg\min_{\hat{\theta}} \iint (\theta - \hat{\theta})^2\, p(x, \theta)\, dx\, d\theta \]

- As a result we obtain the minimum mean square error (MMSE) estimator

\[ \hat{\theta} = E(\theta \mid x) = \int \theta\, p(\theta \mid x)\, d\theta \]
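As a quick numerical illustration (not part of the original slides), the sketch below checks the MMSE formula above on a toy Gaussian model, where $E(\theta \mid x)$ is also known in closed form. The model and all numerical values are assumptions chosen only for the demonstration.

```python
import numpy as np

# Toy model (assumed for illustration): theta ~ N(mu, s_t^2),
# x = theta + w with w ~ N(0, s_w^2).
mu, s_t, s_w, x = 1.0, 2.0, 1.0, 3.0

theta = np.linspace(mu - 10 * s_t, mu + 10 * s_t, 40001)
prior = np.exp(-0.5 * ((theta - mu) / s_t) ** 2)
like = np.exp(-0.5 * ((x - theta) / s_w) ** 2)
post = prior * like
post /= np.trapz(post, theta)                 # p(theta|x) via Bayes' rule

mmse = np.trapz(theta * post, theta)          # E(theta|x), numerically
closed = mu + s_t**2 / (s_t**2 + s_w**2) * (x - mu)  # known Gaussian result
print(mmse, closed)                           # the two should agree closely
```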

General Bayesian estimators

We look for the estimate $\hat{\theta}$ of $\theta$ which minimizes the Bayes risk

\[ \hat{\theta} = \arg\min_{\hat{\theta}} E\big[C(\epsilon)\big] = \arg\min_{\hat{\theta}} \int C(\theta - \hat{\theta})\, p(\theta \mid x)\, d\theta \]

where the cost function $C(\epsilon)$, with $\epsilon = \theta - \hat{\theta}$, can take many different forms.
The hit-or-miss cost function

\[ C(\epsilon) = \begin{cases} 0, & |\epsilon| \le \delta \\ 1, & |\epsilon| > \delta \end{cases} \]

leads, for $\delta$ arbitrarily small, to the maximum a posteriori (MAP) estimator

\[ \hat{\theta} = \arg\max_{\theta} p(\theta \mid x) \]
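The sketch below (an illustrative addition, with an arbitrarily chosen Gamma-shaped posterior) contrasts the estimators induced by three classical cost functions: the quadratic cost yields the posterior mean, the absolute-error cost the posterior median, and the hit-or-miss cost the posterior mode.

```python
import numpy as np

# Hypothetical skewed posterior p(theta|x): a Gamma(3,1) shape, for illustration.
theta = np.linspace(0.0, 20.0, 20001)
post = theta**2 * np.exp(-theta)            # unnormalized Gamma(3,1) density
post /= np.trapz(post, theta)               # normalize on the grid

mean = np.trapz(theta * post, theta)        # quadratic cost  -> posterior mean (MMSE)
cdf = np.cumsum(post) * (theta[1] - theta[0])
median = theta[np.searchsorted(cdf, 0.5)]   # absolute cost   -> posterior median
mode = theta[np.argmax(post)]               # hit-or-miss     -> posterior mode (MAP)

print(f"mean={mean:.3f}, median={median:.3f}, mode={mode:.3f}")
# For Gamma(3,1): mean = 3, median ~ 2.674, mode = 2, so the three differ.
```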

Exercise 1
The MMSE and MAP estimators

For the posterior pdf

\[ p(\theta \mid x) = \frac{1}{2} \frac{1}{\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}(\theta - x)^2\right] + \frac{1}{2} \frac{1}{\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}(\theta + x)^2\right] \]

plot the pdf for $x = \frac{1}{2}$ and $x = \frac{3}{4}$. Next, find the MMSE and MAP estimators of $\theta$ for the same values of $x$.
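A minimal numerical starting point for this exercise (it replaces neither the plot nor the analytic derivation asked for): evaluate the posterior on a grid and read off its mean and mode. The grid limits are arbitrary choices.

```python
import numpy as np

def posterior(theta, x):
    """Gaussian-mixture posterior from Exercise 1 (unit variance)."""
    g = lambda m: np.exp(-0.5 * (theta - m) ** 2) / np.sqrt(2 * np.pi)
    return 0.5 * g(x) + 0.5 * g(-x)

theta = np.linspace(-6, 6, 12001)
for x in (0.5, 0.75):
    p = posterior(theta, x)
    mmse = np.trapz(theta * p, theta)   # posterior mean
    map_ = theta[np.argmax(p)]          # posterior mode
    print(f"x={x}: MMSE~{mmse:+.4f}, MAP~{map_:+.4f}")
```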

Exercise 2
Different kinds of Bayesian estimators

Let the observation $x$ have the conditionally uniform density

\[ p(x \mid \theta) = \begin{cases} \frac{1}{\theta}, & 0 < x \le \theta \\ 0, & \text{otherwise,} \end{cases} \]

where $\theta$ is a random variable with density

\[ p(\theta) = \begin{cases} \theta \exp(-\theta), & \theta \ge 0 \\ 0, & \text{otherwise.} \end{cases} \]

A useful formula: for $\lambda \ge 0$,

\[ \int_{\lambda}^{\infty} u \exp(-u)\, du = (\lambda + 1) \exp(-\lambda). \]

(a) Find the MAP estimator of $\theta$.
(b) Find the MMSE estimator of $\theta$.
(c) Find the minimum mean absolute error estimator of $\theta$.
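As a sanity check for parts (a) to (c), one can evaluate the posterior numerically for a hypothetical observation and locate its mode, mean, and median on a grid; x_obs and the grid length below are arbitrary choices.

```python
import numpy as np

x_obs = 0.7                                       # hypothetical observed value
theta = np.linspace(x_obs, x_obs + 30.0, 30001)   # posterior support: theta >= x

like = np.where(theta >= x_obs, 1.0 / theta, 0.0)  # p(x_obs | theta)
prior = theta * np.exp(-theta)                     # p(theta)
post = like * prior                                # unnormalized p(theta | x_obs)
post /= np.trapz(post, theta)

map_ = theta[np.argmax(post)]                      # (a) posterior mode
mmse = np.trapz(theta * post, theta)               # (b) posterior mean
cdf = np.cumsum(post) * (theta[1] - theta[0])
med = theta[np.searchsorted(cdf, 0.5)]             # (c) posterior median
print(f"MAP~{map_:.3f}, MMSE~{mmse:.3f}, median~{med:.3f}")
```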

Exercise 3
Nonlinear Bayesian estimators

Consider the quadratic estimator

\[ \hat{\theta} = a x^2[0] + b x[0] + c \]

for a scalar parameter $\theta$ based on the single data sample $x[0]$. Find the coefficients $a$, $b$, $c$ that minimize the Bayesian MSE.
If $x[0]$ is uniformly distributed in the range $[-\frac{1}{2}, \frac{1}{2}]$ and $\theta = \cos(2\pi x[0])$, find the LMMSE estimator and the quadratic MMSE estimator. Compare the minimum MSEs.
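Since the Bayesian MSE is quadratic in $(a, b, c)$, the optimal coefficients solve a linear least-squares problem. The Monte Carlo sketch below (sample size and seed are arbitrary) estimates them empirically and compares the resulting MSE with that of a purely linear fit.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 200_000)      # x[0] ~ U[-1/2, 1/2]
theta = np.cos(2 * np.pi * x)            # parameter tied deterministically to x[0]

# Quadratic estimator: regress theta on [x^2, x, 1]; mean residual^2 ~ Bayesian MSE.
A2 = np.column_stack([x**2, x, np.ones_like(x)])
coef2, *_ = np.linalg.lstsq(A2, theta, rcond=None)
mse2 = np.mean((theta - A2 @ coef2) ** 2)

# LMMSE: regress theta on [x, 1] only.
A1 = np.column_stack([x, np.ones_like(x)])
coef1, *_ = np.linalg.lstsq(A1, theta, rcond=None)
mse1 = np.mean((theta - A1 @ coef1) ** 2)

print("quadratic (a,b,c):", np.round(coef2, 3), " MSE:", round(mse2, 4))
print("linear coefficients:", np.round(coef1, 3), " MSE:", round(mse1, 4))
```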

Linear minimum mean square error (LMMSE) estimator

The LMMSE estimator is linearly related to the observations as

\[ \hat{\theta} = a_N + \sum_{n=0}^{N-1} a_n x[n] \]

where $x$ is a random vector with mean $E(x)$ and covariance matrix $C_{xx}$, and $\theta$ is a random vector with mean $E(\theta)$ and covariance matrix $C_{\theta\theta}$.

Designing the coefficients $\{a_n\}_n$ to minimize the Bayesian MSE, the LMMSE estimator is given by

\[ \hat{\theta} = E(\theta) + C_{\theta x} C_{xx}^{-1} \big(x - E(x)\big) \]
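A minimal sketch of this formula for a scalar $\theta$ and two observations, with sample moments standing in for the true ones (the toy model itself is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical jointly distributed (theta, x), chosen only to exercise the formula.
theta = rng.normal(1.0, 1.0, 100_000)
x = np.column_stack([theta + rng.normal(0, 0.5, theta.size),
                     2 * theta + rng.normal(0, 1.0, theta.size)])

Ex, Etheta = x.mean(axis=0), theta.mean()
Cxx = np.cov(x, rowvar=False)                          # covariance of x
Ctx = np.array([np.cov(theta, x[:, j])[0, 1] for j in range(2)])

# LMMSE: theta_hat = E(theta) + C_tx C_xx^{-1} (x - E(x))
w = np.linalg.solve(Cxx, Ctx)                          # avoids an explicit inverse
theta_hat = Etheta + (x - Ex) @ w
print("empirical MSE:", np.mean((theta - theta_hat) ** 2))
```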

Linear minimum mean square error (LMMSE) estimator

The linear system model is given by

\[ x = H\theta + w \]

where $w$ is a random vector with zero mean and covariance matrix $C_{ww}$, and $\theta$ is a random vector with mean $E(\theta)$ and covariance matrix $C_{\theta\theta}$; $\theta$ and $w$ are uncorrelated.

The LMMSE estimator in closed form is given by

\[ \begin{aligned} \hat{\theta} &= E(\theta) + C_{\theta\theta} H^T \big(H C_{\theta\theta} H^T + C_{ww}\big)^{-1} \big(x - H E(\theta)\big) \\ &= E(\theta) + \big(H^T C_{ww}^{-1} H + C_{\theta\theta}^{-1}\big)^{-1} H^T C_{ww}^{-1} \big(x - H E(\theta)\big) \end{aligned} \]
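The two closed forms are related by the matrix inversion lemma; the sketch below checks numerically, on an arbitrary small model, that they produce the same estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical small model, only to check that both closed forms coincide.
H = rng.normal(size=(4, 2))
Ctt = np.array([[2.0, 0.3], [0.3, 1.0]])     # C_theta_theta
Cww = 0.5 * np.eye(4)
Et = np.array([1.0, -1.0])
x = rng.normal(size=4)

r = x - H @ Et
# Form 1: E(theta) + Ctt H^T (H Ctt H^T + Cww)^{-1} (x - H E(theta))
t1 = Et + Ctt @ H.T @ np.linalg.solve(H @ Ctt @ H.T + Cww, r)
# Form 2: E(theta) + (H^T Cww^{-1} H + Ctt^{-1})^{-1} H^T Cww^{-1} (x - H E(theta))
Ciw = np.linalg.inv(Cww)
t2 = Et + np.linalg.solve(H.T @ Ciw @ H + np.linalg.inv(Ctt), H.T @ Ciw @ r)
print(np.allclose(t1, t2))   # True: the two forms are algebraically identical
```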

Exercise 4
The LMMSE for prediction

Consider an AR($N$) process

\[ x[n] = -\sum_{k=1}^{N} a[k]\, x[n-k] + u[n] \]

where $u[n]$ is white noise with variance $\sigma_u^2$. Prove that the optimal one-step linear predictor of $x[n]$ is

\[ \hat{x}[n] = -\sum_{k=1}^{N} a[k]\, x[n-k] \]

Also find the minimum MSE.
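A simulation sketch for checking a candidate answer (the AR(2) coefficients, seed, and sample size are arbitrary choices): run the predictor above on synthetic data and compare the empirical prediction MSE with $\sigma_u^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([-0.5, 0.25])       # AR(2) coefficients a[1], a[2] (a stable choice)
sigma_u = 1.0
n_samp = 200_000

x = np.zeros(n_samp)
u = rng.normal(0, sigma_u, n_samp)
for n in range(2, n_samp):
    x[n] = -a[0] * x[n-1] - a[1] * x[n-2] + u[n]  # x[n] = -sum_k a[k] x[n-k] + u[n]

# One-step predictor from the exercise: x_hat[n] = -sum_k a[k] x[n-k]
x_hat = -a[0] * x[1:-1] - a[1] * x[:-2]
err = x[2:] - x_hat
print("empirical prediction MSE:", err.var(), " vs sigma_u^2 =", sigma_u**2)
```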

Exercise 5
The LMMSE for smoothing

We observe the data $x[n] = s[n] + w[n]$ for $n = 0, 1, \ldots, N-1$, where $s[n]$ and $w[n]$ are zero mean WSS random processes which are uncorrelated with each other. The ACFs are

\[ r_{ss}[k] = \sigma_s^2\, \delta[k] \]
\[ r_{ww}[k] = \sigma_w^2\, \delta[k] \]

Determine the LMMSE estimator of $s = [s[0], s[1], \ldots, s[N-1]]^T$ based on the observation $x = [x[0], x[1], \ldots, x[N-1]]^T$. Determine also the corresponding minimum MSE matrix.
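A numerical sketch under the stated white ACFs, applying the linear-model LMMSE formula with $H = I$ and zero means; the expression used for the minimum MSE matrix is the standard Bayesian MSE matrix of the linear model, which these slides do not spell out.

```python
import numpy as np

rng = np.random.default_rng(4)
N, sig_s, sig_w = 8, 1.0, 0.5
s = rng.normal(0, sig_s, N)                 # white signal: r_ss[k] = sig_s^2 d[k]
x = s + rng.normal(0, sig_w, N)             # white noise:  r_ww[k] = sig_w^2 d[k]

Css = sig_s**2 * np.eye(N)
Cww = sig_w**2 * np.eye(N)
# Vector LMMSE from the linear-model formula with H = I and E(s) = 0:
s_hat = Css @ np.linalg.solve(Css + Cww, x)
# Standard Bayesian MSE matrix for this model (assumed form, not from the slides):
M = Css - Css @ np.linalg.solve(Css + Cww, Css)
print("per-sample gain:", sig_s**2 / (sig_s**2 + sig_w**2))
print("s_hat[0]:", s_hat[0], "  M[0,0]:", M[0, 0])
```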
