Solution
Make sure your reasoning and work are clear to receive full credit for each problem.
1. 4 points. Consider our standard coin flipping problem where you have an unknown coin, either fair (HT) or double-headed (HH), and you observe the outcome of n flips of this coin. Assume a uniform cost assignment. For notational consistency, let the state and hypothesis $x_0$ and $H_0$ be the case when the coin is HT and $x_1$ and $H_1$ be the case when the coin is HH. When n = 2, find the Neyman-Pearson decision rule and corresponding power for a false alarm probability $0 < \alpha < 1$. Repeat this for n = 3 and comment on any changes.
Solution: When n = 2, let $Y$ be the number of heads observed. We can form the conditional probability matrix as
$$P = \begin{bmatrix} 0.25 & 0 \\ 0.5 & 0 \\ 0.25 & 1 \end{bmatrix},$$
where the rows correspond to $y = 0, 1, 2$ and the columns to the states $x_0$ and $x_1$. The corresponding vector of likelihood ratios $L(y) = p_1(y)/p_0(y)$ is
$$L = \begin{bmatrix} 0 \\ 0 \\ 4 \end{bmatrix}.$$
Case 1: $0 < \alpha < 0.25$, $v = 4$:
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > 4 \\ \gamma & \text{if } L(y) = 4 \\ 0 & \text{if } L(y) < 4 \end{cases}$$
where $\gamma = \alpha/0.25 = 4\alpha$, so the power of the test is $P_D = 4\alpha$.
Case 2: $\alpha \geq 0.25$, $v = 0$:
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > 0 \\ 0 & \text{if } L(y) \leq 0 \end{cases}$$
which has false alarm probability $0.25 \leq \alpha$ and power $P_D = 1$.
When n = 3, the same construction gives
$$P = \begin{bmatrix} 0.125 & 0 \\ 0.375 & 0 \\ 0.375 & 0 \\ 0.125 & 1 \end{bmatrix}, \qquad L = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 8 \end{bmatrix}.$$
Case 1: $0 < \alpha < 0.125$, $v = 8$:
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > 8 \\ \gamma & \text{if } L(y) = 8 \\ 0 & \text{if } L(y) < 8 \end{cases}$$
where
$$\gamma = \frac{\alpha - \sum_{y: L(y) > v} P_{y,0}}{\sum_{y: L(y) = v} P_{y,0}} = \frac{\alpha - 0}{0.125} = 8\alpha,$$
and the power of the test is $\beta = P_D = 8\alpha$. Note that the additional observation effectively doubles the power of the test with respect to the n = 2 case when $0 < \alpha < 0.125$.
Case 2: $\alpha \geq 0.125$, $v = 0$:
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > 0 \\ 0 & \text{if } L(y) \leq 0 \end{cases}$$
which has false alarm probability $0.125 \leq \alpha$ and power $P_D = 1$.
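As a quick numerical check (an illustrative sketch, not part of the graded solution; the function name is mine), the power can be computed by noting that only $y = n$ has a nonzero likelihood ratio:

```python
from scipy.stats import binom

def np_power(n, alpha):
    """Power of the NP test for fair (HT) vs. double-headed (HH) over n flips.

    Under H0 the head count is Binomial(n, 0.5); under H1 it is always n,
    so L(y) = 0 for y < n and L(n) = 2**n.
    """
    p0_top = binom.pmf(n, n, 0.5)      # P0(Y = n) = 2**(-n)
    if alpha < p0_top:
        return alpha / p0_top          # randomize on y = n: PD = gamma * P1(Y = n)
    return 1.0                         # deterministic rule already achieves PD = 1

print(np_power(2, 0.05))  # 0.2 = 4 * alpha
print(np_power(3, 0.05))  # 0.4 = 8 * alpha, double the n = 2 power
```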
Solution: Here $p_0(y) = \frac{2}{3}(y+1)$ and $p_1(y) = 1$ for $0 \leq y \leq 1$, so the likelihood ratio
$$L(y) = \frac{p_1(y)}{p_0(y)} = \frac{3}{2(y+1)}$$
is strictly decreasing in $y$. The NP test
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } \frac{3}{2(y+1)} > v \\ \gamma & \text{if } \frac{3}{2(y+1)} = v \\ 0 & \text{if } \frac{3}{2(y+1)} < v \end{cases}$$
is therefore equivalent to a threshold test on $y$ itself,
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } y < \tau \\ \gamma & \text{if } y = \tau \\ 0 & \text{if } y > \tau \end{cases}$$
where $\tau = \frac{3}{2v} - 1$.
$$P_{fp}(\delta_{NP}) = P_0(Y < \tau) = \int_0^\tau \frac{2}{3}(y+1)\,dy = \begin{cases} 0 & \text{if } \tau \leq 0 \\ \frac{\tau^2}{3} + \frac{2\tau}{3} & \text{if } 0 < \tau < 1 \\ 1 & \text{if } \tau \geq 1 \end{cases}$$
Setting
$$\frac{\tau^2}{3} + \frac{2\tau}{3} = \alpha$$
and taking the root in $(0,1)$ gives $\tau = \sqrt{1 + 3\alpha} - 1$. Hence, the $\alpha$-level Neyman-Pearson test is
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } y \leq \sqrt{1+3\alpha} - 1 \\ 0 & \text{if } y > \sqrt{1+3\alpha} - 1 \end{cases}$$
(the randomization is immaterial since $Y$ is continuous), and the power is
$$\beta = P_D(\delta_{NP}) = \int_0^\tau p_1(y)\,dy = \tau = \sqrt{1+3\alpha} - 1, \qquad 0 < \alpha < 1.$$
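The closed form for $\tau$ is easy to verify numerically (a minimal sketch; the choice $\alpha = 0.1$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

p0 = lambda y: 2.0 / 3.0 * (y + 1.0)   # density under H0 on [0, 1]
p1 = lambda y: 1.0                     # density under H1 on [0, 1]

alpha = 0.1
tau = np.sqrt(1.0 + 3.0 * alpha) - 1.0

pf, _ = quad(p0, 0.0, tau)             # false alarm of the region {y < tau}
pd, _ = quad(p1, 0.0, tau)             # power of the region {y < tau}
print(pf, alpha)                       # both 0.1
print(pd, tau)                         # both sqrt(1.3) - 1, about 0.1402
```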
Solution: The two hypotheses are Cauchy densities centered at $-s$ under $H_0$ and at $+s$ under $H_1$, so the likelihood ratio is
$$L(y) = \frac{1 + (y+s)^2}{1 + (y-s)^2}.$$
(Plot of $L(y)$ versus $y$ omitted.)
Case 1: $v > 1$: solving the quadratic inequality $L(y) > v$ gives the interval
$$\Gamma_1 = \left\{ y \;\middle|\; \frac{s(v+1) - \sqrt{4s^2 v - (v-1)^2}}{v-1} < y < \frac{s(v+1) + \sqrt{4s^2 v - (v-1)^2}}{v-1} \right\}.$$
Case 2: $v = 1$:
$$\Gamma_1 = \{ y \mid 0 < y < \infty \}.$$
Case 3: $0 < v < 1$: the quadratic inequality flips, and $\Gamma_1$ is the complement of an interval of the same form as in Case 1.
If $p \triangleq P_0(Y > 0) > \alpha$, then $v > 1$. If $p = \alpha$, then $v = 1$. Otherwise, $0 < v < 1$. If you are in Case 1 or Case 3, you will need to numerically determine $v$ by integrating $p_0(y)$ over the appropriate critical region $\Gamma_1$ such that
$$\int_{\Gamma_1} p_0(y)\,dy = \alpha.$$
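For Case 1, that numerical search might look as follows (a sketch only; $s = 1$, $\alpha = 0.1$, and the root-finding bracket are my choices; here $p = 0.25 > \alpha$, so Case 1 applies). Since $P_0(\Gamma_1)$ shrinks as $v$ grows past 1, a scalar root finder suffices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

s, alpha = 1.0, 0.1
p0 = lambda y: 1.0 / (np.pi * (1.0 + (y + s) ** 2))   # Cauchy centered at -s

def false_alarm(v):
    """P0(Gamma_1) in Case 1 (v > 1): Gamma_1 is the interval between the roots."""
    disc = 4.0 * s * s * v - (v - 1.0) ** 2
    if disc <= 0.0:
        return 0.0                                     # critical region is empty
    lo = (s * (v + 1.0) - np.sqrt(disc)) / (v - 1.0)
    hi = (s * (v + 1.0) + np.sqrt(disc)) / (v - 1.0)
    return quad(p0, lo, hi)[0]

v = brentq(lambda t: false_alarm(t) - alpha, 1.0 + 1e-9, 50.0)
print(v, false_alarm(v))                               # false_alarm(v) matches alpha
```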
(a) With $n$ i.i.d. observations distributed as $\mathcal{N}(\mu_0, \sigma_0^2)$ under $H_0$ and $\mathcal{N}(\mu_1, \sigma_1^2)$ under $H_1$, the likelihood ratio is
$$L(y) = \frac{\prod_{k=1}^n \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{-(y_k - \mu_1)^2/2\sigma_1^2}}{\prod_{k=1}^n \frac{1}{\sqrt{2\pi}\,\sigma_0} e^{-(y_k - \mu_0)^2/2\sigma_0^2}} = \left(\frac{\sigma_0}{\sigma_1}\right)^{\!n} \exp\left\{\frac{n}{2}\left(\frac{\mu_0^2}{\sigma_0^2} - \frac{\mu_1^2}{\sigma_1^2}\right)\right\} \exp\left\{\frac{1}{2}\left(\frac{1}{\sigma_0^2} - \frac{1}{\sigma_1^2}\right)\sum_{k=1}^n y_k^2\right\} \exp\left\{\left(\frac{\mu_1}{\sigma_1^2} - \frac{\mu_0}{\sigma_0^2}\right)\sum_{k=1}^n y_k\right\}.$$
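The factored form can be sanity-checked against the direct ratio of densities (a throwaway sketch; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, s0, mu1, s1 = 0.3, 1.0, 1.2, 2.0
y = rng.normal(size=5)
n = y.size

gauss = lambda y, mu, s: np.exp(-(y - mu) ** 2 / (2 * s * s)) / (np.sqrt(2 * np.pi) * s)
direct = np.prod(gauss(y, mu1, s1)) / np.prod(gauss(y, mu0, s0))

factored = (s0 / s1) ** n \
    * np.exp(0.5 * n * (mu0 ** 2 / s0 ** 2 - mu1 ** 2 / s1 ** 2)) \
    * np.exp(0.5 * (1 / s0 ** 2 - 1 / s1 ** 2) * np.sum(y ** 2)) \
    * np.exp((mu1 / s1 ** 2 - mu0 / s0 ** 2) * np.sum(y))

print(np.isclose(direct, factored))    # True
```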
(b) If $\mu_1 = \mu_0$ and $\sigma_1^2 > \sigma_0^2$, then we can simplify the comparison $L(y) > v$. Skipping the algebraic details, the Neyman-Pearson test in this case can be written as
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } \sum_{k=1}^n (y_k - \mu_0)^2 > \tau \\ 0/1 & \text{if } \sum_{k=1}^n (y_k - \mu_0)^2 = \tau \\ 0 & \text{if } \sum_{k=1}^n (y_k - \mu_0)^2 < \tau \end{cases}$$
where $\tau$ is the appropriate threshold selected to satisfy the false positive probability constraint.
Alternatively, if $\mu_1 > \mu_0$ and $\sigma_1^2 = \sigma_0^2$, then the NP test can be written in the form
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } \sum_{k=1}^n y_k > \tau \\ 0/1 & \text{if } \sum_{k=1}^n y_k = \tau \\ 0 & \text{if } \sum_{k=1}^n y_k < \tau \end{cases}$$
where $\tau$ is the appropriate threshold selected to satisfy the false positive probability constraint. Note that, in the first case, the test statistic is quadratic in the observations, and in the second case it is linear.
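Although the solution leaves $\tau$ implicit, both thresholds have closed forms: under $H_0$, $\sum_k (Y_k - \mu_0)^2/\sigma_0^2$ is chi-squared with $n$ degrees of freedom, and $\sum_k Y_k$ is $\mathcal{N}(n\mu_0, n\sigma_0^2)$. A sketch (parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import chi2, norm

n, alpha = 10, 0.05
mu0, s0 = 0.0, 1.0

# Quadratic statistic: choose tau so that P0(sum (Yk - mu0)^2 > tau) = alpha
tau_quad = s0 ** 2 * chi2.ppf(1.0 - alpha, df=n)

# Linear statistic: choose tau so that P0(sum Yk > tau) = alpha
tau_lin = n * mu0 + np.sqrt(n) * s0 * norm.ppf(1.0 - alpha)

print(tau_quad, tau_lin)
```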
(c) For $n = 1$, $\mu_1 = \mu_0 = \mu$, and $\sigma_1^2 > \sigma_0^2$, the NP test is
$$\delta_{NP}(y) = \begin{cases} 1 & \text{if } (y - \mu)^2 > \tau \\ 0/1 & \text{if } (y - \mu)^2 = \tau \\ 0 & \text{if } (y - \mu)^2 < \tau \end{cases}$$
with false alarm probability
$$P_F(\delta_{NP}) = \mathrm{Prob}\left[(Y_1 - \mu)^2 > \tau \mid x_0\right] = 2Q\!\left(\frac{\sqrt{\tau}}{\sigma_0}\right),$$
where $Q(x)$ is the usual Q-function. Thus, for a test with significance level $\alpha$ we have to solve $2Q(\sqrt{\tau}/\sigma_0) = \alpha$, which can be done in Matlab numerically or, even better, with the erfcinv function. The detection probability is
$$P_D(\delta_{NP}) = \mathrm{Prob}\left[(Y_1 - \mu)^2 > \tau \mid x_1\right] = 2Q\!\left(\frac{\sqrt{\tau}}{\sigma_1}\right).$$
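The erfcinv route carries over to Python via scipy (a sketch; the parameter values are arbitrary). Using $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$, the constraint $2Q(\sqrt{\tau}/\sigma_0) = \alpha$ gives $\sqrt{\tau} = \sigma_0\sqrt{2}\,\mathrm{erfcinv}(\alpha)$:

```python
import numpy as np
from scipy.special import erfc, erfcinv

alpha, s0, s1 = 0.05, 1.0, 2.0

tau = (s0 * np.sqrt(2.0) * erfcinv(alpha)) ** 2     # solves 2*Q(sqrt(tau)/s0) = alpha
pd = erfc(np.sqrt(tau) / (s1 * np.sqrt(2.0)))       # PD = 2*Q(sqrt(tau)/s1)
print(tau, pd)
```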
5. 4 points. Poor textbook Chapter III, Problem 3. Also, try part (a) for the case when the noise is distributed as $\mathcal{N}(0, \Sigma)$, where $\Sigma \in \mathbb{R}^{n \times n}$ is the covariance matrix of the noise.
Solution:
(a) Since this is an M-ary hypothesis testing problem with equiprobable priors, and we wish to minimize error probability, we will have the critical regions
$$\Gamma_k = \left\{ y \in \mathbb{R}^n \;\middle|\; p_k(y) = \max_{m \in \{0,\dots,M-1\}} p_m(y) \right\}.$$
Since $p_m(y)$ is the density $\mathcal{N}(s_m, \sigma^2 I)$, this critical region can be reduced to
$$\Gamma_k = \left\{ y \in \mathbb{R}^n \;\middle|\; \|y - s_k\|^2 = \min_{m \in \{0,\dots,M-1\}} \|y - s_m\|^2 \right\} = \left\{ y \in \mathbb{R}^n \;\middle|\; s_k^T y = \max_{m \in \{0,\dots,M-1\}} s_m^T y \right\},$$
where the second equality uses the fact that the signal vectors all have the same energy.
Intuitively, the detector here is just computing the deterministic correlation between
each signal vector sm and the observation vector y and selecting the one that is largest.
The minimum error probability decision rule is simply
$$\hat{m}(y) = \arg\max_{m \in \{0,\dots,M-1\}} s_m^T y.$$
If we have noise that is distributed as $\mathcal{N}(0, \Sigma)$, we can use the decorrelation trick discussed in Lecture 4. We factor $\Sigma = S S^T$ and do a coordinate transformation on the observation and the signal vectors so that the noise in the transformed coordinate space is white. Then all of the above analysis applies directly.
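A compact sketch of this detector (illustrative code, not from the lecture; the whitening step uses a Cholesky factor for $\Sigma = S S^T$, and the nearest-signal rule is used so that the code stays correct even if the whitened signals no longer have equal energy):

```python
import numpy as np

def detect(y, signals, Sigma=None):
    """Minimum-error detector for equiprobable signals in Gaussian noise.

    If Sigma is given, whiten with its Cholesky factor first; then pick
    the signal closest to the (whitened) observation.  With equal-energy
    signals this is exactly the argmax-correlation rule derived above.
    """
    if Sigma is not None:
        C = np.linalg.cholesky(Sigma)                  # Sigma = C C^T
        y = np.linalg.solve(C, y)                      # C^{-1} y has white noise
        signals = [np.linalg.solve(C, s) for s in signals]
    return int(np.argmin([np.sum((y - s) ** 2) for s in signals]))
```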
(b) Under the assumption of M equiprobable signal vectors, the error probability can be written as
$$P_e = \frac{1}{M} \sum_{k=0}^{M-1} \mathrm{Prob}\left[Y \in \Gamma_k^c \mid x_k\right],$$
where $\Gamma_k^c = \mathbb{R}^n \setminus \Gamma_k$ and $x_k$ means that signal vector $s_k$ was sent.
We can write
$$\mathrm{Prob}\left[Y \in \Gamma_k^c \mid x_k\right] = 1 - \mathrm{Prob}\left[Y \in \Gamma_k \mid x_k\right] = 1 - \mathrm{Prob}\left[\arg\max_{m \in \{0,\dots,M-1\}} s_m^T Y = k \;\middle|\; x_k\right].$$
Conditioning on the value $z$ of $s_k^T Y$, which is $\mathcal{N}(\|s_1\|^2, \sigma^2\|s_1\|^2)$ under $x_k$,
$$\mathrm{Prob}\left[Y \in \Gamma_k \mid x_k\right] = \int_{-\infty}^{\infty} \mathrm{Prob}\left[\max_{m \in \{0,\dots,k-1,k+1,\dots,M-1\}} s_m^T Y < z \;\middle|\; x_k\right] \frac{e^{-(z - \|s_1\|^2)^2/2\sigma^2\|s_1\|^2}}{\sqrt{2\pi}\,\sigma\|s_1\|}\,dz.$$
Now
$$\mathrm{Prob}\left[\max_{m \in \{0,\dots,k-1,k+1,\dots,M-1\}} s_m^T Y < z \;\middle|\; x_k\right] = \prod_{m \neq k} \mathrm{Prob}\left[s_m^T Y < z \mid x_k\right] = \left[\Phi\!\left(\frac{z}{\sigma\|s_1\|}\right)\right]^{M-1},$$
where the first equality is a consequence of the independence of the individual deterministic correlations and $\Phi(x)$ is the usual CDF of a zero-mean, unit-variance Gaussian random variable.
Combining the above and setting $x = z/(\sigma\|s_1\|)$ yields
$$1 - \mathrm{Prob}\left[Y \in \Gamma_k \mid x_k\right] = 1 - \int_{-\infty}^{\infty} \left[\Phi(x)\right]^{M-1} \frac{1}{\sqrt{2\pi}}\, e^{-(x-d)^2/2}\,dx, \qquad d = \frac{\|s_1\|}{\sigma},$$
for $k = 0, \dots, M-1$. Since these are all the same, the desired expression for $P_e$ immediately follows.
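The resulting one-dimensional integral is straightforward to evaluate numerically (a sketch; the function name and the sample values of $M$ and $d = \|s_1\|/\sigma$ are mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def pe(M, d):
    """P_e = 1 - int Phi(x)^(M-1) * phi(x - d) dx for the M-ary problem above."""
    integrand = lambda x: norm.cdf(x) ** (M - 1) * norm.pdf(x - d)
    return 1.0 - quad(integrand, -np.inf, np.inf)[0]

print(pe(M=4, d=2.0))
```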