
EE 562a: Random Processes in Engineering

EE department, USC, Fall 2014


Instructor: Prof. Salman Avestimehr
Homework 2
Solutions
1. (Exponential Distribution)
(a) Let X be a continuous positive random variable. Show that
E[X] = \int_0^\infty P(X > t)\, dt.

(b) Let X be exponentially distributed with parameter \lambda. Use the result in (a) to determine
the mean of X.
(c) The exponential distribution is often a reasonable model for waiting times. Suppose
that the time T that a transistor takes to fail is exponentially distributed with mean 1/\lambda.
Show that given that the transistor is still working after t time units, the chance that it
lasts an additional s time units is independent of t, i.e.,

P(T > t + s \mid T > t) = P(T > s).

This is called the memoryless property of the exponential distribution.
Problem 1 Solution
(a) Writing the tail probability as an integral of the density and exchanging the order of integration,

\int_0^\infty P(X > t)\, dt = \int_0^\infty \int_t^\infty f_X(x)\, dx\, dt
  = \int_0^\infty \int_0^x f_X(x)\, dt\, dx
  = \int_0^\infty x f_X(x)\, dx = E[X].    (1)
(b)

P(X > t) = \int_t^\infty \lambda e^{-\lambda x}\, dx = e^{-\lambda t}

E[X] = \int_0^\infty P(X > t)\, dt = \int_0^\infty e^{-\lambda t}\, dt
  = -\frac{1}{\lambda} e^{-\lambda t} \Big|_0^\infty = \frac{1}{\lambda}    (2)
(c)

P(T > t + s \mid T > t) = \frac{P[(T > t + s) \cap (T > t)]}{P(T > t)}
  = \frac{P(T > t + s)}{P(T > t)}
  = \frac{e^{-\lambda(t+s)}}{e^{-\lambda t}} = e^{-\lambda s} = P(T > s)    (3)
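As a sanity check on (a)-(c), the following minimal Python sketch (assuming NumPy is available; the rate \lambda = 2 and the times t, s are arbitrary choices, not part of the problem) estimates the tail integral, the mean, and the conditional probability by Monte Carlo:

import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                       # arbitrary rate, for illustration only
x = rng.exponential(scale=1.0 / lam, size=200_000)

# (a)/(b): the integral of the empirical tail P(X > t) should match E[X] = 1/lam.
t_grid = np.linspace(0.0, 10.0, 401)
dt = t_grid[1] - t_grid[0]
tail = np.array([np.mean(x > t) for t in t_grid])
print(tail.sum() * dt, x.mean(), 1.0 / lam)     # all approximately 0.5

# (c): memoryless property, P(T > t + s | T > t) versus P(T > s).
t, s = 0.7, 0.4
print(np.mean(x[x > t] > t + s), np.mean(x > s), np.exp(-lam * s))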
2. (a) Suppose that random variable X has PMF
p_X(x) = 1/2 if x = 0,   1/4 if |x| = 1.

Compute the PMF of the random variable Y = p_X(X).
(b) The entropy of a discrete random variable is defined as

E[-\log_2(p_X(X))],

and is an important quantity in the design of data compression algorithms like gzip and
mp3. Compute the entropy of X in the following cases:
i. X is a Bernoulli random variable with mean p.
ii. X is uniformly distributed in {1, 2, . . . , n}.
iii. X is a geometric random variable with mean \mu > 1.
Problem 2 Solution
(a) Since p_X(X) takes two values (p_X(0) = 1/2 and p_X(1) = p_X(-1) = 1/4), Y takes
the same two values 1/4 and 1/2. Also, P(Y = 1/2) = P(X = 0) = 1/2, and P(Y = 1/4)
= P(X = 1) + P(X = -1) = 1/2. Thus

p_Y(y) = 1/2 if y = 1/4,   1/2 if y = 1/2.
(b) The entropy of a discrete random variable is defined as

E[-\log_2(p_X(X))] = -\sum_x p_X(x) \log_2(p_X(x)).

i. X has PMF

p_X(x) = 1 - p if x = 0,   p if x = 1.

Thus

E[-\log_2(p_X(X))] = -\sum_x p_X(x) \log_2(p_X(x)) = -p \log_2 p - (1 - p) \log_2(1 - p).
ii. In this case, p_X(x) = 1/n for x = 1, . . . , n. Thus

E[-\log_2(p_X(X))] = -\sum_x p_X(x) \log_2(p_X(x)) = -\sum_{i=1}^n \frac{1}{n} \log_2\frac{1}{n}
  = -n \cdot \frac{1}{n} \log_2\frac{1}{n} = \log_2 n.
iii. Since X has mean \mu, the parameter of the corresponding geometric PMF is 1/\mu.
Thus p_X(k) = (1 - \frac{1}{\mu})^{k-1} \frac{1}{\mu} for k \ge 1. Thus

E[-\log_2(p_X(X))] = -\sum_x p_X(x) \log_2(p_X(x))
  = -\sum_{k=1}^\infty p_X(k) \log_2\Big((1 - \tfrac{1}{\mu})^{k-1} \tfrac{1}{\mu}\Big)
  = -\sum_{k=1}^\infty p_X(k) \Big((k-1) \log_2(1 - \tfrac{1}{\mu}) + \log_2(\tfrac{1}{\mu})\Big)
  = -\log_2(1 - \tfrac{1}{\mu}) \sum_{k=1}^\infty (k-1) p_X(k) - \log_2(\tfrac{1}{\mu}) \sum_{k=1}^\infty p_X(k)
  = -\log_2(1 - \tfrac{1}{\mu}) E[X - 1] - \log_2(\tfrac{1}{\mu}) \cdot 1
  = -(E[X] - 1) \log_2(1 - \tfrac{1}{\mu}) - \log_2(\tfrac{1}{\mu})
  = -(\mu - 1) \log_2(1 - \tfrac{1}{\mu}) - \log_2(\tfrac{1}{\mu}).
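The three closed forms can be checked numerically. A minimal Python sketch (assuming NumPy; the values p = 0.3, n = 8, and \mu = 4 are arbitrary choices) compares a direct evaluation of -\sum_x p_X(x) \log_2 p_X(x) against the formulas above:

import numpy as np

def entropy(pmf):
    # direct evaluation of -sum p log2 p, skipping zero-probability entries
    pmf = np.asarray(pmf, dtype=float)
    pmf = pmf[pmf > 0]
    return -np.sum(pmf * np.log2(pmf))

p, n, mu = 0.3, 8, 4.0

# i. Bernoulli(p)
print(entropy([1 - p, p]), -p*np.log2(p) - (1-p)*np.log2(1-p))

# ii. Uniform on {1, ..., n}
print(entropy(np.full(n, 1/n)), np.log2(n))

# iii. Geometric with mean mu (truncate the sum; the tail beyond k = 1000 is negligible)
k = np.arange(1, 1001)
pmf = (1 - 1/mu)**(k - 1) * (1/mu)
print(entropy(pmf), -(mu - 1)*np.log2(1 - 1/mu) - np.log2(1/mu))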
3. A tail probability for a random variable X is a probability of the form P(X \ge a).
Suppose P(X \ge 0) = 1. Obtain upper bounds on the tail probability P(X \ge a) for all
a \ge E[X] using both Markov's inequality and Chebyshev's inequality. Which bound is
better for large a?
Problem 3 Solution
Since X takes nonnegative values with probability 1, we may apply Markov's inequality to
obtain

P(X \ge a) \le \frac{E[X]}{a}.

Since a \ge E[X], we have

P(X \ge a) = P(X - E[X] \ge a - E[X]) \le P(|X - E[X]| \ge a - E[X]).

Chebyshev's inequality then gives

P(X \ge a) \le \frac{Var[X]}{(a - E[X])^2}.

When a is large, Chebyshev's inequality yields a better bound since the denominator in the
Chebyshev bound increases much faster (it is quadratic in a) than the denominator in the
Markov bound (linear in a).
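To see the crossover concretely, the short Python sketch below (assuming NumPy; an exponential(1) variable, with mean 1 and variance 1, is an arbitrary example) evaluates both bounds together with the exact tail:

import numpy as np

mean, var = 1.0, 1.0
a = np.array([2.0, 5.0, 10.0, 50.0])

markov = mean / a
chebyshev = var / (a - mean)**2
exact = np.exp(-a)                      # true tail P(X >= a) for exponential(1)

for ai, m, c, e in zip(a, markov, chebyshev, exact):
    print(f"a={ai:5.1f}  Markov={m:.4f}  Chebyshev={c:.4f}  exact={e:.2e}")

For large a the Chebyshev bound (on the order of 1/a^2) is much smaller than the Markov bound (on the order of 1/a), although both are far above the true exponential tail.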
4. (Chebyshev Inequality) Let X_1, . . . , X_n be independent geometric random variables with
parameters p_1, . . . , p_n respectively (i.e., P(X_i = k) = p_i (1 - p_i)^{k-1}, k = 1, 2, . . .).
Let random variable X be

X = \sum_{i=1}^n X_i.

(a) Find \mu_X = E[X].
(b) Apply the Chebyshev inequality to upper bound P(|X - \mu_X| > \mu_X). Evaluate your
upper bound for the case that p_1 = \cdots = p_n = p. What happens as n \to \infty?
Problem 4 Solution
(a) By using linearity of expectation we get

\mu_X = E[X] = \sum_{i=1}^n E[X_i] = \sum_{i=1}^n \frac{1}{p_i}.
(b) By the Chebyshev inequality,

P(|X - \mu_X| > \mu_X) \le \frac{Var[X]}{\mu_X^2} = \frac{1}{\mu_X^2} \sum_{i=1}^n \frac{1 - p_i}{p_i^2}.

If p_1 = \cdots = p_n = p, then

\frac{Var[X]}{\mu_X^2} = \frac{n \frac{1-p}{p^2}}{\mu_X^2} = \frac{n \frac{1-p}{p^2}}{\big(\frac{n}{p}\big)^2} = \frac{1-p}{n}.

Therefore, as n \to \infty, Var[X]/\mu_X^2 \to 0. This implies that as n \to \infty, P(0 \le X \le 2\mu_X) \to 1.
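A quick simulation illustrates the bound (1-p)/n. The Python sketch below (assuming NumPy; p = 0.25 and the sample sizes are arbitrary choices) compares the empirical frequency of the event {|X - \mu_X| > \mu_X} with the Chebyshev bound:

import numpy as np

rng = np.random.default_rng(1)
p = 0.25

for n in (5, 50, 500):
    x = rng.geometric(p, size=(200_000, n)).sum(axis=1)   # X = sum of n geometrics
    mu = n / p
    freq = np.mean(np.abs(x - mu) > mu)                    # empirical P(|X - mu| > mu)
    print(n, freq, (1 - p) / n)                            # frequency vs Chebyshev bound

Both columns shrink as n grows, with the Chebyshev bound always at least as large as the empirical frequency.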
5. (Minimum of Independent Exponential Random Variables) Assume that T_1, . . . , T_n are
independent random variables and that each T_i is exponentially distributed with mean 1/\lambda_i,
i = 1, . . . , n. Let

T = min(T_1, . . . , T_n).
(a) Show that T is exponentially distributed. What is its mean?
(b) Let the random variable K indicate the index of the T_i that is the minimum. Show that

P(K = k) = \frac{\lambda_k}{\sum_{i=1}^n \lambda_i}.
Problem 5 Solution
(a) By independence,

P(T > t) = \prod_{i=1}^n P(T_i > t) = \prod_{i=1}^n e^{-\lambda_i t} = e^{-(\sum_{i=1}^n \lambda_i) t}

P(T \le t) = 1 - P(T > t) = 1 - e^{-(\sum_{i=1}^n \lambda_i) t}

f_T(t) = \Big(\sum_{i=1}^n \lambda_i\Big) e^{-(\sum_{i=1}^n \lambda_i) t}    (4)

so T is exponentially distributed with parameter \sum_{i=1}^n \lambda_i, and its mean is 1/\sum_{i=1}^n \lambda_i.
(b) Define T_m to be the minimum of all the RVs except T_k. By part (a), T_m is exponential
with parameter \sum_{i \ne k} \lambda_i, and it is independent of T_k. Then

P(K = k) = P(T_k \le T_m) = \int_0^\infty P(T_m \ge t_k \mid T_k = t_k) f_{T_k}(t_k)\, dt_k
  = \int_0^\infty e^{-(\sum_{i \ne k} \lambda_i) t_k} \lambda_k e^{-\lambda_k t_k}\, dt_k
  = \frac{\lambda_k}{\sum_{i=1}^n \lambda_i}.    (5)
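Both claims are easy to check by simulation. A minimal Python sketch (assuming NumPy; the rates (0.5, 1.0, 2.5) are arbitrary choices) samples the T_i, takes the minimum, and records which index achieves it:

import numpy as np

rng = np.random.default_rng(2)
lam = np.array([0.5, 1.0, 2.5])
samples = rng.exponential(scale=1.0 / lam, size=(500_000, lam.size))

t_min = samples.min(axis=1)
k_min = samples.argmin(axis=1)

# (a): the minimum should have mean 1 / sum(lambda).
print(t_min.mean(), 1 / lam.sum())

# (b): P(K = k) should be lambda_k / sum(lambda).
print(np.bincount(k_min) / len(k_min), lam / lam.sum())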
6. (Joint PDF and CDF) Two random variables X and Y have joint pdf:
f_{X,Y}(x, y) = c \sin(x + y),   0 \le x \le \pi/2,  0 \le y \le \pi/2.
(a) Find the value of the constant c.
(b) Find the joint cdf of X and Y .
(c) Find the marginal pdfs of X and of Y .
Problem 6 Solution
(a) The constant c must make the density integrate to 1:

\int_0^{\pi/2} \int_0^{\pi/2} c \sin(x + y)\, dx\, dy = 1

c \int_0^{\pi/2} \big[-\cos(x + y)\big]_{x=0}^{\pi/2}\, dy = 1

1 = c \int_0^{\pi/2} \big(\cos(y) - \cos(\tfrac{\pi}{2} + y)\big)\, dy = c \int_0^{\pi/2} \big(\cos(y) + \sin(y)\big)\, dy
  = c \big[\sin(y) - \cos(y)\big]_0^{\pi/2} = 2c

c = \frac{1}{2}.    (6)
(b)

F_{X,Y}(x, y) = \int_0^y \int_0^x \frac{1}{2} \sin(u + v)\, du\, dv
  = \int_0^y \big[-\tfrac{1}{2}\cos(u + v)\big]_{u=0}^{x}\, dv = \frac{1}{2} \int_0^y \big(\cos(v) - \cos(x + v)\big)\, dv
  = \frac{1}{2} \big[\sin(v) - \sin(x + v)\big]_{v=0}^{y}
  = \frac{1}{2} \big(\sin(x) - \sin(x + y) + \sin(y)\big)    (7)
(c)

f_X(x) = \int_0^{\pi/2} \frac{1}{2} \sin(x + y)\, dy = \big[-\tfrac{1}{2}\cos(x + y)\big]_{y=0}^{\pi/2}
  = \frac{1}{2}\big(\sin(x) + \cos(x)\big)

and by symmetry

f_Y(y) = \frac{1}{2}\big(\sin(y) + \cos(y)\big).    (8)
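As a numerical check of (a)-(c), the Python sketch below (assuming NumPy; a simple midpoint Riemann sum, with the test point x_0 = 0.3 an arbitrary choice) verifies the normalizing constant, the corner value of the cdf, and the marginal density:

import numpy as np

m = 2000
x = (np.arange(m) + 0.5) * (np.pi / 2) / m          # midpoints on [0, pi/2]
dx = (np.pi / 2) / m
X, Y = np.meshgrid(x, x, indexing="ij")

# (a): the integral of sin(x + y) over the square is 2, so c = 1/2.
print(np.sum(np.sin(X + Y)) * dx * dx)              # approximately 2.0

# (b): F(pi/2, pi/2) should equal 1 with c = 1/2.
print(0.5 * (np.sin(np.pi/2) - np.sin(np.pi) + np.sin(np.pi/2)))

# (c): marginal f_X at a test point x0, numerical vs closed form.
x0 = 0.3
print(np.sum(0.5 * np.sin(x0 + x)) * dx, 0.5 * (np.sin(x0) + np.cos(x0)))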