
IEOR E3658

Practice Final - Solution

Fall 2014

Remark 1. Note that these are suggested solutions; there is more than one correct way to reach each answer.
1. A class has N slots available, which will be filled on a First-Come-First-Serve basis. Each
student who registers can be one of four types: freshman (type 1), sophomore (type 2),
junior (type 3) or senior (type 4). It is known that the composition of the class is usually
given by 10%, 20%, 30%, 40%, respectively (corresponding to the four types mentioned
above). Let X_i be the number of students of type i, i = 1, 2, 3, 4.
(a) (5 points) What is the joint distribution of (X_1, X_2, X_3, X_4)?

Solution
(X_1, X_2, X_3, X_4) ~ Multinomial(N; 0.1, 0.2, 0.3, 0.4)

(b) (10 points) What is the distribution of X_2 + X_3?

Solution. We can define a "success" if the student is either of type 2 (sophomore) or
type 3 (junior). The probability of "success" is therefore

P(success) = P("type 2" or "type 3") = P("type 2") + P("type 3") = 0.5

Therefore X_2 + X_3 ~ Bin(N, 0.5).
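As a quick numerical sanity check (a minimal sketch only, assuming NumPy is available; the choice N = 100 and the number of replications are ours for illustration), one can simulate the class composition and compare X_2 + X_3 with Bin(N, 0.5):

import numpy as np

rng = np.random.default_rng(0)
N = 100                                # illustrative class size (assumed here)
probs = [0.1, 0.2, 0.3, 0.4]           # type proportions from the problem statement

# Simulate many classes and record X_2 + X_3 for each one.
counts = rng.multinomial(N, probs, size=100_000)
x2_plus_x3 = counts[:, 1] + counts[:, 2]

# Bin(N, 0.5) has mean N*0.5 = 50 and variance N*0.25 = 25.
print(x2_plus_x3.mean(), x2_plus_x3.var())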


(c) (10 points) Suppose now that the probability of attendance of any given lecture per
student is as follows: if the student is a freshman then the probability of attendance is
80%, if the student is a sophomore the probability of attendance is 70%, if the student
is a junior the probability of attendance is 60%, and if the student is a senior the
probability of attendance is 50%.
Suppose that N = 100. If on a given day 60 students have attended, what is the
distribution of the number of juniors that have attended class on that day?

Solution.
Define the event A = {student attended}. Notice that

P(type 3 | A) = P(A | type 3) P(type 3) / Σ_{i=1}^{4} P(A | type i) P(type i)
              = (0.6)(0.3) / [(0.8)(0.1) + (0.7)(0.2) + (0.6)(0.3) + (0.5)(0.4)]
              = 0.18 / 0.6 = 0.3

If we define a "success" as "the student is a junior given that he/she attended", then the
number of juniors among the 60 students that attended class that day is distributed
Bin(60, 0.3).
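A short numerical check of this Bayes computation (a minimal sketch in plain Python; the variable names are ours):

# Prior composition and per-type attendance probabilities from the problem statement.
p_type = [0.1, 0.2, 0.3, 0.4]
p_attend = [0.8, 0.7, 0.6, 0.5]

p_a = sum(pt * pa for pt, pa in zip(p_type, p_attend))    # P(A) = 0.6
posterior_junior = p_attend[2] * p_type[2] / p_a          # P(type 3 | A)
print(p_a, posterior_junior)                              # 0.6 0.3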


2. Let (X, Y) ~ N(μ⃗, Σ) where

   μ⃗ = ( μ_X )       and       Σ = ( σ_X²       σ_{X,Y} )
        ( μ_Y )                     ( σ_{X,Y}    σ_Y²    )

(a) (10 points) Let V = aX + bY . What is the distribution of V ?

Solution.
We can represent (X, Y) in terms of Z, W iid N(0, 1) random variables:

X = μ_X + σ_X ρ_{X,Y} Z + σ_X √(1 − ρ_{X,Y}²) W
Y = μ_Y + σ_Y Z

where

ρ_{X,Y} = σ_{X,Y} / (σ_X σ_Y)

Therefore,

V = aX + bY = aμ_X + aσ_X ρ_{X,Y} Z + aσ_X √(1 − ρ_{X,Y}²) W + bμ_Y + bσ_Y Z
            = aμ_X + bμ_Y + (aσ_X ρ_{X,Y} + bσ_Y) Z + aσ_X √(1 − ρ_{X,Y}²) W

Since Z and W are independent Normal random variables, any linear combination of
the two is also a Normal random variable. Therefore,

V ~ N( aμ_X + bμ_Y , (aσ_X ρ_{X,Y} + bσ_Y)² + a²σ_X² (1 − ρ_{X,Y}²) )
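A small Monte Carlo check of this result (a sketch only, assuming NumPy; the numerical parameter values below are arbitrary illustrations, not part of the problem):

import numpy as np

rng = np.random.default_rng(1)
# Arbitrary illustrative parameters.
mu_x, mu_y, sig_x, sig_y, rho, a, b = 1.0, -2.0, 2.0, 3.0, 0.4, 0.7, 1.5

z = rng.standard_normal(200_000)
w = rng.standard_normal(200_000)
x = mu_x + sig_x * rho * z + sig_x * np.sqrt(1 - rho**2) * w
y = mu_y + sig_y * z
v = a * x + b * y

# Closed-form mean and variance of V from the derivation above.
mean_v = a * mu_x + b * mu_y
var_v = (a * sig_x * rho + b * sig_y)**2 + (a * sig_x)**2 * (1 - rho**2)
print(v.mean(), mean_v)   # should agree up to Monte Carlo error
print(v.var(), var_v)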

(b) (10 points) What is the joint distribution of (Y, V )?

Solution.
Since both Y and V have a marginal Normal distribution, the vector (Y, V) is Bivariate
Normal,

(Y, V) ~ N( (μ_Y, μ_V)ᵀ , Σ̃ ),   where   Σ̃ = ( σ_Y²       σ_{Y,V} )
                                              ( σ_{Y,V}    σ_V²    )


μ_Y and σ_Y² are known. From part (a),

μ_V = aμ_X + bμ_Y

σ_V² = (aσ_X ρ_{X,Y} + bσ_Y)² + a²σ_X² (1 − ρ_{X,Y}²)
     = a²σ_X² ρ_{X,Y}² + 2ab σ_X σ_Y ρ_{X,Y} + b²σ_Y² + a²σ_X² − a²σ_X² ρ_{X,Y}²
     = a²σ_X² + 2ab σ_{X,Y} + b²σ_Y²

The only parameter we are missing is the covariance Cov(Y, V).
Since Cov(T, c) = 0 for any random variable T and any constant c, we have that

σ_{Y,V} = Cov(Y, V)
        = Cov( μ_Y + σ_Y Z , aμ_X + bμ_Y + (aσ_X ρ_{X,Y} + bσ_Y) Z + aσ_X √(1 − ρ_{X,Y}²) W )
        = σ_Y (aσ_X ρ_{X,Y} + bσ_Y) Cov(Z, Z)
        = a σ_{X,Y} + b σ_Y²

Here we used the fact that Z and W are independent, therefore Cov(Z, W) = 0, and
that Z ~ N(0, 1), therefore Cov(Z, Z) = Var(Z) = 1.
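Continuing the numerical sketch from part (a) (same arbitrary illustrative parameters, assuming NumPy), we can check σ_{Y,V} = aσ_{X,Y} + bσ_Y²:

import numpy as np

rng = np.random.default_rng(2)
mu_x, mu_y, sig_x, sig_y, rho, a, b = 1.0, -2.0, 2.0, 3.0, 0.4, 0.7, 1.5
sig_xy = rho * sig_x * sig_y                 # sigma_{X,Y}

z, w = rng.standard_normal((2, 200_000))
x = mu_x + sig_x * rho * z + sig_x * np.sqrt(1 - rho**2) * w
y = mu_y + sig_y * z
v = a * x + b * y

# Empirical covariance vs. the closed form a*sigma_{X,Y} + b*sigma_Y^2.
print(np.cov(y, v)[0, 1], a * sig_xy + b * sig_y**2)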
(c) (10 points) What is the conditional distribution of Y given V = 1?

Solution.
We represent Y, V using Z, W iid N(0, 1):

Y = μ_Y + σ_Y ρ_{Y,V} Z + σ_Y √(1 − ρ_{Y,V}²) W
V = μ_V + σ_V Z

where ρ_{Y,V} = σ_{Y,V} / (σ_Y σ_V). V = 1 implies Z = (1 − μ_V)/σ_V. Therefore,

Y = μ_Y + σ_Y ρ_{Y,V} (1 − μ_V)/σ_V + σ_Y √(1 − ρ_{Y,V}²) W
  = μ_Y + σ_{Y,V} (1 − μ_V)/σ_V² + σ_Y √(1 − ρ_{Y,V}²) W

Therefore,

Y | V = 1 ~ N( μ_Y + σ_{Y,V} (1 − μ_V)/σ_V² , σ_Y² (1 − ρ_{Y,V}²) )
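A rough simulation check (sketch only, assuming NumPy and the same arbitrary parameters as before; conditioning on V = 1 is approximated by keeping draws with V in a narrow window around 1):

import numpy as np

rng = np.random.default_rng(3)
mu_x, mu_y, sig_x, sig_y, rho, a, b = 1.0, -2.0, 2.0, 3.0, 0.4, 0.7, 1.5

z, w = rng.standard_normal((2, 2_000_000))
x = mu_x + sig_x * rho * z + sig_x * np.sqrt(1 - rho**2) * w
y = mu_y + sig_y * z
v = a * x + b * y

# Parameters of (Y, V) from part (b).
mu_v = a * mu_x + b * mu_y
var_v = a**2 * sig_x**2 + 2 * a * b * rho * sig_x * sig_y + b**2 * sig_y**2
cov_yv = a * rho * sig_x * sig_y + b * sig_y**2
rho_yv = cov_yv / (sig_y * np.sqrt(var_v))

# Keep draws with V close to 1 and compare with the stated conditional law.
keep = np.abs(v - 1.0) < 0.05
print(y[keep].mean(), mu_y + cov_yv * (1 - mu_v) / var_v)
print(y[keep].var(), sig_y**2 * (1 - rho_yv**2))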


(d) (10 points) What is the conditional distribution of cX + dY given V = 1?

Solution.
Recall from part (a): we represented (X, Y) in terms of Z, W iid N(0, 1) random variables,

X = μ_X + σ_X ρ_{X,Y} Z + σ_X √(1 − ρ_{X,Y}²) W
Y = μ_Y + σ_Y Z

where ρ_{X,Y} = σ_{X,Y} / (σ_X σ_Y), and found that

V = aμ_X + bμ_Y + (aσ_X ρ_{X,Y} + bσ_Y) Z + aσ_X √(1 − ρ_{X,Y}²) W

Therefore V = 1 implies

Z = (1 − aμ_X − bμ_Y) / (aσ_X ρ_{X,Y} + bσ_Y) − [ aσ_X √(1 − ρ_{X,Y}²) / (aσ_X ρ_{X,Y} + bσ_Y) ] W

Expressing cX + dY in terms of Z, W we have:

cX + dY = cμ_X + dμ_Y + (cσ_X ρ_{X,Y} + dσ_Y) Z + cσ_X √(1 − ρ_{X,Y}²) W

Therefore

cX + dY | V = 1 = cμ_X + dμ_Y + (cσ_X ρ_{X,Y} + dσ_Y) (1 − aμ_X − bμ_Y) / (aσ_X ρ_{X,Y} + bσ_Y)
                  − (cσ_X ρ_{X,Y} + dσ_Y) [ aσ_X √(1 − ρ_{X,Y}²) / (aσ_X ρ_{X,Y} + bσ_Y) ] W + cσ_X √(1 − ρ_{X,Y}²) W

Therefore,

cX + dY | V = 1 ~ N( cμ_X + dμ_Y + (cσ_X ρ_{X,Y} + dσ_Y) (1 − aμ_X − bμ_Y) / (aσ_X ρ_{X,Y} + bσ_Y) ,
                     [ cσ_X √(1 − ρ_{X,Y}²) − (cσ_X ρ_{X,Y} + dσ_Y) aσ_X √(1 − ρ_{X,Y}²) / (aσ_X ρ_{X,Y} + bσ_Y) ]² )

3. Consider the following density: f_{X,Y}(s, t) = [1/(t − 1)] I(2 ≤ t ≤ 3) I(1 ≤ s ≤ t)

(a) (10 points) Compute the marginal density of Y , i.e., fY (t)

Solution.
To get the marginal of the random variable Y we need to integrate the joint density over
all possible values X can take:

f_Y(t) = ∫ f_{X,Y}(s, t) ds = ∫_1^t [1/(t − 1)] I(2 ≤ t ≤ 3) ds = I(2 ≤ t ≤ 3)

Therefore Y ~ Unif(2, 3).
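A quick numerical check of the marginalization (a sketch, assuming SciPy is available):

from scipy.integrate import quad

def f_xy(s, t):
    # Joint density from the problem statement.
    return 1.0 / (t - 1.0) if (2 <= t <= 3 and 1 <= s <= t) else 0.0

# Integrating s out should give the Unif(2, 3) density, i.e. 1 for t in [2, 3].
for t in (2.2, 2.5, 2.9):
    val, _ = quad(f_xy, 1.0, t, args=(t,))
    print(t, val)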


(b) (10 points) Compute the conditional density of X given Y , i.e., fX|Y (s|t)

Solution.
Here we will use the connection between the joint, marginal and conditional densities:

f_{X|Y}(s | t) = f_{X,Y}(s, t) / f_Y(t)
             = [ (1/(t − 1)) I(2 ≤ t ≤ 3) I(1 ≤ s ≤ t) ] / I(2 ≤ t ≤ 3)
             = [1/(t − 1)] I(1 ≤ s ≤ t)

Therefore, X | Y ~ Unif(1, Y).


(c) (10 points) Compute Cov (X, Y )

Solution.
Recall that Cov(X, Y) = E(XY) − E(X) E(Y).

E(XY): Since we know that X | Y ~ Unif(1, Y), we will use the Law of Iterated Expectation
and condition on Y:

E(XY) = E( E(XY | Y) ) = E( Y E(X | Y) ) = E( Y (Y + 1)/2 )
      = (1/2) E(Y²) + (1/2) E(Y) = (1/2) (2² + 2·3 + 3²)/3 + (1/2) (2 + 3)/2 = 19/6 + 5/4 = 53/12

E(X): Again, since we know the conditional distribution X | Y, we will use the Law of
Iterated Expectation and condition on Y:

E(X) = E( E(X | Y) ) = E( (Y + 1)/2 ) = (1/2) (3 + 2)/2 + 1/2 = 7/4

Cov(X, Y): Cov(X, Y) = E(XY) − E(X) E(Y) = 53/12 − (7/4)(5/2) = 106/24 − 105/24 = 1/24

* Here we used the fact that if T ~ Unif(a, b) then E(T) = (a + b)/2 and E(T²) = (a² + ab + b²)/3.
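A Monte Carlo check of this covariance (a minimal sketch, assuming NumPy), sampling Y ~ Unif(2, 3) and then X | Y ~ Unif(1, Y) as in parts (a) and (b):

import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

y = rng.uniform(2.0, 3.0, n)        # Y ~ Unif(2, 3), part (a)
x = rng.uniform(1.0, y)             # X | Y ~ Unif(1, Y), part (b)

print(np.cov(x, y)[0, 1], 1 / 24)   # empirical covariance vs. 1/24 ≈ 0.0417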


4. Let X ~ Unif(1, 2) and let Y = ln X.


(a) (10 points) Find the pdf of Y (Don't forget to specify what values Y can take)
Solution.
The range of Y: Since X takes values in the interval (1, 2), Y = ln X takes values in (0, ln 2).

The cdf of X: X ~ Unif(1, 2), therefore the cdf of X is

F_X(s) = P(X ≤ s) = (s − 1)/(2 − 1) = s − 1 for 1 ≤ s < 2, with F_X(s) = 0 for s < 1 and F_X(s) = 1 for s ≥ 2.

The cdf of Y: To find the cdf of Y, F_Y(t) = P(Y ≤ t), we will start at the boundaries. Since
Y ∈ (0, ln 2) we have that

F_Y(t) = 0 for t ≤ 0   and   F_Y(t) = 1 for t ≥ ln 2

For t ∈ (0, ln 2) we have

F_Y(t) = P(Y ≤ t) = P(ln X ≤ t) = P(X ≤ e^t) = e^t − 1

where here we used the fact that t ∈ (0, ln 2) implies e^t ∈ (1, 2). To summarize:

F_Y(t) = P(Y ≤ t) = 0 for t < 0,   e^t − 1 for 0 ≤ t < ln 2,   and 1 for t ≥ ln 2.

The pdf of Y: To obtain the density of Y we differentiate the cdf:

f_Y(t) = dF_Y(t)/dt = e^t I(0 ≤ t ≤ ln 2)
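A quick simulation check of this cdf (a sketch, assuming NumPy):

import numpy as np

rng = np.random.default_rng(5)
y = np.log(rng.uniform(1.0, 2.0, 1_000_000))   # Y = ln X with X ~ Unif(1, 2)

# Compare the empirical cdf with F_Y(t) = e^t - 1 at a few points inside (0, ln 2).
for t in (0.1, 0.3, 0.6):
    print(t, (y <= t).mean(), np.exp(t) - 1)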


(b) (10 points) Compute the mean and variance of Y .


Solution.

E(Y) = ∫_0^{ln 2} t e^t dt = [ t e^t − e^t ]_0^{ln 2} = 2 ln 2 − 1 ≈ 0.3863

E(Y²) = ∫_0^{ln 2} t² e^t dt = [ t² e^t − 2t e^t + 2e^t ]_0^{ln 2} = 2(ln 2)² − 4 ln 2 + 4 − 2 = 2(ln 2)² − 4 ln 2 + 2

Var(Y) = E(Y²) − (E(Y))² = 2(ln 2)² − 4 ln 2 + 2 − (4(ln 2)² − 4 ln 2 + 1) = 1 − 2(ln 2)² ≈ 0.0391
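A numerical double check (sketch, assuming NumPy):

import numpy as np

rng = np.random.default_rng(6)
y = np.log(rng.uniform(1.0, 2.0, 1_000_000))

print(y.mean(), 2 * np.log(2) - 1)        # both ~ 0.3863
print(y.var(), 1 - 2 * np.log(2) ** 2)    # both ~ 0.0391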

(c) (10 points) Define T = ∏_{i=1}^{20} X_i where the X_i are iid Unif(1, 2). Approximate the
probability P(T ≤ 1000). Justify your answer!
Solution.
We will use the identity s = e^{ln s}. Notice that

T = ∏_{i=1}^{20} X_i = e^{ln(∏_{i=1}^{20} X_i)} = e^{Σ_{i=1}^{20} ln(X_i)} = e^{Σ_{i=1}^{20} Y_i}

where Y_i = ln X_i. Therefore

P(T ≤ 1000) = P( e^{Σ_{i=1}^{20} Y_i} ≤ 1000 ) = P( Σ_{i=1}^{20} Y_i ≤ ln 1000 )

Since the Y_i's are iid random variables with finite variance, we can use the CLT to
approximate their sum:

Σ_{i=1}^{20} Y_i ≈ 20 E(Y) + √(20 Var(Y)) N(0, 1)

Therefore,

P(T ≤ 1000) ≈ P( N(0, 1) ≤ (ln 1000 − 20 E(Y)) / √(20 Var(Y)) ) = P( N(0, 1) ≤ −0.92517 ) = 0.1775
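A direct Monte Carlo check of the CLT approximation (a sketch, assuming NumPy):

import numpy as np

rng = np.random.default_rng(7)

# Simulate many products of 20 iid Unif(1, 2) draws.
x = rng.uniform(1.0, 2.0, size=(500_000, 20))
t = x.prod(axis=1)

print((t <= 1000).mean())   # compare with the CLT approximation 0.1775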


5. (10 points) A certain coin has a probability p of Heads. The coin is tossed successively and
independently until a Heads comes twice in a row or a Tails comes twice in a row. Find the
expected value of the number of tosses.
Solution.
Let τ denote the number of tosses until a Heads comes twice in a row or a Tails comes twice
in a row, and let I_j denote the outcome of the j-th toss. To compute E(τ) we will condition
on the first toss:

E(τ) = E(τ | I_1 = H) P(I_1 = H) + E(τ | I_1 = T) P(I_1 = T)        (1)
     = p E(τ | I_1 = H) + (1 − p) E(τ | I_1 = T)                     (2)

We will handle separately the expressions α = E(τ | I_1 = H) and β = E(τ | I_1 = T) and
condition further on the second toss.

α = E(τ | I_1 = H)
  = E(τ | I_1 = H, I_2 = H) P(I_2 = H | I_1 = H) + E(τ | I_1 = H, I_2 = T) P(I_2 = T | I_1 = H)
  = 2p + (E(τ | I_1 = T) + 1)(1 − p)
  = β(1 − p) + 1 + p

β = E(τ | I_1 = T)
  = E(τ | I_1 = T, I_2 = T) P(I_2 = T | I_1 = T) + E(τ | I_1 = T, I_2 = H) P(I_2 = H | I_1 = T)
  = 2(1 − p) + (E(τ | I_1 = H) + 1) p
  = αp + 2 − p

We obtain the system of linear equations:

α − β(1 − p) = 1 + p        (3)
−αp + β = 2 − p             (4)

Solving the system (3)-(4) gives

α = (p² − 2p + 3) / (p² − p + 1),        β = (p² + 2) / (p² − p + 1)

Which leads to

E(τ) = pα + (1 − p)β = (p³ − 2p² + 3p) / (p² − p + 1) + (p² + 2 − p³ − 2p) / (p² − p + 1) = (2 + p − p²) / (p² − p + 1)        (5)
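A simulation sanity check of formula (5) (a sketch, assuming NumPy; the function name and run counts are ours). For a fair coin, p = 1/2, the formula gives (2 + 0.5 − 0.25)/(0.25 − 0.5 + 1) = 3 expected tosses:

import numpy as np

def expected_tosses_mc(p, n_runs=100_000, seed=8):
    """Average number of tosses until HH or TT appears, estimated by simulation."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_runs):
        prev = rng.random() < p          # first toss (True = Heads)
        tosses = 1
        while True:
            cur = rng.random() < p
            tosses += 1
            if cur == prev:              # two identical outcomes in a row
                break
            prev = cur
        total += tosses
    return total / n_runs

for p in (0.3, 0.5, 0.7):
    formula = (2 + p - p**2) / (p**2 - p + 1)
    print(p, expected_tosses_mc(p), formula)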

You might also like