You are on page 1of 56

Chapter #6 Functions of Random Variables

Question #2: Let ~(0,1). Use the CDF technique to find the PDF of the following
random variables: a) = 1/4 , b) = , c) = 1 , and d) = (1 ).

1 (0,1)
a) Since ~(0,1), we know that the density function is () = {
0
0 (, 0]
while the distribution function is () = { (0,1) . We can then use the
1 [1, )
CDF technique to find () = ( ) = (1/4 ) = ( 4 ) = ( 4 ), so

that () = () = ( 4 ) = ( 4 )4 3 = (1)4 3 . Since 0 < < 1, the
3
bounds are 0 < 4 < 1 or 0 < < 1. Therefore, () = {4 (0,1).
0

b) We have () = ( ) = ( ) = ( ()) = 1 (()),


1 1
so that () = () = [1 ( ())] = ( ()) ( ) = . Since

0 < < 1, the bounds are 0 < () < 1 or 1 < < 1. The probability density
1
( 1 , 1)
function of the random variable is therefore () = { .
0

c) By the CDF technique, we have that the () = ( ) = (1 ) =


( 1 ) = ( (1 )) = ( (1 )), so we have that () =
1 1
() = ( (1 )) = ( (1 )) ( 1) = 1. Since 0 < < 1, the

bounds are 0 < (1 ) < 1 or 0 < < 1 1 . Therefore, the probability


1
(0,1 1 )
density function of this random variable is () = { 1 .
0

d) We will use Theorem 6.3.2 for this question. Suppose that is a continuous random
variable with density () and assume that = () is a one-to-one transformation

with inverse = (). If () is continuous and nonzero, the density of is given

by () = (()) | ()|. Here, the transformation is = (1 ) = 2 +

and, based on the y-values of the graph over the x-values of (0,1), the range over
1
which the density of is defined is (0, 4). Since this transformation is not one-to-one,
1 1
we must partition the interval (0,1) into two parts: (0, 2) and (2 , 1). Then solve the

transformation by completing the square to obtain = 2 + = 2


1 1 1 1 2 1 1 1 1
= 2 + 4 = ( 2) 2 = 4 = 2 4 .
4 4
1
1 1 1 1 1
() 2
Since = () = 2 4 , we have = 2 (4 ) (1) = . Thus,
1
2
4

1 1 1 1 1
() = (2 + 4 ) | ()| + (2 4 ) | ()| = (1 + 1)| ()| = 1
,

4

1
1 1
2
so we can conclude that the density function is () = {(4 ) (0, 4).
0

Question #3: The measured radius of a circle has PDF () = 6(1 ) if (0,1) and
() = 0 otherwise. Find the distribution of a) the circumference and b) area of the circle.

a) The circumference of a circle is given by = 2, so by the CDF technique we have



that () = ( ) = (2 ) = ( 2) = (2). The density is thus
1 1 6(2)
() = () = (2) = (2) (2) = 6 (2) (1 2) (2) = = (2)3
.

Since 0 < < 1 we have 0 < 2 < 1 so that 0 < < 2. Therefore, the probability
6(2)
(0,2)
density function of the circumference is given by () = { (2)3 .
0

b) The area is given by = 2 , so we have () = ( ) = ( 2 ) =



( 2 ) = (|| ) = ( ) = () (). Thus,
3()
we have () = () = [ () ()] = = . Since we have
3/2

3()

0 < < 1, then 0 < < 1 so that 0 < < and () = { (0, ).
3/2
0

1
Question #10: Suppose has density () = 2 || for all . a) Find the density of the

random variable = ||. b) If = 0 when 0 and = 1 when > 0 find the CDF of .

a) We have () = ( ) = (|| < ) = ( ) = () (), so



that we obtain () = () = [ () ()] = ()(1) ()(1) =

1 1
|| + 2 || = || . Since the transformation was an absolute value function and
2

we have the bounds < < , the bounds become 0 < < . This allows us to
(0, )
write the probability density function of as () = { .
0

1 1
b) We see that ( = 0) = 2 and ( = 1) = 2 since () is symmetric. This allows us
0 (, 0)
1
to write the cumulative distribution function as () = { 2 [0,1) .
1 [1, )

1
Question #13: Suppose has density () = 24 2 for (2,4) and () = 0 otherwise.

Find the probability density function of the random variable = 2 .

We will use Theorem 6.3.2 for this question: Suppose that is a continuous random
variable with density () and assume that = () is a one-to-one transformation

with inverse = (). If () is continuous and nonzero, the density of is given

by () = (()) | ()|. Here, the transformation is = 2 and, based on the

y-values of the graph over the x-values of (0,1), the domain over which the density of
is defined is (0,16). Solving the transformation then gives that = () = so
1
that () = 2 . We must consider two cases in the interval (0,16): over (0,4) the

transformation is not one-to-one and over (4,16) it is one-to-one. The density is thus

()| ()| + ()| ()| (0,4) (0,4)
24
() = { ()| ()| [4,16) = { .
[4,16)
48
0 0

Question #16: Let 1 and 2 be independent random variables each having density function
1
() = 2 for [1, ) and () = 0 otherwise. a) Find the joint PDF of = 1 2 and

= 1. b) Find the marginal probability density function of the random variable .

a) Since 1 and 2 are independent, their joint density is simply the product of their
1 1 1
marginal densities, so 1 2 (1 , 2 ) = ( 2 ) ( 2 ) = ( 2 whenever we have that
1 2 1 2 )

(1 , 2 ) [1, ) [1, ) and zero otherwise. We will use Theorem 6.3.6, which says
that if = (1 , 2 ) and = (1 , 2 ) and we can solve uniquely for 1 and 2 , then
1 1

we have (, ) = 1 2 (1 (, ), 2 (, ))||, where = [ 2
]. Here, we
2


have that = 1 so 1 = and = 1 2 so 2 = = , so we can calculate the
1

0 1 1
Jacobian as = [ 1 ] = . We can therefore find the joint density as
2

1 1 1 1
(, ) = 1 2 (, ) | | = ( 2
) () = 2 if 1 < < < . We can find this
( )

region of integration by substituting into the constraints of 1 2 (1 , 2 ). The first is



that 1 < 1 < so 1 < < while the second is 1 < 2 < so 1 < < , which

reduces to < < . Combining these gives the required bounds 1 < < < .

1 1 ()
b) We have () = 1 (, ) = 1 = [2 ()] = if 1 < < .
2 1 2
Question #18: Let and have joint density function (, ) = for 0 < < <
and (, ) = 0 otherwise. a) Then find the joint density function of = + and = .
b) Find the marginal density function of . c) Find the marginal density function of .

a) We have that = so = and = + so = = , so we can calculate



0 1
the Jacobian as = [ ] = [ ] = 1. We can therefore find the joint
1 1


density as (, ) = (, )|1| = ( () )(1) = if 0 < < 2 < . We

can find these bounds by substituting into the bounds 0 < < < with our solved

transformations, which gives 0 < < < or 0 < 2 < < or 0 < < 2 < .

/2 /2 /2 /2
b) We have that () = 0 (, ) = 0 = 0 = [ ]0 =
/2
[ ]0 = ( /2 1) = if 0 < < .


c) We have that () = 2 (, ) = 2 = 2 = [ ]
2 =

[ ]
2 = (0
2
) = if 0 < < . Note that we have omitted the steps
where the infinite limit of integration is replaced by a parameter and a limit to infinity
with that parameter is evaluated to show that it goes to zero.

Question #21: Let and have joint density (, ) = 2( + ) for 0 < < < 1 and
(, ) = 0 otherwise. a) Find the joint probability density function of = and = .
b) Find the marginal probability density function of the random variable .


a) We have that = so = and = so = = , so we can calculate the Jacobian


1 0 1
as = [
] = [ 1 ] = . The joint probability density function is thus



1 1
(, ) = (, ) | | = 2 ( + ) ( ) = 2 (1 + 2 ). The region is then found by

substituting to 0 < < < 1, so we have 0 < < < 1 or 0 < 2 < < < 1. This

can be visualized as the region between = and = 2 on the plane.


b) The marginal probability density function is given by () = (, ) =

2
2 (1 + 2 ) = [2 ] = (2 2) (2 2) = 2 2 if 0 < < 1.

Question #25: Let 1 , 2 , 3,4 be independent random variables. Assume that 2, 3, 4 are
each distributed Poisson with parameter 5 and the random variable = 1 + 2 + 3 + 4
is distributed Poisson with parameter 25. a) What is the distribution of the random variable
1? b) What is the distribution of the random variable = 1 + 2?

a) We first note that while 1,2, 3, 4 are independent, they are not since only
2 , 3 , 4 ~(5) with 1 not being listed. Thus, we must use the unsimplified
formula 6.4.4, which says if 1 , , are independent random variables with moment
generating functions () and = 1 + + , then the moment generating
function of is () = [1 ()] [ ()]. We use the fact that if some ~(),
1)
then () = ( to solve this problem. If we let = 1 + 2 + 3 + 4 , we have
1)
that () = [1 ()][2 ()][3 ()][4 ()] = 25( since ~(25).
1) 1) 1) 1)
Substituting gives [1 ()][ 5( ][ 5( ][ 5( ] = 25( , which reduces to

1) 3 1) 25( 1) 25( 1) 1)
[1 ()][ 5( ] = 5( so that [1 ()] = 3 = = 10( . This

[ 5( 1) ] 15( 1)

is the moment generating function of a poisson 10 random variable, so 1 ~(10).

1) 1) 1)
b) We have () = [1 ()][2 ()] = [ 10( ][ 5( ] = 15( , which is the
moment generating function of a poisson 15 random variable, so ~(15). We can
see a general pattern here; if ~() for = 1, , are independent random
variables and we define some = =1 for , then we have that ~().
Chapter #6 Functions of Random Variables

Question #17: Suppose that 1 and 2 denote a random sample of size 2 from a gamma
1
distribution such that ~ (2, 2). Find the PDF of a) = 1 + 2 and b) = 1.
2


1
a) We know that if some ~(, ), then () = () 1 if > 0. Since 1

and 2 are independent, their joint density function is given by 1 2 (1 , 2 ) =


1 1 2 (1 +2 )
1 1 1 1 1 1
1 (1 )2 (2 ) = ( 1 1 2 2 )( 1 2 2 ) = 2
2
2 if 1 , 2 > 0,
2(2) 2(2) 1 2

1
since (2) = . We have the transformation = 1 + 2 and generate another

transformation = 1 , so we have 1 = and 2 = 2 , which allows us to find


1 0
= [ ] = 2. Then the joint density of and is given by (, ) =
1 2
2
1 1 1
1 2 (, 2 )|| = 2 if > 0, 2 > 0; these bounds can be
2

combined to give 0 < < 2 . Finally, the density of is () = (, ) =
2 2
1 2 1 1
2 2 = = 2 if > 0 and zero otherwise. The evaluation of
0

this integral has been omitted, but can be computed by two substitutions.


b) We have = 1 and generate = 1 so that 1 = and 2 = , which allows us to
2

1 0
calculate = det [ 1 (, ||
2 ] = 2 . Then we have ) = 1 2 (, ) =


+/ (+/)
1 1 1 1 1
2 = 2 3/2 2 if , > 0, so that the density of is given by
2 / 2
(+/)
1 1 1 1
() = (, ) = 2 0 2 = = (+1) if > 0. The
3/2

evaluation of this integral has been omitted, but can be computed by substitution.
Question #26: Let 1 and 2 be independent negative binomial random variables such that
1 ~(1 , ) and 2 ~(2 , ). a) Find the MGF and distribution of = 1 + 2.

We use Theorem 6.4.3, which says that if the random variables are independent
with respective MGFs (), then the MGF of the random variable that is their sum
is simply the product of their respective MGFs. Also, if some discrete random variable


~(, ), then () = (1 ) . Therefore, the moment generating function of
1 2 1 +2

is () = [1 ()][2 ()] = ( ) ( ) =( ) . This then allows to
1 1 1

determine the distribution of = 1 + 2, namely that ~(1 + 2 , ).

Question #27: Recall that ~(, 2 ) if () ~(, 2 ). Assume that ~( , 2 )


for = 1, , are independent. Find the distribution functions of the following random

variables a) = =1 , b) = =1 , c) = 1, d) find () = (=1 ).
2

a) We have that () = (=1 ) = =1 ( ) = (1 ) + + ( ), so the


random variable () is the sum of normally distributed random variables. This
implies that () ~(=1 , =1 2 ), which means ~(=1 , =1 2 ).

b) The random variable () = (=1 ) = =1 ( ) = =1 ( ) =


1 (1 ) + + 2 ( ). We use that if some ~(, 2 ), then ~(, 2 2 ) to
conclude that () ~(=1 , =1 2 2 ) so ~(=1 , =1 2 2 ).


c) We have that () = (1 ) = (1 ) (2 ), so the random variable () is the
2

sum of two normally distributed random variables. Thus, () ~(1 2 , 12 + 22 )



which implies that the distribution of = 1 is ~(1 2 , 12 + 22 ).
2
2 2 /2
d) For ~(, 2 ), we have () = ( ) = + and for ~(0,1), we have
2 /2
() = ( ) = . Thus, the expected value is given by ( ) = ( + ) =
2
+ /2. Since the random variables are all independent, we therefore have that
2
() = (=1 ) = =1 ( ) = =1( + /2 ) = {=1 + =1 2 /2}.

Question #28: Let 1 and 2 be a random sample of size 2 from a continuous distribution
with PDF of the form () = 2 if 0 < < 1 and zero otherwise. a) Find the marginal
densities of 1 and 2 , the smallest and largest order statistics, b) find the joint probability
density function of 1 and 2 , and c) find the density of the sample range = 2 1 .

0 (, 0]
2 (0,1)
a) Since () = { , we know that () = { 2 (0,1) . Then
0
1 [1, )
from Theorem 6.5.2, we have that 1 (1 ) = (1 )[1 (1 )]1 so we can
calculate the smallest order statistic as 1 (1 ) = 2[21 ][1 12 ]21 = 41 413
whenever 1 (0,1). Similarly, ( ) = ( )[ ( )]1 so we can calculate the
largest order statistic as 2 (2 ) = 2[22 ][22 ]21 = 423 whenever 2 (0,1).

b) From Theorem 6.5.1, the joint probability density function of the order statistics is
(1 , , ) = ! (1 ) ( ). In this question, we have that 1 2 (1 , 2 ) =
2! (1 ) (2 ) = 2(21 )(22 ) = 81 2 whenever we have 0 < 1 < 2 < 1.

c) We first find the joint density of the smallest and largest order statistics in order to
make a transformation to get the marginal density of the sample range. From the
work we did above, we have that 1 2 (1 , 2 ) = 81 2 . We have the transformation
= 2 1 and generate = 1 , so we have 1 = and 2 = + which allows us to
1 0
calculate = [ ] = 1. The joint density of and is therefore (, ) =
1 1
1 2 (, + )|| = 8( + )(1) = 8 2 + 8 if 0 < < + < 1, which can also be

written as 0 < < 1 . The marginal density is thus () = (, ) =
1 8 1 8
0 (8 2 + 8) = [3 3 + 4 2 ] = 3 (1 )3 + 4(1 )2 if 0 < < 1.
0

Question #31: Consider a random sample of size from an exponential distribution such
that ~(1). Give the density of a) the smallest order statistic denoted by 1 , b) the
largest order statistic denoted by , c) the sample range of the order statistics = 1 .

(0, ) 0 (, 0]
a) Since ( ) = { , we have ( ) = { .
0 1
Then we have that 1 (1 ) = (1 )[1 (1 )]1 = 1 [1 (1 1 )]1 =
1 ( 1 )1 = 1 if 1 > 0 and zero otherwise.

b) Similarly, we have ( ) = ( )[ ( )]1 = [1 ]1 for > 0.

c) Since the exponential distribution has the memoryless property, the difference of
= 1 will not be conditional on the value of 1 . This allows us to treat 1 = 0,
so that = 1 = 0 = . We then use the fact that the range of a set of
order statistics from an exponential distribution is the same as the largest order
statistic from a set of 1 order statistics. From above, we have that ( ) =
[1 ]1 , so substituting 1 gives () = ( 1)[1 ]2 .

Question #32: A system is composed of five independent components connected in series


one after the other. a) If the PDF of the time to failure of each component is ~(1), then
give the PDF of the time to failure of the system; b) if the components are connected in
parallel so that all must fail before the system fails, give the PDF of the time to failure.

a) Since ~(1), we know that the density is given by ( ) = for > 0 and
0 (, 0]
( ) = { . The system in the series fails whenever the
1
earliest component fails, which happens at time (1) = 1 , the first order statistic.
Thus, the probability density function of the time to failure is therefore given by
1 (1 ) = (1 )[1 (1 )]1 = 5 1 [ 1 ]4 = 5 51 whenever 1 > 0.

b) For the system in parallel, the system in the series fails whenever the last component
fails, which happens at time (5) = 5, the greatest order statistic. Thus, the density is
5 (5 ) = ( )[ ( )]1 = 5 5 [1 5 ]4 whenever 5 > 0.

Question #33: Consider a random sample of size from a geometric distribution such that
~(). Give the CDF of a) the minimum 1 , b) the smallest , c) the maximum .

a) If some ~(), then () = (1 ) and () = 1 (1 )+1. Consider


the event (1) > , which happens if and only if () > for all = 1, , . Therefore,

we have ((1) > ) = (() > ) = [1 ()] = [(1 ) ] = (1 ) ,
which implies that the CDF is given by (1) () = ((1) ) = 1 (1 ) .

b) The event () happens when of the () satisfy () and the other



satisfy () > . Thus, we have (() ) = ()(() ) (() > ) =

()[1 (1 ) ] [(1 ) ] , which is the distribution function of .

c) The event () happens when () for all = 1, , . Thus, (() ) =


1(1)
(() ) = (=1[(1 )1 ]) = ( ) = [1 (1 ) ] .
1(1)
Chapter #7 Limiting Distributions


Question #30: If ~(, ), then () = +1
if > 0 and zero otherwise. Consider
(1+ )

a random sample of size = 5 from a Pareto distribution where ~(1,2); that is,
suppose that 1 , , 5 are drawn from the given Pareto distribution above. a) Find the joint
PDF of the second and fourth order statistics given by 2 = (2) and 4 = (4) , and b) find the
joint PDF of the first three order statistics given by 1 = (2) , 2 = (2) and 3 = (2).

2
a) The CDF of the population is given by () = 0 () = 0 [(1+)3 ] , so that we

can calculate the joint density using Corollary 6.5.1 as 2 4 (2 , 4 ) =


5!
[ ( )]21 (2 )[ (4 ) (2 )]421 (4 )[1 (4 )]54 =
(21)!(421)!(54)! 2

5! (2 ) (4 )[ (2 )][ (4 ) (2 )][1 (4 )] if 0 < 2 < 4 < .

!
b) From Theorem 6.5.4, we have (1 , , ) = ()! [1 ( )] [ (1 ) ( )],

so we may calculate that 1 2 3 (1 , 2 , 3 ) = 60[1 (3 )]2 [ (1 ) (2 ) (3 )]


! 5!
whenever 0 < 1 < 2 < 3 < , since we have that ()! = 2! = 60.

Question #1: Consider a random sample of size from a distribution with cumulative
1
distribution function () = 1 whenever 1 < and zero otherwise. That is, let the

random variables 1 , , be ~ from the distribution with CDF (). a) Derive the CDF
of the smallest order statistic given by (1) = 1: , b) find the limiting distribution of 1: ;
that is, if () denotes the order statistic from above, find lim (), c) find the limiting


distribution of 1: ; that is, find the CDF of (1) and its limit as .

a) We can compute that 1: () = (1: ) = 1 (1: > ) = 1 ( > ) =


1 1
1 [1 (1 )] = 1 () whenever 1 < . We thus have that the CDF of
1 (1/) 1
the smallest order statistic is 1: () = { . Finally, we note
0 < 1
that (1: > ) ( > ) since the smallest order statistic is greater than some
if and only if all of the independent samples are also greater than . We can use
this approach for any order statistic, including the largest, by changing the exponent.

lim [1 (1/) ] 1 0 1

b) We have that lim () = { ={ = (),
lim 0 < 1 1 > 1

so the limiting distribution of 1: is degenerate at = 1. From Definition 7.2.2, this


means that () is the cumulative distribution function of some discrete distribution
() that assigns probability one at = 1 and zero otherwise.

1 1

() = (1: ) = (1: ) = 1 (1: > ) =
c) As before, we have 1:

1
1 1
1 ( > ) = 1 [1 (1
1 )] = 1 () whenever 1. Therefore, it is

clear that the limiting distribution of this sequence of random variables is given by
1 1/ 1
() = {
lim 1: = () since there is no dependence on .
0 < 1

Question #2: Consider a random sample of size from a distribution with CDF given by
1
() = 1+ for all . Find the limiting distribution of a) : and b) : ().

1
a) We have : () = (: ) = ( ) = (1+ ) for all . Since
1
lim [(1+ ) ] = 0, we conclude that : does not have a limiting distribution.

b) We calculate that :() () = (: () ) = (: + ()) =

1 1 1
: ( + ()) = (1+ (+ ()) ) = (1+ ()) = (
) . Evaluating this
1+


1
limit gives lim [: ( + ())] = lim [(
) ] = for all .
1+

Question #3: Consider a random sample of size from the distribution () = 1 2 if


1
> 1 and zero otherwise. Find the limiting distribution of a) 1: , b) : and c) : .

a) We can compute that 1: () = (1: ) = 1 (1: > ) = 1 ( > ) =


1
1 1 1 2 > 1
1 [1 (1 2 )] = 1 ( 2) if > 1. Thus, 1: () = { so
0 1
1
lim [1 2] > 1 1 > 1
the limiting distribution is lim 1: () = { ={ .
lim [0] 1 0 1

We therefore say that the limiting distribution is degenerate at = 1.

1
b) We have : () = (: ) = ( ) = (1 2 ) whenever > 1. Thus,

1
1
(1 2 ) > 1 lim [(1 2 ) ] > 1
: () = { so lim : () = { = 0.
0 1 lim [0] 1

We would therefore conclude that there is no limiting distribution for : .

1
c) We compute that 1 () = ( : ) = (: ) = : () =

:

1 1 1
(1 ( ) = (1 2 ) whenever > 1 or > . We can therefore compute
)2

1 1 1
lim [(1 2 ) ] > 2

= { > 0 .

the limit as lim 1 () = { 1
:
0 0
lim [0]

Question #5: Suppose that ~(0,1) and that the are all independent. Use moment
1

=1( + )
generating functions to find the limiting distribution of = as .

1 1 1

=1( + ) (
=1 )+(=1( )) (
=1 )+
=1 1
We have =
=
=
= + , so the MGF is

1

() = [1 ()] [2 ()] = [1 ()] [ ( )] since is the sum of two parts so

we can multiply their respective MGFs. The MGF of a standard normal random
2 /2
variable with = 0 and 2 = 1 is given by () = , which allows us to calculate
2 2 1
( ) /2
that 1 () = = . Also, we have that ( ) = so combining these
2

1 2
2

gives () = [1 ()] [ ( )] = [ ] [ ] = [ 2 ] [ ]. Then we can use
2

2 2
Theorem 7.3.1 to calculate lim () = lim [ 2 ] [ ] = 2 = (), which we

know is the MGF of a standard normal, so the limiting distribution is ~(0,1). Note
that this is also a direct consequence of the Central Limit Theorem.

Question #9: Let 1 , 2 , , 100 be a random sample of size = 100 from an exponential
distribution such that each ~(1) and let = 1 + 2 + + 100. a) Give a normal

approximation for the probability ( > 110), and b) if = 100 is the sample mean, then

give a normal approximation to the probability (1.1 < < 1.2).

a) Since each ~(1), we know that ( ) = = 1 while ( ) = 2 = 1. Due to


the independence of the s, we have that () = (100 100
=1 ) = =1 ( ) = 100 and

() = (100 100
=1 ) = =1 ( ) = 100 so () = 100 = 10. We can

therefore calculate that ( > 110) = 1 ( 110) = 1 (100


=1 110) =
100
=1 100 110100
1( ) 1 ( 1) = 1 (1) = 1 0.8413 = 0.1587,
10 10

where denotes the standard normal distribution with = 0 and 2 = 1.



b) We know that = / (0,1) by the Central Limit Theorem. We then have that

1 1 1
() = (100) = 100 () = 1 and () = (100) = 10,000 () = 100 so
1 1.11 1 1.21
() = 10, which allows us to find (1.1 < < 1.2) = ( 1/10 < 1/10 < )
1/10

(1 < < 2) = (2) (1) = 0.9772 0.8413 = 0.1359. Here, we have used the
fact that = 1 and = 1 which come from the population distribution ~(1).

Question #11: Let ~(0,1) where 1 , 2 , , 20 are all independent. Find a normal
approximation for the probability (20
=1 12).

Since each ~(0,1), we know that ( ) = 1/2 while ( ) = 1/12. Due to


the independence of the s, we have that (20 20
=1 ) = =1 ( ) = 10 and

(20 20 20
=1 ) = =1 ( ) = 5/3, so that (=1 ) = 5/3. This allows us to

20
=1 10 1210
find (20
=1 12) = ( ) ( 1.55) = (1.55) = 0.9394.
5/3 5/3
Chapter #8 Statistics and Sampling Distributions

Question #1: Let denote the weight in pounds of a single bag of feed where ~(101,4).
What is the probability that 20 bags will weigh at least 2,000 pounds?

Let = 20 20 20
=1 where ~(101,4). We have that () = (=1 ) = =1 ( ) =

20(101) = 2,020 and () = (20 20


=1 ) = =1 ( ) = 20(4) = 80 such

that () = 80 = 45. We can thus calculate the probability ( 2,000) =


20
=1 () 2,000() 20
=1 2,020 2,0002,020
(20
=1 2,000) = ( ) = ( )
() () 45 45

20
( 45) = ( 2.24) = 1 (2.24) = 0.987, where ~(0,1).

Question #2: Let denote the diameter of a shaft and the diameter of a bearing, where
both and are independent and ~(1,0.0004) and ~(1.01,0.0009). a) If a shaft and
bearing are selected at random, what is the probability that the shaft diameter will exceed
the bearing diameter? b) Now assume equal variances (2 = 2 = 2 ) such that we have
~(1, 2 ) and ~(1.01, 2 ). Find the value of that will yield a probability of
noninterference of 0.95 (which means the shaft diameter exceeds the bearing diameter).

a) Define = , since we wish to find ( > ) = ( > 0) = ( > 0). We


have that () = ( ) = () () = 1 1.01 = 0.01 and () =
( ) = () + () = 0.0004 + 0.0009 = 0.0013 such that () =
0.0013 = 0.036. Thus, we have ( > ) = ( > 0) = ( > 0) =
() 0() +0.01 0.01
( > ) = ( 0.036 > 0.036) ( > 0.28) = 1 (0.28) = 0.39.
() ()

b) For = , we have that () = 0.01 but () = 2 2 so () = 2. We


0.01 0.01
wish to find so that ( > 0) = 0.95 1 ( ) = 0.95 ( ) = 0.05.
2 2

Since only the critical value = 1.645 ensures that (1.645) = 0.05, we must
0.01
solve = 1.645 = 0.004. But since we must have 0, no such exists.
2
Question #3: Let 1 , , be a random sample of size where they are ~ such that
~(, 2 ) and define = =1 and = =1 2 . a) Find a statistic that is a function of
and and unbiased for the parameter = 2 5 2 . b) Find a statistic that is unbiased
for = 2 + 2 . c) If is a constant and = 1 if and zero otherwise, find a statistic
2
()/ 1
that is a function of 1 , , and is unbiased for () = ( )= 2 .
2

1 1 1
a) We first find an estimator for = () = ( =1 ) = (=1 ) = () =
1 1
and then for 2 = ( 2 ) = ( =1( )2 ) = ( [=1(2 ) 2 ]) =
1 1

1 1 1 1 2
(=1(2 ) 2 ) = 1 (=1 2 2 ) = 1 [ ( =1 ) ] =
1
2 2
1 (
=1 ) 1 (
=1 ) 1 1 1 2
[ ] = 1 [ ] = 1 [ 2 ] = 1 [ ].
1 2

1 2 2 5 1
We thus have = 2 5 2 = 2 [ ] 5 [1 ( )] = 1 ( 2 ),

which is an unbiased estimator of since () = = .

1 1 2
b) Since we found that = () = ( =1 ), then we have 2 = [ ( =1 )] =

1 2 1 1 1 1
[( =1 ) ( =1 )] = [2 (=1 )2 2 (=1 )] = [2 2

1 2 2 2 2 1 2
2 ] = [ 2 ]= . We previously found that 2 = 1 [ ], so
2 2
2 2 (1) 2
combining these we find that = 2 + 2 = 2 + 2 = 2 + 2 =

(1) 1 2 2 1 2 2 2 2
[1 ( )] + 2 = ( ) + 2 = 2 + 2 = , which is an

unbiased estimator of since () = = .


c) We have ( = 1) = ( ) = ( ) = ( ) = ( ) = ()


and ( ) = 1 ( = 1) + 0 ( = 0) = ( = 1) = ( ) = (). Then,

1 1 1 1
() = ( =1 ) = (=1 ) = =1 ( ) = ( ) = ( ) = (),

which means that is an unbiased estimator of () = (( )/).


Question #4: Assume that 1 and 2 are independent normal random variables such that
each ~(, 2 ) and define 1 = 1 + 2 and 2 = 1 2. Show that the random variables
1 and 2 are independent and normally distributed.

Since 1 and 2 are independent normal random variables, we know that their joint
( )2 ( )2
1 1 2 1 2
density function is 1 2 (1 , 2 ) = 1 (1 )2 (2 ) = [ 2 ][ 22 ] =
2 2

1
1 [(1 )2 +(2 )2 ]
22 . We have the transformation 1 = 1 + 2 and 2 = 1 2 ,
22
1 +2 1 2
which can be solved to obtain 1 = and 2 = . This allows us to calculate
2 2

1/2 1/2 1
the Jacobian = [ ] = 2, so we can compute the joint density
1/2 1/2
1 + 2 2
1 +2 1 2 1 1 (( 1 2 ) +( 1 2 ) )
1 2 (1 , 2 ) = 1 2 ( , ) || = 2 [22 22 2 2 ]. After
2 2

1 1
1 [ 2]2 [ ]2
simplifying this expression, we have 1 2 (1 , 2 ) = 42 42 1 42 2 . Since

the marginal densities can be separated, this shows that 1 and 2 are independent
and normally distributed. Moreover, we see that 1 ~(2, 2 2 ) and 2 ~(0,2 2 ).

Question #12: The distance in feet by which a parachutist misses a target is = 12 + 22


where 1 and 2 are independent with each ~(0,25). Find the probability ( 12.25).

We wish to find ( 12.25) = (12 + 22 12.25) = [12 + 22 (12.25)2 ] =

[(1 0)2 + (2 0)2 (12.25)2 ] = [(1 )2 + (2 )2 (12.25)2 ] =


(1 )2 (2 )2 (12.25)2 ( )2 (12.25)2 ( 0)2 (12.25)2
[ + ] = [2=1 ] = [2=1 ].
2 2 2 2 2 2 2
( 0)2 (12.25)2 (12.25)2
Since = 0 and 2 = 25, we have [2=1 ] [ 2 (2) ]
25 25 25

[ 2 (2) 6] = 0.95. Note that we have used Corollary 8.3.4 to transform the
2
question into one using the chi-square distribution, since =1 ( ) ~ 2 (). This is

2
because ~(, 2 ) implies that
~(0,1) so (
) ~ 2 (1) and that the sum of

independent chi-square distributed random variables is distributed 2 ().


Chapter #8 Statistics and Sampling Distributions

Question #8: Suppose that and are independent and distributed ~ 2 () and ~ 2 ().
Is the random variable = distributed chi-square if we have > ?

No. The random variable = can clearly take on negative values, whereas as a
random variable following the chi-square distribution must be positive.

Question #9: Suppose that ~ 2 (), = + ~ 2 ( + ) and that and are


independent random variables. Use moment generating functions to show that ~ 2 ().

We know that if some ~ 2 (), then its MGF is given by () = (1 2)/2. We


thus have () = (1 2)/2 and () = + () = (1 2)(+)/2 . Since
and are independent, we know that + () = () (), which implies that
+ () (12)(+)/2
() = (+) () = () = = (12)/2
= (1 2)/2. Thus, we
()

have that = is distributed chi-square with degrees of freedom.

Question #14: If ~(), find the distribution of the random variable 2 .

We know that if ~(0,1) and ~ 2 () are independent random variables, then the

distribution of = is Students t distribution. But then we can square this to
/

2 2 /1
produce 2 = / = , which makes it clear that 2 ~(1, ). The reason for this is
/

that we know if some ~(0,1), then 2 ~ 2 (1). Moreover, we are already given that
~ 2 (). Combining these results with the fact that if some 1 ~ 2 (1 ) and 2 ~ 2 (2 )
/
are independent, then the random variable = 1 /1 ~(1 , 2 ). Therefore, 2 follows
2 2

the F distribution with 1 and degrees of freedom whenever ~().


Question #15: Suppose that ~(, 2 ) for = 1, , and ~(0,1) for = 1, . . , and
that all variables are independent. Find the distribution of the following random variables.

a) 1 2 ~( , 2 + 2 ) (0,2 2 )

b) 2 + 23 ~( + 2, 2 + 4 2 ) (3, 5 2 )

c) 12 ~ 2 (1) since the square of a standard normal random variable is chi-square.

1 2 1 2
d) 2
~( 1) since 1 2 ~(0,2 2 ) implies that 2
~(0,1) and dividing this

by the sample standard deviation of the Z sample makes it clear that ~( 1).

( ) (1)2
e) ~( 1) since = / ~(0,1), = ~ 2 ( 1) and we can write
2

( )
= = ~( 1) by the definition of the t distribution (see above).
/(1)

f) 12 + 22 = 2 (1) + 2 (1)~ 2 (1 + 1) 2 (2) since we can simply add the


parameters for a sum of independent chi-square random variables.

g) 12 22 the distribution is unknown.

1 1 1 1
h) ~(1) since = 22 ~ 2 (1) and we can write = = ~(1).
22 22 22 /1 /1

12 12 1 /1
i) ~(1,1) since 1 = 12 ~ 2 (1), 2 = 22 ~ 2 (1) and we have = = ~(1,1).
22 22 2 /1

1
j) ~(1,0) since we can generate the joint transformation = 1 and = 2 ,
2 2

1
calculate the joint density (, ) and integrate out to find () = (2 +1).

k) the distribution is unknown.

( )
l) ~() since = / ~(0,1) and = =1 2 ~ 2 () and we can write the

2
=1

)
(
( )
expression =
= ~() by the definition of the distribution.
/
2
=1

=1
2

( )2 ( )2
m) =1 + =1( )2 ~ 2 ( + 1) since =1 ~ 2 () by Corollary
2 2
(1) 2
8.3.4 and =1( )2 = ( 1)2 = 12 ~ 2 ( 1) by Theorem 8.3.6. Thus,

we have the sum of two chi-square random variables so we sum the parameters.

1 1 1 2
n) + =1 ~ (2 , 2 + ) since ~ (, ) implies that the random variable
2

1 1 1 1 21
= =1 (2 ) ~ (=1 (2 ) , =1 (2 ) 2 ) (2 , 2 ). Also, we have
2

1 1
=1 = ~ (0, ) so the distribution of their sum is normal and we sum their

1 1 1
respective means and variances to conclude that 2 + =1 ~ (2 , 2 + ).

2
o) 2 ~ 2 (1) since ~(0,1), so it must be that () = 2 ~ 2 (1).

(1) 2
=1( )
p)
(1)2 =1( )2
~( 1, 1) since we can simplify the random variable as

12 2
=1( )
(1) 2
=1( ) 12 2

(1) =1( )2
2
= 1
2
= 2 2 and 12 2 ~ 2 ( 1) and 2 2 ~ 2 ( 1). We
2
=1( )
1

thus have the ratio of two chi-square random variables over their respective degrees
of freedom, which we know follows the F distribution.
Question #18: Assume that ~(0,1), 1 ~ 2 (5) and 2 ~ 2 (9) are all independent. Then

compute the probability that a) (1 + 2 < 8.6), b) ( < 2.015), c) ( > 0.6112 ),
1 /5


d) (1 < 1.45) and e) find the value of such that ( +
1
< ) = 0.9.
2 1 2

a) Since 1 ~ 2 (5) and 2 ~ 2 (9), we know that 1 + 2 ~ 2 (14). This allows us to


compute (1 + 2 < 8.6) = 0.144 using the tables for the chi-square distribution.

b) We know that if ~(0,1) and some ~ 2 () are independent random variables,



then = follows the t distribution with degrees of freedom. We thus have that
/


= ~(5), so we can compute ( < 2.015) = 0.95 using the t-table.
1 /5 1 /5


c) We wish to compute ( > 0.6112 ) = ( > 0.611) = ( 3 > 0.611(3)) =
2 2


( > 1.833) = 0.05, from using the t-table since we know that ~(9).
2 /9 2 /9

/5 9 /5
d) We wish to compute (1 < 1.45) = (1 /9 < 1.45 (5)) = (1 /9 < 2.61). We know
2 2 2

that if some 1 ~ 2 (1 ) and 2 ~ 2 (2 ) are independent, then the random variable


/ /5
given by = 1 /1 ~(1 , 2 ). We thus have that 1 /9 ~(5,9), so we can therefore use
2 2 2

/5
the F-table to compute the desired probability as (1 /9 < 2.61) = 0.9.
2

1 1 +2 1 1
e) We wish to compute sich that ( + < ) = ( > ) = (1 + 2 > ) =
1 2 1 1

1 /9 5 1 /9
(2 > 1) = (2/5 > 9 ( 1)) = 0.9. But we know that = 2/5 ~(9,5) so we
1 1 1

can use tables to find that ( > 0.383) = 0.9. This means that we must solve the
5 1 1 9 1
equation 9 ( 1) = 0.383 = 5 (0.383) + 1 = 9 = 0.592.
(0.383)+1
5
1 1
Question #19: Suppose that ~(1). a) Show that the CDF of is () = 2 + ()
1
and b) show that the 100 percentile is given by (1) = [ ( 2)].

+1 +1
( )
2 2
a) If some ~(), then its density is given by () =
2
(1 + ) . When = 1,
( )
2

(1) 1 1 1
we have () = 1 (1 + 2 )1 = since (1) = 1 and (2) = . We thus
( ) 1+ 2
2

1 1
have that () = 1+ 2 when = 1, which is the density of a Cauchy random variable.
1 1
To find the cumulative distribution, we simply compute () = 1+ 2 =
1 1 1 1
[()] = (() ( )) = () + .
2 2

b) The 100 percentile is the value of such that () = . From the work above,
1 1 1 1
we have () + 2 = () = ( 2) = [( 2) ]. This

1
proves that the 100 percentile is given by (1) = [( 2) ].
Chapter #8 Statistics and Sampling Distributions

Question #22: Compute ( ) for > 0 if we have that ~(, ).

(+)
Since ~(, ), its PDF is () = ()() 1 (1 )1 whenever 0 < < 1

and > 0, > 0. Then using the definition of expected value, we can compute
(+) 1 (+) (+)()
( ) = () = ()() 0 1 (1 )1 = ()() =
(++)
(+)(+) (+) 1 1
since we have (1 )1 = 1 so we can solve for the
()(++) ()() 0

()() 1
integral to conclude that = 0 1 (1 )1 . In this case, we are solving
(+)
1 1 (+)()
that 0 1 (1 )1 = 0 +1 (1 )1 = . Therefore, all of
(++)

the moments of the beta distribution for some fixed > 0 can be written in terms of
the gamma function, which can be evaluated numerically.

Question #24: Suppose that ~ 2 (). Use moment generating functions to find the limiting

distribution of the transformed random variable as .
2

This result follows directly from the Central Limit Theorem. If we let = =1
where ~2 (1) for = 1, , , then ~ 2 () so that ( ) = , ( ) = 2 and
( )
( ) = 2. Therefore, = ~(0,1) as . We will now prove
( ) 2

this result using moment generating functions. By the definition of MGFs, we have
2 2
( )
[ ] () = [ 2 ] = [ 2
2 ]= [ 2 ]= ( )=
2
2




2
2
2
2
2
2
(1 ) = (1 ) . In order to evaluate lim [ ] (), we first
2 2

take logarithms and then exponentiate the result. This implies that ln [M[Y ] (t)] =
2

2 2
2 2 2 2 2 2
ln [ (1 ) ] = ln [e ] + ln [(1 ) ] = 2 ln (1 ). From

2 3 2
here, we use the Taylor series ln(1 ) = for = to evaluate
2 3

2 2
the limit, which then gives lim ln [M[Y ] (t)] = lim [ 2 ln (1 )]
2
3
2 2 2 3 22 2 2 2 3 2
lim [ ( 3 )] = lim [ + + + +] =
2 2 3
3 2

2 3 2 2
lim [ 2 + +] = + 0 + . This result therefore implies that the limit
3 2

lim ln[M Y (t)] 2


[ ]
lim [ ] () = 2 = 2 , which is the moment generating function of
2

a random variable that follows a standard normal distribution. This proves that the

random variable ~(0,1) as , just as is guaranteed by the CLT.
2
Chapter #9 Point Estimation

Question #1: Assume that 1 , , are independent and identically distributed with
common density (; ), where > 0 is an unknown parameter. Find the method of
moments estimator (MME) of if the density function is a) (; ) = 1 for 0 < < 1,
b) (; ) = ( + 1) 2 whenever > 1, and c) (; ) = 2 whenever > 0.

1
a) We begin by computing the first population moment, so () = 0 (; ) =
1 1 1
0 ( 1 ) = 0 = +1 [ +1 ]0 = +1 (1 0) = +1. We therefore have

() = +1. Next, we equate the first population moment with the first sample
1
moment, which gives 1 = 1 +1 = =1 +1 = . Finally, we replace

by and solve the equation +1 = for , which implies that = 1.


b) Just as above, we first compute () = 1 (; ) = 1 [( + 1) 2 ] =
+1 +1 +1 +1
( + 1) 1 1 = [ ]1 = [0 1] = . Thus, we have () =

+1 1 +1 1
which means that 1 = 1 = =1 = and = 1.

2
c) We have () = 0 (; ) = 0 [ 2 ] = 2 0 2 = =

after doing integration by parts. We can also find this directly by noting that the
1
density (, ) = 2 suggests that ~ ( , 2). This then implies that
1 2 2 1 2
() = = 2 = . We therefore set 1 = 1 such that = =1 or = , and
2
then solve for the method of moments estimator, which is given by = .

Question #2: Assume that 1 , , are independent and identically distributed. Find the
method of moments estimator (MME) of the unknown parameters if the random sample
1
comes from a) ~(3, ), b) ~(2, ), c) ~ (, 2), and d) ~(, ).

3
a) Since ~(3, ), we know that () = = . Equating this with the first sample
3 3
moment gives 1 = 1 = , so the estimator is = .

b) Since ~(2, ), we know that () = = 2. Equating this with the first



sample moment gives 1 = 1 2 = , so the estimator is = 2 .

1 1 1
c) Since ~ (, 2), we know that () = (1 + ) = (1 + 1/2) = (3) =

(3 1)! = 2. Thus, we have 1 = 1 2 = , so the estimator is = 2 .

2 2
d) Since ~(, ), we have 1 = 1 and 2 = 2 + 12 = (2)(1)2 + (1)2. This
2 2 1
means that 1 = 1 = 1 = and 2 = (2)(1)2 + (1)2 = 2 = =1 2 . We

must solve for the unknown parameters and in terms of the two sample moments
1
and =1 2 . From the first equation, we can solve to find = ( 1) and

2 (1)2 2 (1)2 1
substitute into the second equation to find (2)(1)2
+ (1)2
= =1 2
2
2 1 1 22
(2)
+ 2 = =1 2 2 (2 + 1) = =1 2 2 = =1
2
. But this means

2 (2 2) = ( 2) =1 2 2 2 2 2 = =1 2 2 =1 2 , so that
2 2 =1 2 = 2 2 2 =1 2 (2 2 =1 2 ) = 2 2 2 =1 2 .
2 2 2 =1
2
Finally, we divide through to find = 2 . Plugging in to the other equation
2 2 =1

2 2 2 2
implies that = ( 1) = ( 2 2 =1 2 1) , so that the two method of
=1

2 2 2 =1
2
2 2 2 2
moments estimators are =
2 =1
2 2 and = ( 2 2 =1 2 1) .
=1
Question #3: Assume that 1 , , are independent and identically distributed with
common density (; ), where > 0 is an unknown parameter. Find the maximum
likelihood estimator (MLE) for when the PDF is a) (; ) = 1 whenever 0 < < 1,
b) (; ) = ( + 1) 2 whenever > 1, and c) (, ) = 2 whenever > 0.

a) We first find the likelihood function based on the joint density of 1 , , , which is
() = (1 ; ) ( ; ) = =1 ( ; ) = =1 1 = (1 )1 . Next, we
construct the log likelihood function, since it is easier to differentiate and achieves a
maximum at the same point as the likelihood function. This gives ln[()] =
ln[ (1 )1 ] = ln() + ( 1)[ln(1 ) + + ln( )], which we differentiate

so ln[()] = [ ln() + ( 1) =1 ln( )] = + =1 ln( ). We then solve


for the value of which makes the derivative equal zero, so + =1 ln( ) = 0 =

. Since it is clear that the second derivative of ln[()] is negative, we have
=1 ln( )

found that the maximum likelihood estimator is = . (Note that we
=1 ln( )

must capitalize the from when presenting the estimator.)

b) We have () = =1 ( ; ) = =1( + 1)2 = ( + 1) (1 )2 so that


ln[()] = ln[( + 1) (1 )2 ] = ln( + 1) ( + 2) =1 ln( ). Then we

find ln[()] = [ ln( + 1) ( + 2) =1 ln( )] = +1 =1 ln( ). Finally,


we must solve +1 =1 ln( ) = 0 = 1. Since the second derivative
=1 ln( )

of ln[()] will be negative, we have found that = 1.
=1 ln( )

c) We have () = =1 ( ; ) = =1 2 = 2 (1 ) (1 ++) so that



ln[()] = ln[ 2 (1 ) (=1 ) ] = 2 ln() + =1 ln( ) =1 . Then we
2
have ln[()] = [2 ln() + =1 ln( ) =1 ] = =1 . Finally, we

2 2 2 2
must solve =1 = 0 = = , which implies that = .
=1
Question #4: Assume that 1 , , are independent and identically distributed. Find the
maximum likelihood estimator (MLE) of the parameter if the distribution is a) ~(1, ),
1
b) ~() , c) ~(3, ), d) ~(, 2), e) ~ (, 2), and f) ~(1, ).

a) Since the density of ~(1, ) is (; ) = (1) (1 )1 = (1 )1 , we



have () = =1 ( ; ) = =1 (1 )1 = =1 (1 )=1 and then

ln[()] = ln[=1 (1 )=1 ] = (=1 ) ln() + ( =1 )ln(1 ).

Differentiating gives ln[()] = [(=1 ) ln() + ( =1 )ln(1 )] =


=1
=1
=1
=1
=0 = (1 ) =1 = ( =1 )
1 1


=1
=1 =1 = =1 =1 = = = . Since the

second derivative will be negative, we have found that = .

b) Since (; ) = (1 )1 , we have () = =1 ( ; ) = =1 (1 ) 1 =

(1 )[=1 ] and then the log likelihood function becomes ln[()] =

ln[ (1 )[=1 ] ] = ln() + {[=1 ] }ln(1 ). Differentiating gives
[
=1 ]
ln[()] = [ ln() + {[=1 ] } ln(1 )] = . Equating this
1

[
=1 ] [
=1 ]
with zero implies =0= (1 ) = [=1 ]
1 1
1 1
= =1 = =1 = = 1 = . Since the second
=1
=1

1
derivative will be negative, we have found that = .

(1)!
c) Since ~(3, ), we have (; ) = (1
31
)3 (1 )3 = 2(3)! 3 (1 )3 =
(1)(2) 1
3 (1 )3 = 2 ( 2 3 + 2)3 (1 )3. This implies that the
2
1
likelihood function () = =1 ( ; ) = =1 [2 (2 3 + 2)3 (1 ) 3 ] =

2 (2 3 + 2) 3 (1 )[=1 ]3 , so the log likelihood function ln[()] =

ln[2 (2 3 + 2) 3 (1 )[=1 ]3 ] = ln(2) + ln(2 3 + 2) +

3 ln() + {[=1 ] 3} ln(1 ). Differentiating this then gives ln[()] =

3
[ ln(2) + ln(2 3 + 2) + 3 ln() + {[=1 ] 3} ln(1 )] =

[
=1 ]3 3 3
= 0 = . Therefore, we have that = .
1


1 1
d) Since ~(, 2), we have (; ) = 2 (2) 21 = 2 . This means
1
1 1
() = =1 ( ; ) = =1 [2 ] = 2 (1 ) =1 so that ln[()] =
1
1 1
ln [2 (1 ) =1 ] = 2 ln() + =1 ln( ) =1 . Differentiating
1 2 1
gives ln[()] = [2 ln() + =1 ln( ) =1 ] = + 2 =1 . Then

1 2 1 2
=1
we solve =1 = 0 2 =1 = =1 = 2 2 = = 2.
2 2

Since the second derivative will be zero, we have found that = 2 .

1 1
1 1 1 1
e) Since ~ (, 2), we have (; ) = 2 2
= 2 2
. Thus, we have
1
1 1 1
1 1 1
2
() = =1 ( ; ) = =1 [ 2
] = (1 2
2 ) =1 so that the
2 2 2
1
1 11
1 2
log of the likelihood function is ln[()] = ln [ (1 2
2 ) =1 ] =
2 2

1 1
1
ln(2) 2 ln() + =1 2 =1 2 . Differentiating this gives ln[()] =

1 1
1
=1
[ ln(2) 2 ln() + =1 2 =1 2 ] = + 3 . Setting this equal to
2
22


=1
=1 3
zero and solving implies 2 + 3 =0 3 = 2 2 =1 = 2 2
22 22
2 2

=1
=1

= =[ ] . Therefore, we have found = [ =1 ] .


f) Since ~(1, ), we have (; ) = (1+)+1 so the likelihood function is () =

=1 ( ; ) = =1[(1 + )1 ] = =1(1 + )1. Then we have that


ln[()] = ln[ =1(1 + )1 ] = ln() ( + 1) =1 ln(1 + ). Next, we

compute the derivative so that ln[()] = [ ln() ( + 1) =1 ln(1 + )] =


=1 ln(1 + ). Finally, we set this result equal to zero and solve for to find that


=1 ln(1 + ) = 0 = =1 ln(1 + ) = . Since the second
=1 ln(1+ )

derivative will be negative, we have found that = .
=1 ln(1+ )
Chapter #9 Point Estimation

Question #7: Let 1 , , be a random sample from ~(). Find the Maximum
1 1
Likelihood Estimator (MLE) for a) () = , b) () = , and c) ( > ) = (1 )
2

where {1,2, }. Do it both ways for each part to verify the Invariance Property.

a) We begin by computing by first calculating the likelihood function () =



=1 ( , ) = =1(1 ) 1 = (1 )=1( 1). Then we can compute

ln[()] = ln[ (1 )=1( 1) ] = ln() + [=1( 1)] ln(1 ) and then

=1( 1)
differentiate ln[()] = [ ln() + [=1( 1)] ln(1 )] = .
1


=1
=1
Setting equal to zero and solving for gives =0=
1 1

(1 ) = =1 = =1 = =1 . This then
1
implies that = = . Since the second derivative will be negative, we have
=1

1
found that = . By the Invariance Property of the Maximum Likelihood
1 1 1
Estimator, we have that ( ) = = = as the MLE for () = () = .
1/

1 1 1 11/
b) Since = and () = () = , then ( ) = = (1/)2 = ( 1), by
2 2

the Invariance Property of the Maximum Likelihood Estimator.

1
c) Since = and () = (1 ) , then ( ) = (1 ) = (1 1/) , by the

Invariance Property of the Maximum Likelihood Estimator.

Question #12: Let 1 , , be a random sample from ~(, 2 ). Find the Maximum
Likelihood Estimator (MLE) for a) the parameters and 2 , and b) (, 2 ) = ().

1
1 1 (ln )2
a) We have that the density function of is (; , 2 ) = 22 22 , so that

the likelihood function of the sample is given by (, 2 ) = =1 ( , ) =


1 1
1 1 (ln )2 (ln )2
=1 [
22
22 ] = (2 2 ) 2 (1 )1 22 =1 . Then the log
1
(ln )2
likelihood function is ln[(, 2 )] = ln [(2 2 ) 2 (1 )1 22 =1 ]=
1
2 ln(2 2 ) =1 ln( ) 22 =1(ln )2. We differentiate this with respect

to both parameters and set the resulting expressions equal to zero so we can
1
simultaneously solve for the parameters, so ln[(, 2 )] = 2 =1(ln ) = 0

1
and ln[(, 2 )] = 22 + 24 =1(ln )2 = 0. The first equation implies
2
1
=1(ln )
(ln ) = 0 =1(ln ) = 0 =1(ln ) = 0 =
2 =1
1 1
and the second 22 + 24 =1(ln )2 = 0 24 =1(ln )2 = 22
1 1
=1(ln )2 = 2 = =1(ln )2. Thus, we have that the maximum
2
1 1
likelihood estimators are = =1(ln ) and
2
= =1(ln )2.

b) We know that ~(, 2 ) if and only if = ln() ~(, 2 ). But = ln() if and
2 12 2
only if = implies that () = ( ) = (1) = (1)+ 2 = + 2 . By the
Invariance Property of the Maximum Likelihood Estimator, we can conclude that
1 2 2
2 )
( , 2 ) = ( , = +2 is the MLE for (, 2 ) = () = + 2 .

Question #17: Let 1 , , be a random sample from ~( 1, + 1). a) Show that


(1) +()
the sample mean is an unbiased estimator for ; b) show that the midrange = 2

is also an unbiased estimator for the parameter ; c) which one has a smaller variance?

a) To show that is an unbiased estimator for , we must verify that () = . But we


1 1 1 1 (+1)+(1)
see that () = ( =1 ) = (=1 ) = =1 ( ) = =1 [ ]=
2
1 1
= = , so it is clear that the sample mean is an unbiased estimator for .
=1
(1) +() 1 1
b) We have that () = ( ) = 2 ((1) + () ) = 2 [((1) ) + (() )]. We
2

must therefore compute the mean of the smallest and largest order statistics, which
we can do by first finding their density functions. We first note that since
1 1
~( 1, + 1), then () = (+1)(1) = 2 whenever ( 1, + 1) and
1 1 (1)
() = 1 2 = 2 []1 = whenever ( 1, + 1). Then the
2

distribution function of () is given by () = (() ) = ( ) =


(1) (+1) (+1)1
( ) = so the density function of () is () = () = .
2 2 2
+1
We can then compute the mean of () as (() ) = 1 () =
+1 (+1)1
1 2
. This integral can be calculated by completing the substitution

=+1 so that = and = + 1. This then implies


+1 (+1)1 2 2
1 = 2 0 ( + 1)1 = 2 0 + 1 1 =
2
2
+1 2+1 2 2 2
[ + ] = 2 [ +1 + ] = +1 + 1. We can similarly compute
2 +1 0
2
that the expected value of the first order statistic is ((1) ) = +1 + + 1. Thus, we
1 1 2 2
have that () = 2 [((1) ) + (() )] = 2 [( +1 + 1) + ( + +1 1)] =
1
[2] = , so the midrange is also an unbiased estimator for the parameter .
2

1 1 1
c) We have that () = ( =1 ) = 2 (=1 ) = 2 =1 ( ) =
1 [(+1)(1)]2 1 4 1 1 1 1
=1 = 2 =1 12 = 2 =1 3 = 2 3 = 3. Similarly, we can calculate
2 12
(1) +() 1
that () = ( ) = 4 ((1) + () ).
2
Question #21: Let 1 , , be a random sample from ~(1, ). a) Find the Cramer-Rao
lower bound for the variances of all unbiased estimators of ; b) Find the Cramer-Rao lower
bound for the variances of unbiased estimators of (1 ); c) Find a UMVUE of .

2
[ ()]
a) We have that = 2 , so we compute each of these parts individually.
[( ln (;)) ]

First, we have () = , so () = 1 and [ ()]2 = 1. Next, since ~(1, ) we


know that () = (1) (1 )1 = (1 )1 and so (; ) = (1 )1 ,
which means that ln (; ) = ln() + (1 ) ln(1 ). Taking the derivative and
1 (1)(1) +
squaring gives ln (; ) = 1 = = = (1)
(1) (1)

2 2 2 2+2 2
( ln (; )) = ((1)) = . Finally, we compute [( ln (; )) ] =
2 (1)2

2 2+2 1 1
[ ] = 2 (1)2 ( 2 2 + 2 ) = 2 (1)2 [( 2 ) 2() + 2 ] =
2 (1)2
1 1
[((1 ) + 2 ) 2() + 2 ] = [ 2 + 2 22 + 2 ] =
2 (1)2 2 (1)2
1 (1) 1 (1)
[ 2 ] = = (1). Thus, we have found that = .
2 (1)2 2 (1)2

b) Now, () = (1 ) = 2 , so [ ()]2 = [1 2]2 = 1 4 + 42 , so the


(14+42 )(1)
Cramer-Rao Lower Bound becomes = .

1 1
c) Since for the estimator = , we have ( ) = () = ( =1 ) = (=1 ) =
1 1 1 1
=1 ( ) = =1 = = and then ( ) = () = ( =1 ) =

1 1 1 1 (1)
(=1 ) = 2 =1 ( ) = 2 =1 (1 ) = 2 (1 ) = =
2

, we can conclude that = is a Uniform Minimum Variance Unbiased


Estimator (UMVUE) for the parameter in ~(1, ).
Question #22: Let 1 , , be a random sample from ~(, 9). a) Find the Cramer-Rao
lower bound for the variances of unbiased estimators of ; b) is the Maximum Likelihood
Estimator = a UMVUE for the parameter ?

a) We have () = , so () = 1 and [ ()]2 = 1. Next, since ~(, 9) we know that


1 1
1 ()2 1 ()2
the density is () = 18 , so that we have (; ) = 18 and
18 18
1 1
ln (; ) = 2 ln(18) 18 ( )2. We then differentiate twice to obtain
1 2 1
ln (; ) = ( ) ln (; ) = . Since we have shown than the
9 2 9

2 2 1 1
expression [( ln (; )) ] = [2 ln (; )] and ( 9) = 9, we can
9
conclude that the Cramer-Rao Lower Bound is = . This then means that
9
() for any unbiased estimator of the parameter in ~(, 9).

1 1 1
b) We first verify that ( ) = () = ( =1 ) = (=1 ) = =1 ( ) =
1 1
= = , so = is an unbiased estimator for . Then we compute
=1
1 1 1
( ) = () = ( =1 ) = 2 (=1 ) = 2 =1 ( ) =
1 1 9
=1 9 = 9 = = , so that = a UMVUE for the parameter .
2 2

Question #23: Let 1 , , be a random sample from ~(0, ). a) Is the Maximum


Likelihood Estimator (MLE) for unbiased?; b) is the MLE also a UMVUE for ?

a) We first find by noting that since ~(0, ), then its density function is () =
1 2 1 2
1 1
2 so the likelihood function is () = =1 (; ) = =1 2 =
2 2
1
2 1
(2) 2 2 =1 and then ln[()] = 2 ln(2) 2 =1 2 . Next, we
2 1 1
differentiate so that ln[()] = 4 + 22 =1 2 = 0 22 =1 2 = 2

1
=1 2 = = =1 2 . Since the second derivative is negative, we have

1 1
= =1 2 . We verify unbiasedness by computing ( ) = ( =1 2 ) =
1 1 1 1 1
(=1 2 ) = =1 (2 ) = =1( + 02 ) = =1 = = .

b) The estimator will be a UMVUE for if ( ) = . We therefore begin


by computing the Cramer-Rao Lower Bound. First, we have () = , so () = 1
1 2
1
and [ ()]2 = 1. Next, since we previously found that (; ) = 2 , then we
2
1 1 1 2
have ln (; ) = 2 ln(2) 2 2 so that ln (; ) = 2 + 22. We then find
2 1 2 2 1 2
ln (; ) = 22 23 = 22 3 and take the negative of its expected value to
2
1 2 1 1 1 1 1 1 1
obtain [22 3 ] = [22 3 ( 2 )] = 3 ( + 02 ) 22 = 2 22 = 22. This
22
implies that = . We must verify that the variance of our estimator is equal

1
to this lower bound, so we compute ( ) = ( =1 2 ) =
1 1
(=1 2 ) = 2 =1 (2 ). In order to compute (2 ), we use the formula
2

(2 ) = (4 ) [(2 )]2 = = 3 2 2 = 2 2 by finding the moments of


using the derivatives of the Moment Generating Function at = 0. Then we have that
1 1 1 2 2
( ) = 2 =1 (2 ) = 2 =1 2 2 = 2 2 2 = = , which verifies

that the Maximum Likelihood Estimator is a UMVUE for the parameter .


Chapter #9 Point Estimation

Question #31: Let and be the MLE and MME estimators for the parameter , where
1 , , is a random sample of size from a Uniform distribution such that ~(0, ).
Show that a) is MSE consistent, and b) is MSE consistent.

a) We first derive the MLE for . Since ~(0, ), we know that the density
1
function is (; ) = for (0, ). This allows us to construct the likelihood

function () = =1 ( ; ) = =1 1 = whenever 1: 0 and : and


zero otherwise. Then the log likelihood function is ln[()] = ln() so that

ln[()] = < 0 for all and . This means that () = is a decreasing

function of for : since its first derivative is always negative, so we can


conclude that the MLE is the largest order statistic, so = : . Next, we show that
this estimator is MSE consistent, which means verifying that lim [: ]2 = 0.

But then we can see that lim [: ]2 = lim [:


2
2: + 2 ] =

2 2 ].
lim [(: ) 2(: ) + In order to compute this limit, we must find the first

and second moments of the largest order statistic. But we already know that () =
1 1
(; )(; )1 = () = , so we can calculate (: ) = 0 () =


1 +1 +1
0 = 0 = [ +1 ] = ( +1 0) = +1 and 2 )
(: =
0

1 +2 +2
0 2 () = 0 2
= 0 +1 = [ +2 ] = ( +2 ) = +2 2 .
0

2
Thus, we have lim [(: ) 2(: ) + 2 ] = lim [+2 2 2 +1 + 2 ] =

2
lim [ 2 +1 2 + 2 ] = 2 2 2 + 2 = 0. That this limit is zero verifies that
+2

the maximum likelihood estimator = : is mean square error (MSE) consistent.


b) We first derive the MME for . Since ~(0, ), we know that () = 2 so we
1
can equate 1 = 1 = =1 = = 2. This means that = 2.
2 2
Next, we show that this estimator is MSE consistent, which means verifying that
lim [2 ]2 = 0. But we have lim [2 ]2 = lim [4 2 4 + 2 ] =

1
lim [4( 2 ) 4() + 2 ]. We therefore compute () = ( =1 ) =

1 1 1 1
(=1 ) = =1 ( ) = =1 2 = 2 = and ( 2 ) = () + ()2 =
2

1 2 1 2 1 2 2 1 2 2
( =1 ) + (2) = 2 =1 ( ) + = 2 =1 12 + = 2 12 + =
4 4 4
2 2 (3+1) 2
+ = . Thus, we can compute that lim [4( 2 ) 4() + 2 ] =
12 4 12
(3+1)2 (3+1)2
lim [4 4 + 2 ] = lim [ 2 2 + 2 ] = 2 2 2 + 2 = 0. That
12 2 3

this limit is zero verifies that the MME = 2 is mean square error (MSE) consistent.

Question #29: Let 1 , , be a random sample of size from a Bernoulli distribution such
that ~(1, ). For a Uniform prior density ~(0,1) and a squared error loss
function (; ) = ( )2 , a) find the posterior distribution of the unknown parameter ,
b) find the Bayes estimator of , and c) find the Bayes risk for the Bayes estimator of above.

(1 ,, ;)()
a) We have that the posterior density is given by P| () = , where
(1 ,, ;)()

(1 , , ; ) = =1 ( ; ) = =1 (1 )1 = =1 (1 )=1 since
the random variables are independent and identically distributed and () = 1 since
1
the prior density is uniform. We then express 0 =1 (1 )=1 in terms of
the beta distribution. Recall that if ~(, ), then its density is (; , ) =
1 ()()
1 (1 )1 where (, ) = . Next, we must define = =1 and
(,) (+)
1
= =1 , so we can write 0 =1 (1 )=1 = ( + 1, + 1) =

=1 (1)=1
(=1 + 1, =1 + 1). Thus, we have P| () = ( =
=1 +1,=1 +1)

1
(1 ) , which verifies that the random variable given by
(+1,+1)

P|~(=1 + 1, =1 + 1) ( + 1, + 1).

b) For some random variable ~(, ), we know that () = +. Moreover,

Theorem 9.5.2 states that when we have a squared error loss function, the Bayes
estimator is simply the expected value of the posterior distribution. This implies that

=1 +1
=1 +1
the Bayes estimator of is given by = = .
=1 +1+=1 +1 +2


=1 +1
c) The risk function in this case is () = [( )2 ], where = is the Bayes
+2

Estimator derived above. We would therefore substitute for in the risk function,
1
evaluate the expected value of that expression and then compute 0 [( )2 ] .

Question #34: Consider a random sample of size from a distribution with discrete
probability mass function (; ) = (1 ) for {0,1,2, }. a) Find the MLE of the
1
unknown parameter . b) Find the MLE of = . c) Find the CRLB for variances of all

1
unbiased estimators of the parameter above. d) Is the MLE of = a UMVUE? e) Is the

1
MLE of = also MSE consistent? f) Compute the asymptotic distribution of the MLE of

1
= . g) If we have the estimator = +1 , then find the risk functions of both and

()2
using the loss function given by (; ) = .
2 +


a) We have () = =1 ( ; ) = =1(1 ) = (1 )=1 , so that

=1
ln[()] = ln() + =1 ln(1 ). Then we have that ln[()] = .
1
1
Setting this equal to zero and solving for gives the estimator = 1+.

1
b) By the Invariance Property, we have that the estimator is = = .

1 1 1 1
c) Since = () = = 1, then () = 2 and [ ()]2 = 4. Then since

(; ) = (1 ) , we can compute ln (; ) = ln() + ln(1 ) so that


1 2 1
ln (; ) = 1 and ln (; ) = 2 (1)2. We can then compute the
2

2
negative of the expected value of this second derivative so that [2 ln (; )] =
1 1 1 1 (1)2 +(1) 12+2 +2 1 1
+ (1)2 () = 2 + (1)2 = = = 2 (1)2 = 2 (1).
2 2 (1)2 2 (1)2

2 (1) 1
These results imply that = = .
4 2

1 1 1 1 1
d) We first verify that ( ) = () = ( =1 ) = =1 ( ) = = ,
1
so that the MLE is an unbiased estimator of = . Next, we compute ( ) =

1 1 1 1 1
() = ( =1 ) = 2 =1 ( ) = 2 2 = 2 = , which verifies
1
that = is the UMVUE for the parameter = .

1 2
e) To verify that = is MSE consistent, we must show that lim [ ] = 0.

1 2(1) (1) 2 2
But we can see that we have lim [ ] = lim [ 2 + 2 ] =

2(1) (1)2
lim [( 2 ) () + ], so we must compute the expectation of both the
2
1
mean and the mean squared. However, we already know that () = = since

1 1 2
is unbiased. Then ( 2 ) = () + ()2 = ( =1 ) + ( ) =
1 (1)2 1 1 (1)2 1 1 (1)2 1 (1)2
=1 ( ) + = 2 =1 + = 2 + = + .
2 2 2 2 2 2 2 2

2(1) (1) 1 (1) 2


2(1) 1 (1) 2 2
Thus lim [( 2 ) () + 2 ] = lim [ 2 + 2 + ]=
2

(1)2 2(1)2 (1)2
+ = 0. This shows that = is MSE consistent.
2 2 2

f) We use Definition 9.4.5, which states that for large values of , the MLE estimator is
1
distributed normal with mean = and variance . Since we previously found

1 1 1
that = 2
, we can conclude that ~ ( , 2 ).
g) Definition 9.5.2 states that the risk function is the expected loss () = [(; )].
()2 2 2+2
In this case, the loss function is (; ) = = . Therefore, for the
2 + 2 +
2 2 +2 1
estimator we compute () = [ ] = 2 + [( 2 ) 2() + 2 ] =
2 +
1 1 (1)2 1 1 1 2
[ 2 + 2 + 2 ] = 2 + [ 2 + 2 2 2 + 2 ] = (2 +) =
2 + 2
(1)/ 1 1
(+1)
= [(1)/+1] = [1/]2 = = . Similarly, for the estimator = +1 we

2
( ) 2( )+2 +
can compute () = [ +1 +1
] = = (+1)(+1)2 .
2 +

Question #36: Let 1 , , be a random sample of size from a Normal distribution such
that each ~(0, ). Find the asymptotic distribution of the MLE of the parameter .

1
From the previous assignment, we know that = =1 2 . We then use

Definition 9.4.5, which states that for large values of , the MLE estimator is
distributed normal with mean and variance . That is, we have that
~(, ). This means that we must compute the Cramer-Rao Lower Bound.
Since () = , then () = 1 and [ ()]2 = 1. Next, since we previously found that
1 2
1 1 1
(; ) = 2 , then we have ln (; ) = 2 ln(2) 2 2 so that
2
1 2 2 1 2 2 1 2
ln (; ) = 2 + 22. We then find 2 ln (; ) = 22 23 = 22 3 and take

1 2 1 1
the negative of its expected value to obtain [22 3 ] = [22 3 ( 2 )] =
1 1 1 1 1 22
3
( + 02 ) = 2 22 = 22. This implies that = . Combining these
22
2 2
facts reveals that the asymptotic distribution of the MLE is ~ (, ). We can

transform this to get a standard normal distribution by noting that the random


variable ~(0,1) or large values of . We could further reduce this by
2/



multiplying through by the constant so that ~(0, 2 ).
2/
Chapter #10 Sufficiency and Completeness

Question #6: Let 1 , , be independent and each ~( , ). Use the Factorization


Criterion to show that = =1 is sufficient for the unknown parameter .

Since each $X_i \sim BIN(m_i, p)$, we know that the probability mass function is given by $f(x_i; m_i, p) = \binom{m_i}{x_i} p^{x_i}(1-p)^{m_i - x_i} 1\{x_i = 0, \ldots, m_i\}$. We can then construct their joint probability mass function due to the fact that they are independent as $f(x_1, \ldots, x_n; m_i, p) = \prod_{i=1}^n f(x_i; m_i, p) = \prod_{i=1}^n \binom{m_i}{x_i} p^{x_i}(1-p)^{m_i - x_i} 1\{x_i = 0, \ldots, m_i\} = \left[\prod_{i=1}^n \binom{m_i}{x_i}\right] p^{\sum x_i}(1-p)^{\sum m_i - \sum x_i} \prod_{i=1}^n 1\{x_i = 0, \ldots, m_i\}$. If $c = \prod_{i=1}^n \binom{m_i}{x_i}$, then we have that this equals $c\left(\frac{p}{1-p}\right)^{\sum x_i}(1-p)^{\sum m_i} \prod_{i=1}^n 1\{x_i = 0, \ldots, m_i\}$. But then if we define $s = \sum_{i=1}^n x_i$, we have that $f(x_1, \ldots, x_n; m_i, p) = \left(\frac{p}{1-p}\right)^s (1-p)^{\sum m_i} \cdot c\prod_{i=1}^n 1\{x_i = 0, \ldots, m_i\} = g(s; p)h(x_1, \ldots, x_n)$. Since $g(s; p) = \left(\frac{p}{1-p}\right)^s (1-p)^{\sum m_i}$ does not depend on $x_1, \ldots, x_n$ except through $s = \sum_{i=1}^n x_i$ and $h(x_1, \ldots, x_n) = c\prod_{i=1}^n 1\{x_i = 0, \ldots, m_i\}$ does not involve $p$, the Factorization Criterion guarantees that $S = \sum_{i=1}^n X_i$ is sufficient for the unknown parameter $p$.
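The practical content of sufficiency can be illustrated numerically (this sketch is an addition, not part of the original solution): for two samples with the same value of $s = \sum x_i$, the ratio of joint likelihoods is free of $p$, so the data inform $p$ only through $s$. The same check applies verbatim to the factorizations in Questions #7, #12, and #13 below. The data values here are arbitrary.

```python
# Two binomial samples with equal totals produce a likelihood ratio that is
# constant in p, illustrating that S = sum(X_i) carries all information on p.
import numpy as np
from scipy.stats import binom

m = np.array([3, 5, 4])
x = np.array([1, 2, 3])   # sum = 6
y = np.array([3, 2, 1])   # same sum = 6

for p in (0.2, 0.5, 0.8):
    joint_x = binom.pmf(x, m, p).prod()
    joint_y = binom.pmf(y, m, p).prod()
    print(p, joint_x / joint_y)   # same ratio (3.0) for every p
```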

Question #7: Let $X_1, \ldots, X_n$ be independent and each $X_i \sim NB(r_i, p)$. This means that each has probability mass function $f(x_i; r_i, p) = \binom{x_i - 1}{r_i - 1} p^{r_i}(1-p)^{x_i - r_i}$ for $x_i = r_i, r_i + 1, r_i + 2, \ldots$ Find a sufficient statistic for the unknown parameter $p$ using the Factorization Criterion.

As in the question above, we have that $f(x_1, \ldots, x_n; r_i, p) = \prod_{i=1}^n f(x_i; r_i, p) = \prod_{i=1}^n \binom{x_i - 1}{r_i - 1} p^{r_i}(1-p)^{x_i - r_i} 1\{x_i = r_i, r_i + 1, \ldots\}$. After applying the product operator, this becomes $\left[\prod_{i=1}^n \binom{x_i - 1}{r_i - 1}\right] p^{\sum r_i}(1-p)^{\sum x_i - \sum r_i} \prod_{i=1}^n 1\{x_i = r_i, r_i + 1, \ldots\}$. Then if we define $c = \prod_{i=1}^n \binom{x_i - 1}{r_i - 1}$, this expression becomes $c\left(\frac{p}{1-p}\right)^{\sum r_i}(1-p)^{\sum x_i} \prod_{i=1}^n 1\{x_i = r_i, r_i + 1, \ldots\}$. Finally, if we let $s = \sum_{i=1}^n x_i$, we have that the joint mass function is $f(x_1, \ldots, x_n; r_i, p) = \left(\frac{p}{1-p}\right)^{\sum r_i}(1-p)^s \cdot c\prod_{i=1}^n 1\{x_i = r_i, r_i + 1, \ldots\} = g(s; p)h(x_1, \ldots, x_n)$. Since $g(s; p) = \left(\frac{p}{1-p}\right)^{\sum r_i}(1-p)^s$ does not depend on $x_1, \ldots, x_n$ except through $s = \sum_{i=1}^n x_i$ and $h(x_1, \ldots, x_n) = c\prod_{i=1}^n 1\{x_i = r_i, r_i + 1, \ldots\}$ does not involve $p$, the Factorization Criterion guarantees that $S = \sum_{i=1}^n X_i$ is sufficient for $p$.

Question #16: Let $X_1, \ldots, X_n$ be independent and each $X_i \sim NB(r_i, p)$. This means that each has mass function $f(x_i; r_i, p) = \binom{x_i - 1}{r_i - 1} p^{r_i}(1-p)^{x_i - r_i}$ for $x_i = r_i, r_i + 1, r_i + 2, \ldots$ Find the Maximum Likelihood Estimator (MLE) of $p$ by maximizing the likelihood function of the sufficient statistic.

In the previous question, we found that $L(p) = f(x_1, \ldots, x_n; r_i, p) = \left[\prod_{i=1}^n \binom{x_i - 1}{r_i - 1}\right] p^{\sum r_i}(1-p)^{\sum x_i - \sum r_i}$. Taking the natural logarithm gives $\ln[L(p)] = \sum_{i=1}^n \ln\binom{x_i - 1}{r_i - 1} + \sum_{i=1}^n r_i \ln(p) + \left(\sum_{i=1}^n x_i - \sum_{i=1}^n r_i\right)\ln(1-p)$. Then differentiating the log likelihood function and equating to zero implies that $\frac{d}{dp}\ln[L(p)] = \frac{\sum r_i}{p} - \frac{\sum x_i - \sum r_i}{1-p} = 0 \Rightarrow (1-p)\sum r_i - p\left(\sum x_i - \sum r_i\right) = 0$. Then we have $\sum r_i - p\sum r_i - p\sum x_i + p\sum r_i = 0 \Rightarrow \sum r_i - p\sum x_i = 0$. This implies that the Maximum Likelihood Estimator of $p$ is $\hat p = \frac{\sum_{i=1}^n r_i}{\sum_{i=1}^n X_i}$.
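A hedged numerical check (not part of the original solution): the closed-form estimator $\hat p = \sum r_i / \sum X_i$ should agree with a direct numerical maximization of the log-likelihood. The $r_i$, the true $p$, and the seed are arbitrary; numpy's negative binomial counts failures, so $X_i = r_i + \text{failures}$ matches the support above.

```python
# Compare the closed-form MLE with a numerical maximizer of the log-likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
p_true = 0.3
r = np.array([2, 5, 3, 4, 6])
x = r + rng.negative_binomial(r, p_true)   # support x_i = r_i, r_i + 1, ...

def neg_loglik(p):
    # constant binomial-coefficient term omitted; it does not involve p
    return -(r.sum() * np.log(p) + (x.sum() - r.sum()) * np.log(1 - p))

opt = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(opt.x, r.sum() / x.sum())   # the two agree up to optimizer tolerance
```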

Question #12: Let $X_1, \ldots, X_n$ be independent and identically distributed from a two-parameter exponential distribution $EXP(\theta, \eta)$ such that the probability density function is $f(x; \theta, \eta) = \frac{1}{\theta} e^{-(x-\eta)/\theta} 1\{x > \eta\}$. Find jointly sufficient statistics for the parameters $\theta$ and $\eta$.

Since the random variables are iid, their joint probability density function is thus given by $f(x_1, \ldots, x_n; \theta, \eta) = \prod_{i=1}^n f(x_i; \theta, \eta) = \prod_{i=1}^n \frac{1}{\theta} e^{-(x_i - \eta)/\theta} 1\{x_i > \eta\} = \theta^{-n} e^{-\left(\sum_{i=1}^n x_i - n\eta\right)/\theta} \left[\prod_{i=1}^n 1\{x_i > \eta\}\right]$, and we note that $\prod_{i=1}^n 1\{x_i > \eta\} = 1\{x_{1:n} > \eta\}$. This then shows that $S_1 = \sum_{i=1}^n X_i$ and $S_2 = X_{1:n}$ are jointly sufficient for $\theta$ and $\eta$ by the Factorization Criterion with $h(x_1, \ldots, x_n) = 1$ being independent of the unknown parameters $\theta$ and $\eta$ and $g(s_1, s_2; \theta, \eta) = \theta^{-n} e^{-(s_1 - n\eta)/\theta} 1\{s_2 > \eta\}$ depending on $x_1, \ldots, x_n$ only through $s_1$ and $s_2$.

Question #13: Let $X_1, \ldots, X_n$ be independent and identically distributed from a beta distribution $BETA(\theta_1, \theta_2)$ such that the probability density function of each of these random variables is given by $f(x; \theta_1, \theta_2) = \frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)} x^{\theta_1 - 1}(1-x)^{\theta_2 - 1}$ whenever $0 < x < 1$. Find jointly sufficient statistics for the unknown parameters $\theta_1$ and $\theta_2$.

Since the random variables are iid, their joint density is given by $f(x_1, \ldots, x_n; \theta_1, \theta_2) = \prod_{i=1}^n f(x_i; \theta_1, \theta_2) = \prod_{i=1}^n \frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)} x_i^{\theta_1 - 1}(1-x_i)^{\theta_2 - 1} 1\{0 < x_i < 1\} = \left[\frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)}\right]^n \left[\prod_{i=1}^n x_i\right]^{\theta_1 - 1}\left[\prod_{i=1}^n (1 - x_i)\right]^{\theta_2 - 1} \prod_{i=1}^n 1\{0 < x_i < 1\}$. This then shows that $S_1 = \prod_{i=1}^n X_i$ and $S_2 = \prod_{i=1}^n (1 - X_i)$ are jointly sufficient for $\theta_1$ and $\theta_2$ by the Factorization Criterion with $h(x_1, \ldots, x_n) = \prod_{i=1}^n 1\{0 < x_i < 1\}$ being independent of the unknown parameters and $g(s_1, s_2; \theta_1, \theta_2) = \left[\frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)}\right]^n s_1^{\theta_1 - 1} s_2^{\theta_2 - 1}$ depending on the observations only through $s_1$ and $s_2$.

Question #18: Let $X \sim N(0, \theta)$ for $\theta > 0$. a) Show that $X^2$ is complete and sufficient for the unknown parameter $\theta$, and b) show that $N(0, \theta)$ is not a complete family.

a) Since $X \sim N(0, \theta)$, we know that $f(x; \theta) = \frac{1}{\sqrt{2\pi\theta}} e^{-x^2/(2\theta)}$ for $x \in \mathbb{R}$, which has the Regular Exponential Class form $c(\theta)h(x)\exp\{q_1(\theta)t_1(x)\}$ with $q_1(\theta) = -\frac{1}{2\theta}$ and $t_1(x) = x^2$. Therefore, by the Regular Exponential Class (REC) Theorem, $X^2$ is complete and sufficient for $\theta$.

b) Since $X \sim N(0, \theta)$, we know that $E(X) = 0$ for all $\theta > 0$. Therefore, completeness fails because $u(X) = X$ is a nontrivial unbiased estimator of zero: $E[u(X)] = 0$ for every $\theta > 0$ even though $u(X)$ is not identically zero, so the family $N(0, \theta)$ is not complete.

Question #21: If $X_1, \ldots, X_n$ is a random sample from a Bernoulli distribution such that each $X_i \sim BER(p) \equiv BIN(1, p)$ where $p$ is the unknown parameter to be estimated, find the UMVUE for a) $\tau(p) = Var(X) = p(1-p)$, and b) $\tau(p) = p^2$.

a) We first verify that the Bernoulli distribution is a member of the Regular Exponential Class (REC) by noting that its density can be written as $f(x; p) = p^x(1-p)^{1-x} = \left(\frac{p}{1-p}\right)^x (1-p) = (1-p)\exp\left\{x\ln\left(\frac{p}{1-p}\right)\right\} = c(p)\exp\{q_1(p)t_1(x)\}$ with $q_1(p) = \ln\left(\frac{p}{1-p}\right)$ and $t_1(x) = x$, so the Bernoulli distribution is a member of the REC by Definition 10.4.2. We then use Theorem 10.4.2, which guarantees the existence of sufficient statistics for distributions from the REC, to construct the sufficient statistic $S_1 = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n X_i$. Next, we appeal to the Rao-Blackwell Theorem in justifying the use of $S_1$ (or any one-to-one function of it) in our search for a UMVUE for $\tau(p) = p(1-p)$. Our initial guess for an estimator is $T = \bar X(1 - \bar X)$, so we first compute $E(T) = E(\bar X) - E(\bar X^2) = p - \left[Var(\bar X) + E(\bar X)^2\right] = p - \frac{p(1-p)}{n} - p^2 = p(1-p) - \frac{p(1-p)}{n} = p(1-p)\left(\frac{n-1}{n}\right)$, which implies that $\tilde T = \frac{n}{n-1}\left[\bar X(1 - \bar X)\right]$ will have expected value equal to $\tau(p) = p(1-p)$. The Lehmann-Scheffe Theorem finally guarantees that $\tilde T$ is a UMVUE for $\tau(p) = p(1-p)$ since it states that any unbiased estimator which is a function of complete sufficient statistics is a UMVUE.

b) We note that for the complete sufficient statistic $S_1 = \sum_{i=1}^n X_i$, we have $E(S_1) = np$ and $Var(S_1) = np(1-p)$ since $S_1 \sim BIN(n, p)$, which is true because it is the sum of $n$ independent Bernoulli random variables. This implies $E(S_1^2) = Var(S_1) + E(S_1)^2 = np(1-p) + (np)^2 = np(1-p) + n^2 p^2$. By the Lehmann-Scheffe Theorem, we know that we must use some function of the complete sufficient statistic $S_1$ to construct a UMVUE for the unknown parameter $p^2$. We note that for $T = S_1 - S_1^2$, we have $E(T) = E(S_1) - E(S_1^2) = np - np(1-p) - n^2 p^2 = np - np + np^2 - n^2 p^2 = p^2(n - n^2)$. This implies that the statistic $\tilde T = \frac{T}{n - n^2} = \frac{S_1 - S_1^2}{n - n^2} = \frac{\left(\sum X_i\right)^2 - \sum X_i}{n^2 - n}$ will have expected value equal to $p^2$, so it is a UMVUE by the Lehmann-Scheffe Theorem.
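As a sanity check (an addition to the text), simulation confirms that both estimators above are unbiased; the choices of $p$, $n$, and seed are arbitrary.

```python
# Check unbiasedness of n/(n-1)*Xbar*(1-Xbar) for p(1-p) and of
# (S^2 - S)/(n^2 - n) for p^2, where S = sum(X_i).
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.3, 10, 500_000

x = rng.binomial(1, p, size=(reps, n))
s = x.sum(axis=1)
xbar = s / n

t1 = n / (n - 1) * xbar * (1 - xbar)
t2 = (s**2 - s) / (n**2 - n)

print(t1.mean(), p * (1 - p))   # ~0.21 vs 0.21
print(t2.mean(), p**2)          # ~0.09 vs 0.09
```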

Question #23: If $X_1, \ldots, X_n$ is a random sample from a Normal distribution such that each $X_i \sim N(\mu, 9)$ where $\mu$ is unknown, find the UMVUE for a) the 95th percentile, and b) $P(X \le c)$, where $c$ is a known constant. Hint: find the conditional distribution of $X_1$ given $\bar X = \bar x$, and apply the Rao-Blackwell Theorem with $T = u(X_1)$, where we define $u(x) = 1\{x \le c\}$.

a) The 95th percentile of a random variable from a $N(\mu, 9)$ distribution is the value of $x$ such that $P(X \le x) = 0.95 \Rightarrow P\left(\frac{X-\mu}{3} \le \frac{x-\mu}{3}\right) = 0.95 \Rightarrow P\left(Z \le \frac{x-\mu}{3}\right) = 0.95$ where $Z \sim N(0,1)$. From tabulations of the standard normal distribution function $\Phi(z)$, we know that $P(Z \le 1.645) = 0.95$, so we equate $\frac{x-\mu}{3} = 1.645 \Rightarrow x = 4.935 + \mu = \tau(\mu)$. This is what we wish to find a UMVUE for, but since the expectation of a constant is that constant itself, we simply need to find a UMVUE for $\mu$. We begin by verifying that the Normal distribution is a member of the Regular Exponential Class (REC) by noting that the density of $X \sim N(\mu, 9)$ can be written as $f(x; \mu) = \frac{1}{\sqrt{18\pi}}\exp\left\{-\frac{1}{18}(x-\mu)^2\right\} = \frac{1}{\sqrt{18\pi}}\exp\left\{-\frac{x^2}{18} + \frac{x\mu}{9} - \frac{\mu^2}{18}\right\}$, where we have that $t_1(x) = x^2$ and $t_2(x) = x$. Thus, the Normal distribution is a member of the REC by Definition 10.4.2. We then use Theorem 10.4.2, which guarantees the existence of sufficient statistics for distributions from the REC, to construct the sufficient statistics $S_1 = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n X_i^2$ and $S_2 = \sum_{i=1}^n t_2(X_i) = \sum_{i=1}^n X_i$. Since the sample mean is an unbiased estimator for the population mean, we have that $E(\bar X) = E(S_2/n) = \mu$. Thus, an unbiased estimator for $\tau(\mu) = 4.935 + \mu$ is given by $T = 4.935 + \bar X$, which is also a UMVUE of $\tau(\mu)$ by the Lehmann-Scheffe Theorem.
b) Note that we are trying to estimate $P(X \le c) = P\left(\frac{X-\mu}{3} \le \frac{c-\mu}{3}\right) = \Phi\left(\frac{c-\mu}{3}\right) = \tau(\mu)$, where $\Phi: \mathbb{R} \to (0,1)$ is the cumulative distribution function of $Z \sim N(0,1)$. Since $\tau(\mu)$ is a nonlinear function of $\mu$, we cannot simply insert $\bar X$ to obtain a UMVUE. To find an unbiased estimator, we note that $u(X_1) = 1\{X_1 \le c\}$ is unbiased for $\tau(\mu)$ since we have $E[u(X_1)] = E[1\{X_1 \le c\}] = P(X_1 \le c) = \tau(\mu)$. But since it is not a function of the complete sufficient statistic $S_2 = \sum_{i=1}^n X_i$, this estimator cannot be a UMVUE. However, the Rao-Blackwell Theorem states that $E[u(X_1)|S_2] = E[1\{X_1 \le c\}|S_2]$ will also be unbiased and will be a function of $S_2 = \sum_{i=1}^n X_i$. The Lehmann-Scheffe Theorem then guarantees that $E[1\{X_1 \le c\}|S_2]$ will be a UMVUE. In order to find this, we must compute the conditional distribution of $X_1$ given $S_2$. We know that the random variable $S_2 = \sum_{i=1}^n X_i \sim N(n\mu, 9n)$ and that the event $\{X_1 = x, S_2 = s\}$ is equivalent to $\{X_1 = x, \sum_{i=2}^n X_i = s - x\}$. This implies that $f_{X_1|S_2}(x|s) = \frac{f_{X_1}(x)\,f_{\sum_{i=2}^n X_i}(s-x)}{f_{S_2}(s)} \propto \exp\left\{-\frac{(x-\mu^*)^2}{2(\sigma^*)^2}\right\}$, where $\mu^* = \frac{s}{n} = \bar x$ and $(\sigma^*)^2 = \frac{9(n-1)}{n}$. Therefore, if we let $X^* \sim N\left(\bar x, \frac{9(n-1)}{n}\right)$ we have that $E[1\{X_1 \le c\}|S_2] = P(X^* \le c) = \Phi\left(\frac{c - \bar X}{3\sqrt{(n-1)/n}}\right)$, which is a UMVUE for $P(X \le c) = \Phi\left(\frac{c-\mu}{3}\right)$.
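A minimal simulation (not part of the original solution) confirms that the Rao-Blackwellized estimator is unbiased for $\Phi\left(\frac{c-\mu}{3}\right)$ and has much smaller variance than the naive indicator $1\{X_1 \le c\}$, exactly as the Rao-Blackwell Theorem predicts; all numeric choices below are arbitrary.

```python
# Compare the naive unbiased indicator with its Rao-Blackwellized version.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
mu, n, c, reps = 1.0, 8, 2.5, 400_000

x = rng.normal(mu, 3.0, size=(reps, n))   # N(mu, 9), so standard deviation 3
xbar = x.mean(axis=1)

naive = (x[:, 0] <= c).astype(float)                    # unbiased but crude
rb = norm.cdf((c - xbar) / (3 * np.sqrt((n - 1) / n)))  # Rao-Blackwellized

print(norm.cdf((c - mu) / 3))      # target, = 0.6915
print(naive.mean(), rb.mean())     # both ~0.6915
print(naive.var(), rb.var())       # rb has much smaller variance
```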

Question #25: If $X_1, \ldots, X_n$ is a random sample from the probability density function $f(x; \theta) = \theta x^{\theta-1} 1\{0 < x < 1\}$ where $\theta > 0$ is the unknown parameter, find the UMVUE for a) $\tau(\theta) = \frac{1}{\theta}$ by using the fact that $E[-\ln(X)] = \frac{1}{\theta}$, and b) the unknown parameter $\theta$.

a) We first verify that the density is a member of the REC by noting that it can be written as $f(x; \theta) = \theta x^{\theta-1} = \exp\{\ln[\theta x^{\theta-1}]\} = \exp\{\ln(\theta) + (\theta-1)\ln(x)\} = \theta\exp\{(\theta-1)\ln(x)\}$, where $t_1(x) = \ln(x)$. We then use Theorem 10.4.2, which guarantees the existence of sufficient statistics for REC distributions, to construct the sufficient statistic $S_1 = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n \ln(X_i)$. Next, we appeal to the Rao-Blackwell Theorem in justifying the use of $S_1$ (or any one-to-one function of it) in our search for a UMVUE for $\frac{1}{\theta}$. From the hint provided, we initially guess that $T = -\frac{S_1}{n} = -\frac{\sum_{i=1}^n \ln(X_i)}{n}$ and check that $E(T) = E\left[-\frac{\sum_{i=1}^n \ln(X_i)}{n}\right] = \frac{1}{n}\sum_{i=1}^n E[-\ln(X_i)] = \frac{1}{n}\cdot\frac{n}{\theta} = \frac{1}{\theta}$. The Lehmann-Scheffe Theorem finally guarantees that $T = -\frac{1}{n}\sum_{i=1}^n \ln(X_i)$ is a UMVUE for $\frac{1}{\theta}$ since it states that any unbiased estimator which is a function of complete sufficient statistics is a UMVUE.

b) Any UMVUE of the unknown parameter $\theta$ must be a function of the complete and sufficient statistic $S_1 = \sum_{i=1}^n \ln(X_i)$ by the Lehmann-Scheffe Theorem. We begin by noting that $E(S_1) = E\left[\sum_{i=1}^n \ln(X_i)\right] = \sum_{i=1}^n E[\ln(X_i)] = -\sum_{i=1}^n E[-\ln(X_i)] = -\frac{n}{\theta}$, so we would like a suitable multiple of $\frac{1}{-S_1}$ to serve as an estimator. However, this involves finding $E\left[\frac{1}{-S_1}\right]$ since we only know that $E[-\ln(X)] = \frac{1}{\theta}$. We do this by finding the distribution of $W = -\ln(X)$ using the CDF technique, which shows that $W \sim EXP\left(\frac{1}{\theta}\right)$ with density $f(w; \theta) = \theta e^{-\theta w} 1\{w > 0\}$. This is equivalent to $W \sim GAM\left(\frac{1}{\theta}, 1\right)$, so by the Moment Generating Function technique, we see that $S_1^* = -\sum_{i=1}^n \ln(X_i) \sim GAM\left(\frac{1}{\theta}, n\right)$. We can thus calculate $E\left[\frac{1}{S_1^*}\right] = \int_0^\infty \frac{1}{s}\cdot\frac{\theta^n}{\Gamma(n)} s^{n-1} e^{-\theta s}\, ds = \frac{\theta^n}{\Gamma(n)}\cdot\frac{\Gamma(n-1)}{\theta^{n-1}} = \frac{\theta}{n-1}$, which implies that $T = \frac{n-1}{S_1^*} = \frac{-(n-1)}{\sum_{i=1}^n \ln(X_i)}$ is an unbiased estimator of $\theta$. Then the Lehmann-Scheffe Theorem guarantees that it is also a UMVUE for the unknown parameter $\theta$.
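A hedged check (added here, not in the original): since $F(x) = x^\theta$ on $(0,1)$, we can sample by inverse transform and verify both UMVUEs; the constants are arbitrary.

```python
# Verify E[-sum(ln X)/n] = 1/theta and E[(n-1)/(-sum(ln X))] = theta.
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 2.5, 12, 400_000

x = rng.uniform(size=(reps, n)) ** (1 / theta)   # inverse transform of x^theta
s_star = -np.log(x).sum(axis=1)                  # ~ GAM(1/theta, n)

print((s_star / n).mean(), 1 / theta)        # UMVUE for 1/theta, ~0.4
print(((n - 1) / s_star).mean(), theta)      # UMVUE for theta, ~2.5
```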

Question #31: If $X_1, \ldots, X_n$ is a random sample from the probability density function $f(x; \theta) = \theta(1+x)^{-(1+\theta)} 1\{x > 0\}$ for unknown $\theta > 0$, find a) the MLE of $\theta$, b) a complete and sufficient statistic for $\theta$, c) the CRLB for $\tau(\theta) = \frac{1}{\theta}$, d) the UMVUE for $\tau(\theta) = \frac{1}{\theta}$, e) the mean and variance of the asymptotic normal distribution of the MLE, and f) the UMVUE for $\theta$.

a) We have $L(\theta) = \prod_{i=1}^n f(x_i; \theta) = \prod_{i=1}^n \theta(1+x_i)^{-(1+\theta)} = \theta^n\left[\prod_{i=1}^n (1+x_i)\right]^{-(1+\theta)}$ so that $\ln[L(\theta)] = n\ln(\theta) - (1+\theta)\sum_{i=1}^n \ln(1+x_i)$. Then we have that $\frac{d}{d\theta}\ln[L(\theta)] = \frac{n}{\theta} - \sum_{i=1}^n \ln(1+x_i) = 0$ so that $\hat\theta = \frac{n}{\sum_{i=1}^n \ln(1+X_i)}$.
b) To check that it is a member of the REC, we verify that we can write the probability density function of $X$ as $f(x; \theta) = \theta(1+x)^{-(1+\theta)} = \exp\{\ln[\theta(1+x)^{-(1+\theta)}]\} = \exp\{\ln(\theta) - (1+\theta)\ln(1+x)\}$, where the factor involving both $x$ and $\theta$ has the form $q_2(\theta)t_2(x)$ with $q_2(\theta) = -(1+\theta)$ and $t_2(x) = \ln(1+x)$. Thus, $f(x; \theta)$ is a member of the REC and $S_2 = \sum_{i=1}^n t_2(X_i) = \sum_{i=1}^n \ln(1+X_i)$ is a complete and sufficient statistic for the unknown parameter $\theta$ to be estimated.

c) Since $\tau(\theta) = \frac{1}{\theta}$, we have $[\tau'(\theta)]^2 = \left[-\frac{1}{\theta^2}\right]^2 = \frac{1}{\theta^4}$. Then we have that $f(x; \theta) = \theta(1+x)^{-(1+\theta)}$ so its log is $\ln f(x; \theta) = \ln(\theta) - (1+\theta)\ln(1+x)$ and $\frac{\partial}{\partial\theta}\ln f(x; \theta) = \frac{1}{\theta} - \ln(1+x)$. Finally, $\frac{\partial^2}{\partial\theta^2}\ln f(x; \theta) = -\frac{1}{\theta^2}$ so that $-E\left[\frac{\partial^2}{\partial\theta^2}\ln f(X; \theta)\right] = \frac{1}{\theta^2}$. These results combined allow us to conclude that $CRLB = \frac{1/\theta^4}{n(1/\theta^2)} = \frac{1}{n\theta^2}$.

d) We previously verified that this density is a member of the REC and that the statistic $S_2 = \sum_{i=1}^n \ln(1+X_i)$ is complete and sufficient for $\theta$. Next, we use the Rao-Blackwell Theorem in justifying the use of $S_2$ (or any one-to-one function of it) in our search for a UMVUE for $\frac{1}{\theta}$. In order to compute $E(S_2)$, we need to find the distribution of the random variable $W = \ln(1+X)$, which we do using the CDF technique. We thus have that $F_W(w) = P(W \le w) = P(\ln(1+X) \le w) = P(X \le e^w - 1) = F_X(e^w - 1)$, so that then $f_W(w) = \frac{d}{dw}F_X(e^w - 1) = f_X(e^w - 1)e^w = \theta(1 + e^w - 1)^{-(1+\theta)}e^w = \theta e^{-(1+\theta)w}e^w = \theta e^{-\theta w}$ whenever $w > 0$. It is immediately clear that $W \sim EXP\left(\frac{1}{\theta}\right)$, so that $E(W) = E[\ln(1+X)] = \frac{1}{\theta}$. This allows us to find $E(S_2) = E\left[\sum_{i=1}^n \ln(1+X_i)\right] = \sum_{i=1}^n E[\ln(1+X_i)] = \frac{n}{\theta}$. Since we want an unbiased estimator for $\frac{1}{\theta}$, it is clear that $T = \frac{S_2}{n} = \frac{1}{n}\sum_{i=1}^n \ln(1+X_i)$ will suffice by the Lehmann-Scheffe Theorem.


e) We previously found that the MLE for $\theta$ is $\hat\theta = \frac{n}{\sum_{i=1}^n \ln(1+X_i)}$. From Chapter 9, we know that the MLE for some unknown parameter has an asymptotic normal distribution with $\mu = \theta$ and $\sigma^2 = CRLB$; that is, $\hat\theta \sim N(\theta, CRLB)$ for large $n$. We must therefore find the Cramer-Rao Lower Bound, which can be easily done from the work in part c) above with $\tau(\theta) = \theta$, so that $CRLB = \frac{1}{n(1/\theta^2)} = \frac{\theta^2}{n}$. This means that we have $\hat\theta \sim N\left(\theta, \frac{\theta^2}{n}\right)$ for large $n$. We can similarly argue for the MLE of $\tau(\theta) = \frac{1}{\theta}$, where we see that $\hat\tau = \frac{1}{\hat\theta} = \frac{1}{n}\sum_{i=1}^n \ln(1+X_i)$ by the Invariance Property of the Maximum Likelihood Estimator. Then using the work done in part c) above for the Cramer-Rao Lower Bound, we can conclude that $\hat\tau \sim N\left(\frac{1}{\theta}, \frac{1}{n\theta^2}\right)$ for large $n$.

f) We previously verified that this density is a member of the REC and that the statistic $S_2 = \sum_{i=1}^n \ln(1+X_i)$ is complete and sufficient for $\theta$, where $E(S_2) = \frac{n}{\theta}$. Since each $\ln(1+X_i) \sim EXP\left(\frac{1}{\theta}\right)$, we have $S_2 \sim GAM\left(\frac{1}{\theta}, n\right)$, so as in the previous question we have that $E\left[\frac{n-1}{S_2}\right] = (n-1)E\left[\frac{1}{S_2}\right] = (n-1)\cdot\frac{\theta}{n-1} = \theta$, which implies that $T = \frac{n-1}{S_2} = \frac{n-1}{\sum_{i=1}^n \ln(1+X_i)}$ is unbiased and a UMVUE for the unknown parameter $\theta$.
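As an added sanity check, we can sample by inverse transform from $F(x) = 1 - (1+x)^{-\theta}$ and verify parts d), e), and f) numerically; the constants below are arbitrary.

```python
# Check the UMVUEs S2/n and (n-1)/S2, and the asymptotic variance of the MLE.
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 1.5, 200, 200_000

x = rng.uniform(size=(reps, n)) ** (-1 / theta) - 1   # inverse transform
s2 = np.log1p(x).sum(axis=1)                          # S2 = sum(ln(1 + X_i))

print((s2 / n).mean(), 1 / theta)      # UMVUE for 1/theta, ~0.667
print(((n - 1) / s2).mean(), theta)    # UMVUE for theta, ~1.5
print((n / s2).var(), theta**2 / n)    # MLE variance ~ CRLB for large n
```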
Chapter #11 Interval Estimation

Question #5: If $X_1, \ldots, X_n$ is a random sample from $f(x; \theta) = e^{-(x-\theta)} 1\{x > \theta\}$ with $\theta$ unknown, then a) show that $Q = X_{1:n} - \theta$ is a pivotal quantity and find its distribution, and b) derive a $100\gamma\%$ equal-tailed confidence interval for the unknown parameter $\theta$.

a) We first find the distribution of the smallest order statistic $X_{1:n}$ using the formula $f_{1:n}(x; \theta) = n f(x; \theta)[1 - F(x; \theta)]^{n-1}$. We thus need the CDF of the population, which is given by $F(x; \theta) = \int_\theta^x f(t; \theta)\, dt = \int_\theta^x e^{-(t-\theta)}\, dt = \left[-e^{-(t-\theta)}\right]_\theta^x = -e^{-(x-\theta)} + 1 = 1 - e^{-(x-\theta)}$ whenever $x > \theta$. We therefore have that $f_{1:n}(x; \theta) = n e^{-(x-\theta)}\left[1 - \left(1 - e^{-(x-\theta)}\right)\right]^{n-1} = n e^{-(x-\theta)}\left[e^{-(x-\theta)}\right]^{n-1} = n e^{-n(x-\theta)}$ when $x > \theta$. Now that we have the density of $X_{1:n}$, we can use the CDF technique to find the density of $Q = X_{1:n} - \theta$. Thus, we have $F_Q(q) = P(Q \le q) = P(X_{1:n} - \theta \le q) = P(X_{1:n} \le q + \theta) = F_{1:n}(q + \theta)$ so $f_Q(q) = \frac{d}{dq}F_{1:n}(q + \theta) = f_{1:n}(q + \theta) = n e^{-nq}$ whenever $q + \theta > \theta$, that is, $q > 0$. This reveals that $Q = X_{1:n} - \theta \sim EXP\left(\frac{1}{n}\right)$, so it is clearly a pivotal quantity since it is a function of $\theta$ but its distribution does not depend on $\theta$.

b) We have $P\left(q_{(1-\gamma)/2} < Q < q_{(1+\gamma)/2}\right) = P\left(q_{(1-\gamma)/2} < X_{1:n} - \theta < q_{(1+\gamma)/2}\right) = \gamma$, so after solving for the unknown parameter $\theta$ we obtain the $100\gamma\%$ equal-tailed confidence interval $P\left(X_{1:n} - q_{(1+\gamma)/2} < \theta < X_{1:n} - q_{(1-\gamma)/2}\right) = \gamma$. This can also be expressed as the random interval $\left(X_{1:n} - q_{(1+\gamma)/2},\, X_{1:n} - q_{(1-\gamma)/2}\right)$. Finally, we know that the $EXP\left(\frac{1}{n}\right)$ distribution has CDF $F_Q(q) = 1 - e^{-nq}$ so that $F_Q(q_\alpha) = \alpha$ implies $1 - e^{-nq_\alpha} = \alpha$. We solve this last equality for $q_\alpha = -\frac{1}{n}\ln(1-\alpha)$. This means that the confidence interval becomes $\left(X_{1:n} + \frac{1}{n}\ln\left(\frac{1-\gamma}{2}\right),\, X_{1:n} + \frac{1}{n}\ln\left(\frac{1+\gamma}{2}\right)\right)$, where each term is found by substituting $\alpha = \frac{1+\gamma}{2}$ and $\alpha = \frac{1-\gamma}{2}$ into the expression $q_\alpha = -\frac{1}{n}\ln(1-\alpha)$.
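A quick coverage simulation (an addition, assuming the interval just derived) confirms the $100\gamma\%$ coverage; the parameter values and seed are arbitrary.

```python
# Empirical coverage of (X_min + ln((1-g)/2)/n, X_min + ln((1+g)/2)/n)
# for the shifted exponential f(x; theta) = exp(-(x - theta)), x > theta.
import numpy as np

rng = np.random.default_rng(7)
theta, n, g, reps = 5.0, 15, 0.90, 200_000

x_min = theta + rng.exponential(1.0, size=(reps, n)).min(axis=1)
lo = x_min + np.log((1 - g) / 2) / n
hi = x_min + np.log((1 + g) / 2) / n

print(np.mean((lo < theta) & (theta < hi)), g)   # empirical coverage ~0.90
```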
Question #7: If $X_1, \ldots, X_n$ is a random sample from $f(x; \theta) = \frac{2x}{\theta^2} e^{-x^2/\theta^2} 1\{x > 0\}$ with unknown parameter $\theta$, a) show that $Q = \frac{2\sum_{i=1}^n X_i^2}{\theta^2} \sim \chi^2(2n)$, b) use $Q = \frac{2\sum_{i=1}^n X_i^2}{\theta^2}$ to derive an equal-tailed $100\gamma\%$ confidence interval for $\theta$, c) find a lower $100\gamma\%$ confidence limit for $P(X > t) = e^{-t^2/\theta^2}$, d) find an upper $100\gamma\%$ confidence limit for the $100p$th percentile.

a) Since $f(x; \theta) = \frac{2x}{\theta^2} e^{-x^2/\theta^2} 1\{x > 0\}$, we know that $X \sim WEI(\theta, 2)$. The CDF technique then reveals that $X^2 \sim EXP(\theta^2)$, so that $\sum_{i=1}^n X_i^2 \sim GAM(\theta^2, n)$. A final application of the CDF technique shows that $Q = \frac{2}{\theta^2}\sum_{i=1}^n X_i^2 \sim \chi^2(2n)$, proving the desired result. This also shows that $Q = \frac{2\sum_{i=1}^n X_i^2}{\theta^2}$ is a pivotal quantity for the unknown parameter $\theta$.

b) We find that the confidence interval is $P\left(\chi^2_{(1-\gamma)/2}(2n) < Q < \chi^2_{(1+\gamma)/2}(2n)\right) = P\left(\chi^2_{(1-\gamma)/2}(2n) < \frac{2}{\theta^2}\sum_{i=1}^n X_i^2 < \chi^2_{(1+\gamma)/2}(2n)\right) = P\left(\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1+\gamma)/2}(2n)} < \theta^2 < \frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1-\gamma)/2}(2n)}\right) = \gamma$. Taking square roots gives the desired random interval $\left(\sqrt{\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1+\gamma)/2}(2n)}},\, \sqrt{\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1-\gamma)/2}(2n)}}\right)$.

c) From the work done above, a lower confidence limit for $\theta^2$ is $\frac{2\sum_{i=1}^n X_i^2}{\chi^2_\gamma(2n)}$. Since the quantity $\tau(\theta) = e^{-t^2/\theta^2}$ is a monotonically increasing function of $\theta$, we can simply substitute $\frac{2\sum_{i=1}^n X_i^2}{\chi^2_\gamma(2n)}$ for $\theta^2$ into the expression $\tau(\theta) = e^{-t^2/\theta^2}$ by Corollary 11.3.1, which gives the lower confidence limit $\exp\left\{-\frac{t^2\,\chi^2_\gamma(2n)}{2\sum_{i=1}^n X_i^2}\right\}$ for $P(X > t)$.

d) We must solve the equation $P(X > x_p) = 1 - p$ for $x_p$. From the question above, we are given that $P(X > t) = e^{-t^2/\theta^2}$, so we must solve $e^{-x_p^2/\theta^2} = 1 - p$ for $x_p$, which gives $x_p = \theta\sqrt{-\ln(1-p)}$. By the same reasoning as above, we substitute $\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{1-\gamma}(2n)}$ in for $\theta^2$ into the expression $x_p = g(\theta) = \theta\sqrt{-\ln(1-p)}$ to obtain $x_p = \sqrt{-\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{1-\gamma}(2n)}\ln(1-p)}$.
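A hedged coverage check (not part of the original solution) for the interval from part b), using scipy's $\chi^2$ quantile function; since $X^2 \sim EXP(\theta^2)$, we can sample $X$ as $\theta$ times the square root of a standard exponential. All constants are arbitrary.

```python
# Empirical coverage of the equal-tailed interval for theta from part b).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)
theta, n, g, reps = 2.0, 10, 0.95, 100_000

x = theta * np.sqrt(rng.exponential(1.0, size=(reps, n)))
q = 2 * (x**2).sum(axis=1)        # pivot numerator: 2 * sum(X_i^2)

lo = np.sqrt(q / chi2.ppf((1 + g) / 2, 2 * n))
hi = np.sqrt(q / chi2.ppf((1 - g) / 2, 2 * n))

print(np.mean((lo < theta) & (theta < hi)), g)   # coverage ~0.95
```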
Question #8: If $X_1, \ldots, X_n$ is a random sample from $X \sim UNIF(0, \theta)$ with $\theta > 0$ unknown and $X_{n:n}$ is the largest order statistic, then a) find the probability that the random interval given by $(X_{n:n}, 2X_{n:n})$ contains $\theta$, and b) find the value of the constant $c$ such that the random interval $(X_{n:n}, cX_{n:n})$ is a $100(1-\alpha)\%$ confidence interval for the parameter $\theta$.

a) We have that $\theta \in (X_{n:n}, 2X_{n:n})$ if and only if $\theta < 2X_{n:n}$, since the inequality $\theta > X_{n:n}$ will always be true by the definition of the density. We must therefore compute $P(2X_{n:n} > \theta) = P(X_{n:n} > \theta/2) = 1 - P(X_{n:n} \le \theta/2) = 1 - \left[P(X_1 \le \theta/2)\right]^n = 1 - \left(\frac{\theta/2}{\theta}\right)^n = 1 - 2^{-n}$.

b) As above, we have that $P[\theta \in (X_{n:n}, cX_{n:n})] = 1 - c^{-n}$, so if we set this equal to $1 - \alpha$ and solve for the value of the constant, we obtain $1 - c^{-n} = 1 - \alpha \Rightarrow c = \alpha^{-1/n}$.
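A one-line simulation (an addition) confirms part a); the values of $\theta$ and $n$ are arbitrary.

```python
# Check that (X_max, 2*X_max) contains theta with probability 1 - 2**(-n).
import numpy as np

rng = np.random.default_rng(9)
theta, n, reps = 3.0, 5, 500_000

x_max = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
print(np.mean(theta < 2 * x_max), 1 - 2.0**(-n))   # both ~0.96875
```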

Question #13: Let $X_1, \ldots, X_n$ be a random sample from $X \sim GAM(\theta, \kappa)$ such that their common distribution is $f(x; \theta, \kappa) = \frac{1}{\theta^\kappa \Gamma(\kappa)} x^{\kappa-1} e^{-x/\theta} 1\{x > 0\}$ with the parameter $\kappa$ known but $\theta$ unknown. Derive a $100(1-\alpha)\%$ equal-tailed confidence interval for $\theta$ based on the sufficient statistic for the unknown parameter $\theta$.

We begin by noting that the given density is a member of the Regular Exponential Class (REC) since $f(x; \theta, \kappa) = \frac{1}{\theta^\kappa \Gamma(\kappa)} x^{\kappa-1} e^{-x/\theta} = c(\theta)h(x)\exp\{q_1(\theta)t_1(x)\}$ where $q_1(\theta) = -\frac{1}{\theta}$ and $t_1(x) = x$. Then we know that $S = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n X_i$ is complete sufficient for the unknown parameter $\theta$. Next, we need to create a pivotal quantity from $S$; from the distribution in question 7, which is similar, we guess that $Q = \frac{2S}{\theta} = \frac{2}{\theta}\sum_{i=1}^n X_i$ might be appropriate. We now derive the distribution of $Q$ and, by showing that it is simultaneously a function of $\theta$ while its density does not depend on $\theta$, will verify that it is a pivotal quantity. Since the $X_i \sim GAM(\theta, \kappa)$, we know that the random variable $Y = \sum_{i=1}^n X_i \sim GAM(\theta, n\kappa)$. Then $F_Q(q) = P(Q \le q) = P\left(\frac{2}{\theta}\sum_{i=1}^n X_i \le q\right) = P\left(Y \le \frac{q\theta}{2}\right) = F_Y\left(\frac{q\theta}{2}\right)$ so that $f_Q(q) = \frac{d}{dq}F_Y\left(\frac{q\theta}{2}\right) = f_Y\left(\frac{q\theta}{2}\right)\frac{\theta}{2} = \frac{1}{\theta^{n\kappa}\Gamma(n\kappa)}\left(\frac{q\theta}{2}\right)^{n\kappa-1} e^{-q/2}\cdot\frac{\theta}{2} = \frac{1}{2^{n\kappa}\Gamma(n\kappa)} q^{n\kappa-1} e^{-q/2}$, which shows that the transformed random variable $Q = \frac{2}{\theta}\sum_{i=1}^n X_i \sim GAM(2, n\kappa) \equiv \chi^2(2n\kappa)$. This allows us to compute $P\left[\chi^2_{\alpha/2}(2n\kappa) < Q < \chi^2_{1-\alpha/2}(2n\kappa)\right] = 1 - \alpha$, so after substituting in for $Q$ and solving for $\theta$, we have $P\left[\frac{2\sum_{i=1}^n X_i}{\chi^2_{1-\alpha/2}(2n\kappa)} < \theta < \frac{2\sum_{i=1}^n X_i}{\chi^2_{\alpha/2}(2n\kappa)}\right] = 1 - \alpha$, which is the desired $100(1-\alpha)\%$ equal-tailed confidence interval for $\theta$ based on the sufficient statistic $S = \sum_{i=1}^n X_i$ for the unknown parameter $\theta$.
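Finally, an added coverage simulation for this interval; the values of $\theta$, $\kappa$, $n$, and $\alpha$ below are arbitrary choices.

```python
# Empirical coverage of (2S/chi2_{1-a/2}(2nk), 2S/chi2_{a/2}(2nk)) for GAM(theta, kappa).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(10)
theta, kappa, n, alpha, reps = 4.0, 3.0, 8, 0.05, 100_000

s = rng.gamma(kappa, theta, size=(reps, n)).sum(axis=1)   # shape kappa, scale theta
lo = 2 * s / chi2.ppf(1 - alpha / 2, 2 * n * kappa)
hi = 2 * s / chi2.ppf(alpha / 2, 2 * n * kappa)

print(np.mean((lo < theta) & (theta < hi)), 1 - alpha)    # coverage ~0.95
```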
