Question #2: Let $X \sim \text{UNIF}(0,1)$. Use the CDF technique to find the PDF of the following random variables: a) $Y = X^{1/4}$, b) $Y = e^{-X}$, c) $Y = 1 - e^{-X}$, and d) $Y = X(1-X)$.

a) Since $X \sim \text{UNIF}(0,1)$, we know that the density function is $f_X(x) = 1$ for $x \in (0,1)$ and zero otherwise, while the distribution function is $F_X(x) = 0$ for $x \in (-\infty, 0]$, $F_X(x) = x$ for $x \in (0,1)$, and $F_X(x) = 1$ for $x \in [1,\infty)$. We can then use the CDF technique to find $F_Y(y) = P(Y \le y) = P(X^{1/4} \le y) = P(X \le y^4) = F_X(y^4)$, so that $f_Y(y) = \frac{d}{dy}F_X(y^4) = f_X(y^4)\,4y^3 = (1)4y^3$. Since $0 < x < 1$, the bounds are $0 < y^4 < 1$, or $0 < y < 1$. Therefore, $f_Y(y) = 4y^3$ for $y \in (0,1)$ and zero otherwise.
b) Here $F_Y(y) = P(e^{-X} \le y) = P(X \ge -\ln y) = 1 - F_X(-\ln y) = 1 + \ln y$, so that $f_Y(y) = \frac1y$. Since $0 < x < 1$, the bounds are $0 < -\ln(y) < 1$, or $e^{-1} < y < 1$. The probability density function of the random variable is therefore $f_Y(y) = \frac1y$ for $y \in (e^{-1}, 1)$ and zero otherwise.
d) We will use Theorem 6.3.2 for this question. Suppose that $X$ is a continuous random variable with density $f_X(x)$ and assume that $Y = u(X)$ is a one-to-one transformation with inverse $x = w(y)$. If $w'(y)$ is continuous and nonzero, the density of $Y$ is given by $f_Y(y) = f_X(w(y))\,|w'(y)|$. Here, the transformation is $y = x(1-x) = -x^2 + x$ and, based on the $y$-values of the graph over the $x$-values of $(0,1)$, the range over which the density of $Y$ is defined is $(0, \frac14)$. Since this transformation is not one-to-one, we must partition the interval $(0,1)$ into two parts: $(0, \frac12)$ and $(\frac12, 1)$. On these pieces the inverses are $x = \frac12 \pm \sqrt{\frac14 - y}$, each with $|w'(y)| = \frac{1}{2\sqrt{1/4 - y}}$, so $f_Y(y) = f_X\!\left(\tfrac12 + \sqrt{\tfrac14 - y}\right)|w'(y)| + f_X\!\left(\tfrac12 - \sqrt{\tfrac14 - y}\right)|w'(y)| = (1 + 1)\,|w'(y)| = \frac{1}{\sqrt{1/4 - y}}$, so we can conclude that the density function is $f_Y(y) = \left(\frac14 - y\right)^{-1/2}$ for $y \in (0, \frac14)$ and zero otherwise.
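The CDF-technique result in part a) can be sanity-checked with a quick Monte Carlo simulation; the sampler, sample size, and tolerances below are our own additions, not part of the textbook solution.

```python
import random

random.seed(0)
n = 200_000
# Draw X ~ UNIF(0,1) and transform to Y = X**(1/4).
ys = [random.random() ** 0.25 for _ in range(n)]

# The CDF technique gives F_Y(y) = y**4 on (0,1); compare with the
# empirical CDF at a few points.
for y in (0.5, 0.8, 0.95):
    empirical = sum(v <= y for v in ys) / n
    assert abs(empirical - y ** 4) < 0.01
```

The same pattern (transform uniform draws, compare the empirical CDF with the derived $F_Y$) works for parts b) through d) as well.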
Question #3: The measured radius of a circle has PDF $f(r) = 6r(1-r)$ if $r \in (0,1)$ and $f(r) = 0$ otherwise. Find the distribution of a) the circumference and b) area of the circle. For the area $A = \pi R^2$: since $0 < r < 1$, then $0 < \sqrt{a/\pi} < 1$ so that $0 < a < \pi$, and $f_A(a) = \frac{3(\sqrt{\pi} - \sqrt{a})}{\pi^{3/2}}$ for $a \in (0, \pi)$ and zero otherwise.
Question #10: Suppose $X$ has density $f_X(x) = \frac12 e^{-|x|}$ for all $x \in \mathbb{R}$. a) Find the density of the random variable $Y = |X|$. b) If $W = 0$ when $X \le 0$ and $W = 1$ when $X > 0$, find the CDF of $W$.

a) Since $Y = |X|$ folds the two halves of the density together, $f_Y(y) = f_X(y) + f_X(-y) = \frac12 e^{-y} + \frac12 e^{-y} = e^{-y}$; where we have the bounds $-\infty < x < \infty$, the bounds become $0 < y < \infty$. This allows us to write the probability density function of $Y$ as $f_Y(y) = e^{-y}$ for $y \in (0, \infty)$ and zero otherwise.

b) We see that $P(W = 0) = \frac12$ and $P(W = 1) = \frac12$ since $f_X(x)$ is symmetric about zero. This allows us to write the cumulative distribution function as $F_W(w) = 0$ for $w \in (-\infty, 0)$, $F_W(w) = \frac12$ for $w \in [0, 1)$, and $F_W(w) = 1$ for $w \in [1, \infty)$.
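Both claims can be checked numerically; the Laplace sampler below (exponential magnitude with a random sign) and the tolerances are our own sketch, not part of the original solution.

```python
import random, math

random.seed(1)
n = 200_000
# X ~ Laplace(0,1): exponential magnitude with a random sign.
xs = [random.expovariate(1.0) * random.choice((-1, 1)) for _ in range(n)]
ys = [abs(x) for x in xs]

# Part a) claims Y = |X| ~ EXP(1), i.e. F_Y(y) = 1 - exp(-y).
for y in (0.5, 1.0, 2.0):
    empirical = sum(v <= y for v in ys) / n
    assert abs(empirical - (1 - math.exp(-y))) < 0.01

# Part b) claims P(W = 0) = P(W = 1) = 1/2 by symmetry.
assert abs(sum(x > 0 for x in xs) / n - 0.5) < 0.01
```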
Question #13: Suppose $X$ has density $f_X(x) = \frac{x^2}{24}$ for $x \in (-2, 4)$ and $f_X(x) = 0$ otherwise. Find the density of $Y = X^2$. We will use Theorem 6.3.2 for this question: Suppose that $X$ is a continuous random variable with density $f_X(x)$ and assume that $Y = u(X)$ is a one-to-one transformation with inverse $x = w(y)$. If $w'(y)$ is continuous and nonzero, the density of $Y$ is given by $f_Y(y) = f_X(w(y))\,|w'(y)|$. Here, the transformation is $y = x^2$ and, based on the $y$-values of the graph over the $x$-values of $(-2, 4)$, the domain over which the density of $Y$ is defined is $(0, 16)$. Solving the transformation then gives $x = w(y) = \pm\sqrt{y}$ so that $|w'(y)| = \frac{1}{2\sqrt{y}}$. We must consider two cases in the interval $(0,16)$: over $(0,4)$ the transformation is not one-to-one (both $\sqrt y$ and $-\sqrt y$ lie in $(-2,4)$), and over $(4,16)$ it is one-to-one. The density is thus $f_Y(y) = f_X(\sqrt y)|w'(y)| + f_X(-\sqrt y)|w'(y)| = \frac{\sqrt y}{24}$ for $y \in (0,4)$, $f_Y(y) = f_X(\sqrt y)|w'(y)| = \frac{\sqrt y}{48}$ for $y \in [4, 16)$, and zero otherwise.
Question #16: Let $X_1$ and $X_2$ be independent random variables each having density function $f(x) = \frac{1}{x^2}$ for $x \in [1, \infty)$ and $f(x) = 0$ otherwise. a) Find the joint PDF of $U = X_1 X_2$ and $V = X_1$, and b) find the marginal PDF of $U$.

a) Since $X_1$ and $X_2$ are independent, their joint density is simply the product of their marginal densities, so $f_{X_1 X_2}(x_1, x_2) = \left(\frac{1}{x_1^2}\right)\left(\frac{1}{x_2^2}\right) = \frac{1}{(x_1 x_2)^2}$ whenever we have that $(x_1, x_2) \in [1, \infty) \times [1, \infty)$ and zero otherwise. We will use Theorem 6.3.6, which says that if $U = u(X_1, X_2)$ and $V = v(X_1, X_2)$ and we can solve uniquely for $x_1$ and $x_2$, then we have $f_{UV}(u,v) = f_{X_1 X_2}(x_1(u,v), x_2(u,v))\,|J|$, where $J = \det\begin{bmatrix} \partial x_1/\partial u & \partial x_1/\partial v \\ \partial x_2/\partial u & \partial x_2/\partial v \end{bmatrix}$. Here, we have $V = X_1$ so $x_1 = v$, and $U = X_1 X_2$ so $x_2 = u/x_1 = u/v$, so we can calculate the Jacobian as $J = \det\begin{bmatrix} 0 & 1 \\ 1/v & -u/v^2 \end{bmatrix} = -\frac1v$. We can therefore find the joint density as $f_{UV}(u,v) = f_{X_1 X_2}\!\left(v, \frac uv\right)|J| = \left(\frac{1}{v^2}\right)\left(\frac{v^2}{u^2}\right)\left(\frac1v\right) = \frac{1}{u^2 v}$. We can find the bounds since $x_1 \ge 1$ gives $v \ge 1$, while $x_2 = u/v \ge 1$ reduces to $v \le u < \infty$. Combining these gives the required bounds $1 < v < u < \infty$.

b) We have $f_U(u) = \int_1^u f_{UV}(u,v)\,dv = \int_1^u \frac{1}{u^2 v}\,dv = \frac{1}{u^2}[\ln(v)]_1^u = \frac{\ln(u)}{u^2}$ if $1 < u < \infty$ and zero otherwise.
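Integrating the marginal density gives $F_U(u) = 1 - \frac{1 + \ln u}{u}$ for $u > 1$, which a simulation can verify; the inverse-CDF sampler and tolerances below are our own sketch, not part of the original solution.

```python
import random, math

random.seed(2)
n = 200_000
# Inverse-CDF sampling: F(x) = 1 - 1/x on [1, inf) inverts to x = 1/(1 - u).
def draw():
    return 1.0 / (1.0 - random.random())

us = [draw() * draw() for _ in range(n)]  # U = X1 * X2

# Integrating f_U(u) = ln(u)/u**2 gives F_U(u) = 1 - (1 + ln u)/u for u > 1.
for u in (2.0, 5.0, 20.0):
    empirical = sum(v <= u for v in us) / n
    assert abs(empirical - (1 - (1 + math.log(u)) / u)) < 0.01
```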
Question #18: Let $X$ and $Y$ have joint density function $f_{XY}(x,y) = e^{-y}$ for $0 < x < y < \infty$ and $f_{XY}(x,y) = 0$ otherwise. a) Find the joint density function of $U = X + Y$ and $V = X$. b) Find the marginal density function of $U$. c) Find the marginal density function of $V$.

a) Solving $u = x + y$ and $v = x$ gives $x = v$ and $y = u - v$, so $|J| = 1$ and $f_{UV}(u,v) = e^{-(u-v)}$. We can find the bounds by substituting into the bounds $0 < x < y < \infty$ with our solved transformations, which gives $0 < v < u - v < \infty$, or $0 < 2v < u < \infty$, or $0 < v < \frac u2 < \infty$.

b) We have that $f_U(u) = \int_0^{u/2} f_{UV}(u,v)\,dv = \int_0^{u/2} e^{-(u-v)}\,dv = \left[e^{-u+v}\right]_0^{u/2} = e^{-u/2} - e^{-u} = e^{-u}\left(e^{u/2} - 1\right)$ if $0 < u < \infty$.

c) We have that $f_V(v) = \int_{2v}^{\infty} f_{UV}(u,v)\,du = \int_{2v}^{\infty} e^{-u+v}\,du = \left[-e^{-u+v}\right]_{2v}^{\infty} = \left(0 - (-e^{-2v+v})\right) = e^{-v}$ if $0 < v < \infty$. Note that we have omitted the steps where the infinite limit of integration is replaced by a parameter and a limit to infinity with that parameter is evaluated to show that it goes to zero.
Question #21: Let $X$ and $Y$ have joint density $f_{XY}(x,y) = 2(x+y)$ for $0 < x < y < 1$ and $f_{XY}(x,y) = 0$ otherwise. a) Find the joint probability density function of $U = XY$ and $V = X$. b) Find the marginal probability density function of the random variable $U$.

a) We have that $V = X$ so $x = v$, and $U = XY$ so $y = u/x = u/v$, so we can calculate the Jacobian as $J = \det\begin{bmatrix} 0 & 1 \\ 1/v & -u/v^2 \end{bmatrix} = -\frac1v$. The joint probability density function is thus $f_{UV}(u,v) = f_{XY}\!\left(v, \frac uv\right)|J| = 2\left(v + \frac uv\right)\left(\frac1v\right) = 2\left(1 + \frac{u}{v^2}\right)$. The region is then found by substituting into $0 < x < y < 1$, so we have $0 < v < \frac uv < 1$, or $0 < v^2 < u < v < 1$.

b) The marginal probability density function is given by $f_U(u) = \int_u^{\sqrt u} f_{UV}(u,v)\,dv = \int_u^{\sqrt u} 2\left(1 + \frac{u}{v^2}\right)dv = \left[2v - \frac{2u}{v}\right]_u^{\sqrt u} = (2\sqrt u - 2\sqrt u) - (2u - 2) = 2 - 2u$ if $0 < u < 1$ and zero otherwise.
Question #25: Let $X_1, X_2, X_3, X_4$ be independent random variables. Assume that $X_2, X_3, X_4$ are each distributed Poisson with parameter 5 and the random variable $Y = X_1 + X_2 + X_3 + X_4$ is distributed Poisson with parameter 25. a) What is the distribution of the random variable $X_1$? b) What is the distribution of the random variable $W = X_1 + X_2$?

a) We first note that while $X_1, X_2, X_3, X_4$ are independent, they are not identically distributed, since only $X_2, X_3, X_4 \sim \text{POI}(5)$, with the distribution of $X_1$ not being listed. Thus, we must use the unsimplified formula 6.4.4, which says that if $X_1, \dots, X_n$ are independent random variables with moment generating functions $M_{X_i}(t)$ and $Y = X_1 + \dots + X_n$, then the moment generating function of $Y$ is $M_Y(t) = [M_{X_1}(t)] \cdots [M_{X_n}(t)]$. We use the fact that if some $X \sim \text{POI}(\lambda)$, then $M_X(t) = e^{\lambda(e^t - 1)}$ to solve this problem. If we let $Y = X_1 + X_2 + X_3 + X_4$, we have that $M_Y(t) = [M_{X_1}(t)][M_{X_2}(t)][M_{X_3}(t)][M_{X_4}(t)] = e^{25(e^t-1)}$ since $Y \sim \text{POI}(25)$. Substituting gives $[M_{X_1}(t)]\left[e^{5(e^t-1)}\right]\left[e^{5(e^t-1)}\right]\left[e^{5(e^t-1)}\right] = e^{25(e^t-1)}$, which reduces to $[M_{X_1}(t)]\left[e^{5(e^t-1)}\right]^3 = e^{25(e^t-1)}$ so that $M_{X_1}(t) = \frac{e^{25(e^t-1)}}{\left[e^{5(e^t-1)}\right]^3} = \frac{e^{25(e^t-1)}}{e^{15(e^t-1)}} = e^{10(e^t-1)}$. This is the moment generating function of a Poisson random variable, so $X_1 \sim \text{POI}(10)$.

b) We have $M_W(t) = [M_{X_1}(t)][M_{X_2}(t)] = \left[e^{10(e^t-1)}\right]\left[e^{5(e^t-1)}\right] = e^{15(e^t-1)}$, which is the moment generating function of a Poisson(15) random variable, so $W \sim \text{POI}(15)$. We can see a general pattern here; if $X_i \sim \text{POI}(\lambda_i)$ for $i = 1, \dots, n$ are independent random variables and we define $Y = \sum_{i=1}^n X_i$, then we have that $Y \sim \text{POI}\left(\sum_{i=1}^n \lambda_i\right)$.
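The deconvolution in part a) can be checked by simulation: drawing $X_1 \sim \text{POI}(10)$ plus three $\text{POI}(5)$ variables should reproduce the moments of $\text{POI}(25)$. The Knuth sampler and tolerances below are our own sketch, not part of the original solution.

```python
import random, math

random.seed(3)

def poisson(lam):
    # Knuth's method: multiply uniforms until the product drops below e**(-lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n = 50_000
# If X1 ~ POI(10) and X2, X3, X4 ~ POI(5), the sum should behave as POI(25).
ys = [poisson(10) + poisson(5) + poisson(5) + poisson(5) for _ in range(n)]
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n
assert abs(mean - 25) < 0.2   # POI(25) has mean 25
assert abs(var - 25) < 0.7    # ... and variance 25
```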
Chapter #6 Functions of Random Variables

Question #17: Suppose that $X_1$ and $X_2$ denote a random sample of size 2 from a gamma distribution such that $X_i \sim \text{GAM}\left(2, \frac12\right)$. Find the PDF of a) $Y = X_1 + X_2$ and b) $W = \frac{X_1}{X_2}$.

a) We know that if some $X \sim \text{GAM}(\theta, \kappa)$, then $f_X(x) = \frac{1}{\Gamma(\kappa)\theta^\kappa}x^{\kappa-1}e^{-x/\theta}$ if $x > 0$. Since $X_1$ and $X_2$ have $\theta = 2$ and $\kappa = \frac12$, each has density $f(x) = \frac{1}{\sqrt{2\pi x}}e^{-x/2}$, since $\Gamma\left(\frac12\right) = \sqrt{\pi}$. We have the transformation $Y = X_1 + X_2$ and generate another variable to make it one-to-one; integrating the generated variable out gives $f_Y(y) = \frac12 e^{-y/2}$ for $y > 0$. The evaluation of this integral has been omitted, but can be computed by two substitutions.

b) We have $W = \frac{X_1}{X_2}$ and generate $V = X_1$, so that $x_1 = v$ and $x_2 = \frac vw$, which allows us to calculate $J = \det\begin{bmatrix} 0 & 1 \\ -v/w^2 & 1/w \end{bmatrix} = \frac{v}{w^2}$. Then we have $f_{WV}(w,v) = f_{X_1 X_2}\!\left(v, \frac vw\right)|J| = \frac{1}{\sqrt{2\pi v}}e^{-v/2}\cdot\frac{1}{\sqrt{2\pi v/w}}e^{-v/(2w)}\cdot\frac{v}{w^2} = \frac{1}{2\pi}w^{-3/2}e^{-(v + v/w)/2}$ if $w, v > 0$, so that the density of $W$ is given by $f_W(w) = \int_0^\infty f_{WV}(w,v)\,dv = \frac{1}{2\pi}w^{-3/2}\int_0^\infty e^{-\frac v2\left(1 + \frac1w\right)}dv = \frac{1}{2\pi}w^{-3/2}\cdot\frac{2w}{w+1} = \frac{1}{\pi\sqrt w\,(w+1)}$ if $w > 0$. The evaluation of this integral has been omitted, but can be computed by substitution.
Question #26: Let $X_1$ and $X_2$ be independent negative binomial random variables such that $X_1 \sim \text{NB}(r_1, p)$ and $X_2 \sim \text{NB}(r_2, p)$. a) Find the MGF and distribution of $Y = X_1 + X_2$.

a) We use Theorem 6.4.3, which says that if the random variables $X_i$ are independent with respective MGFs $M_{X_i}(t)$, then the MGF of the random variable that is their sum is simply the product of their respective MGFs. Also, if some discrete random variable $X \sim \text{NB}(r, p)$, then $M_X(t) = \left(\frac{pe^t}{1 - qe^t}\right)^r$ where $q = 1 - p$. Therefore, the moment generating function of $Y$ is $M_Y(t) = [M_{X_1}(t)][M_{X_2}(t)] = \left(\frac{pe^t}{1 - qe^t}\right)^{r_1}\left(\frac{pe^t}{1 - qe^t}\right)^{r_2} = \left(\frac{pe^t}{1 - qe^t}\right)^{r_1 + r_2}$. This then allows us to conclude that $Y \sim \text{NB}(r_1 + r_2, p)$.
c) We have that () = (1 ) = (1 ) (2 ), so the random variable () is the
2
Question #28: Let $X_1$ and $X_2$ be a random sample of size 2 from a continuous distribution with PDF of the form $f(x) = 2x$ if $0 < x < 1$ and zero otherwise. a) Find the marginal densities of $Y_1$ and $Y_2$, the smallest and largest order statistics, b) find the joint probability density function of $Y_1$ and $Y_2$, and c) find the density of the sample range $R = Y_2 - Y_1$.

a) Since $f(x) = 2x$ for $x \in (0,1)$ and zero otherwise, we know that $F(x) = 0$ for $x \in (-\infty, 0]$, $F(x) = x^2$ for $x \in (0,1)$, and $F(x) = 1$ for $x \in [1, \infty)$. Then from Theorem 6.5.2, we have that $g_1(y_1) = n\,f(y_1)[1 - F(y_1)]^{n-1}$, so we can calculate the smallest order statistic as $g_1(y_1) = 2[2y_1][1 - y_1^2]^{2-1} = 4y_1 - 4y_1^3$ whenever $y_1 \in (0,1)$. Similarly, $g_n(y_n) = n\,f(y_n)[F(y_n)]^{n-1}$, so we can calculate the largest order statistic as $g_2(y_2) = 2[2y_2][y_2^2]^{2-1} = 4y_2^3$ whenever $y_2 \in (0,1)$.

b) From Theorem 6.5.1, the joint probability density function of the order statistics is $g(y_1, \dots, y_n) = n!\,f(y_1)\cdots f(y_n)$. In this question, we have that $g_{Y_1 Y_2}(y_1, y_2) = 2!\,f(y_1)f(y_2) = 2(2y_1)(2y_2) = 8y_1 y_2$ whenever we have $0 < y_1 < y_2 < 1$.

c) We first find the joint density of the smallest and largest order statistics in order to make a transformation to get the marginal density of the sample range. From the work we did above, we have that $g_{Y_1 Y_2}(y_1, y_2) = 8y_1 y_2$. We have the transformation $R = Y_2 - Y_1$ and generate $S = Y_1$, so we have $y_1 = s$ and $y_2 = s + r$, which allows us to calculate $J = \det\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} = -1$. The joint density of $R$ and $S$ is therefore $f_{RS}(r,s) = g_{Y_1 Y_2}(s, s+r)\,|J| = 8s(s+r)(1) = 8s^2 + 8rs$ if $0 < s < s + r < 1$, which can also be written as $0 < s < 1 - r$. The marginal density is thus $f_R(r) = \int_0^{1-r}(8s^2 + 8rs)\,ds = \left[\frac83 s^3 + 4rs^2\right]_0^{1-r} = \frac83(1-r)^3 + 4r(1-r)^2$ if $0 < r < 1$ and zero otherwise.
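Integrating $r\,f_R(r)$ over $(0,1)$ gives $E(R) = \frac{2}{15} + \frac{2}{15} = \frac{4}{15}$, which a simulation of the range can confirm; the inverse-CDF sampler and tolerance below are our own sketch, not part of the original solution.

```python
import random

random.seed(4)
n = 200_000
# f(x) = 2x on (0,1) has CDF x**2, so inverse-CDF sampling gives X = sqrt(U).
ranges = []
for _ in range(n):
    a, b = random.random() ** 0.5, random.random() ** 0.5
    ranges.append(abs(a - b))

# Integrating r * f_R(r) with f_R(r) = (8/3)(1-r)**3 + 4r(1-r)**2
# gives E(R) = 4/15.
mean_r = sum(ranges) / n
assert abs(mean_r - 4 / 15) < 0.005
```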
Question #31: Consider a random sample of size $n$ from an exponential distribution such that $X_i \sim \text{EXP}(1)$. Give the density of a) the smallest order statistic denoted by $Y_1$, b) the largest order statistic denoted by $Y_n$, c) the sample range of the order statistics $R = Y_n - Y_1$.

a) Since $f(x) = e^{-x}$ for $x \in (0, \infty)$ and zero otherwise, we have $F(x) = 0$ for $x \in (-\infty, 0]$ and $F(x) = 1 - e^{-x}$ for $x > 0$. Then we have that $g_1(y_1) = n\,f(y_1)[1 - F(y_1)]^{n-1} = n e^{-y_1}\left[1 - (1 - e^{-y_1})\right]^{n-1} = n e^{-y_1}\left(e^{-y_1}\right)^{n-1} = n e^{-n y_1}$ if $y_1 > 0$ and zero otherwise.

b) Similarly, $g_n(y_n) = n\,f(y_n)[F(y_n)]^{n-1} = n e^{-y_n}\left[1 - e^{-y_n}\right]^{n-1}$ if $y_n > 0$ and zero otherwise.

c) Since the exponential distribution has the memoryless property, the difference $R = Y_n - Y_1$ will not be conditional on the value of $Y_1$. This allows us to treat $Y_1 = 0$, so that $R = Y_n - Y_1 = Y_n - 0 = Y_n$. We then use the fact that the range of a set of $n$ order statistics from an exponential distribution is the same as the largest order statistic from a set of $n - 1$ order statistics. From above, we have that $g_n(y_n) = n e^{-y_n}\left[1 - e^{-y_n}\right]^{n-1}$, so substituting $n - 1$ gives $f_R(r) = (n-1)e^{-r}\left[1 - e^{-r}\right]^{n-2}$.
a) Since $X_i \sim \text{EXP}(1)$, we know that the density is given by $f(x) = e^{-x}$ for $x > 0$, and $F(x) = 0$ for $x \in (-\infty, 0]$ and $F(x) = 1 - e^{-x}$ for $x > 0$. The system in series fails whenever the earliest component fails, which happens at time $X_{(1)} = Y_1$, the first order statistic. Thus, the probability density function of the time to failure is therefore given by $g_1(y_1) = n\,f(y_1)[1 - F(y_1)]^{n-1} = 5e^{-y_1}\left[e^{-y_1}\right]^4 = 5e^{-5y_1}$ whenever $y_1 > 0$.

b) The system in parallel fails whenever the last component fails, which happens at time $X_{(5)} = Y_5$, the greatest order statistic. Thus, the density is $g_5(y_5) = n\,f(y_5)[F(y_5)]^{n-1} = 5e^{-y_5}\left[1 - e^{-y_5}\right]^4$ whenever $y_5 > 0$.
Question #33: Consider a random sample of size $n$ from a geometric distribution such that $X_i \sim \text{GEO}(p)$. Give the CDF of a) the minimum $Y_1$, b) the $r$th smallest $Y_r$, c) the maximum $Y_n$.

Question #30: If $X \sim \text{PAR}(\theta, \kappa)$, then $f(x) = \frac{\kappa/\theta}{(1 + x/\theta)^{\kappa+1}}$ if $x > 0$ and zero otherwise. Consider a random sample of size $n = 5$ from a Pareto distribution where $X_i \sim \text{PAR}(1, 2)$; that is, suppose that $X_1, \dots, X_5$ are drawn from the given Pareto distribution above. a) Find the joint PDF of the second and fourth order statistics given by $Y_2 = X_{(2)}$ and $Y_4 = X_{(4)}$, and b) find the joint PDF of the first three order statistics given by $Y_1 = X_{(1)}$, $Y_2 = X_{(2)}$ and $Y_3 = X_{(3)}$.

a) The CDF of the population is given by $F(x) = \int_0^x f(t)\,dt = \int_0^x \frac{2}{(1+t)^3}\,dt = 1 - \frac{1}{(1+x)^2}$ for $x > 0$.

b) From Theorem 6.5.4, we have $f(y_1, \dots, y_k) = \frac{n!}{(n-k)!}\left[1 - F(y_k)\right]^{n-k} f(y_1)\cdots f(y_k)$.
Question #1: Consider a random sample of size $n$ from a distribution with cumulative distribution function $F(x) = 1 - \frac1x$ whenever $1 \le x < \infty$ and zero otherwise. That is, let the random variables $X_1, \dots, X_n$ be iid from the distribution with CDF $F(x)$. a) Derive the CDF of the smallest order statistic given by $X_{(1)} = Y_{1:n}$, b) find the limiting distribution of $Y_{1:n}$; that is, if $F_{1:n}(x)$ denotes the CDF from above, find $\lim_{n\to\infty} F_{1:n}(x)$, c) find the limiting distribution of $Y_{1:n}^n$; that is, find the CDF of $Y_{1:n}^n$ and its limit as $n \to \infty$.

a) We have $F_{1:n}(x) = 1 - [1 - F(x)]^n = 1 - \left(\frac1x\right)^n$ whenever $x \ge 1$ and zero otherwise.

b) We have that $\lim_{n\to\infty} F_{1:n}(x) = 0$ for $x \le 1$ and $\lim_{n\to\infty}\left[1 - (1/x)^n\right] = 1$ for $x > 1$, so the limit is $G(x) = 0$ for $x \le 1$ and $G(x) = 1$ for $x > 1$, the CDF of a distribution degenerate at $x = 1$.

c) As before, we have $F_{Y_{1:n}^n}(x) = P(Y_{1:n}^n \le x) = P(Y_{1:n} \le x^{1/n}) = 1 - P(Y_{1:n} > x^{1/n}) = 1 - \left[P(X_1 > x^{1/n})\right]^n = 1 - \left[1 - \left(1 - \frac{1}{x^{1/n}}\right)\right]^n = 1 - \frac1x$ whenever $x \ge 1$. Therefore, it is clear that the limiting distribution of this sequence of random variables is given by $G(x) = 1 - \frac1x$ for $x \ge 1$ and $G(x) = 0$ for $x < 1$, since there is no dependence on $n$.
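Part c) is exact for every $n$, not just in the limit, and that is easy to check numerically; the sampler, sample sizes, and tolerances below are our own sketch, not part of the original solution.

```python
import random

random.seed(5)
n_sample, reps = 50, 100_000
# F(x) = 1 - 1/x for x >= 1 inverts to x = 1/(1 - u).
ws = []
for _ in range(reps):
    y1 = min(1.0 / (1.0 - random.random()) for _ in range(n_sample))
    ws.append(y1 ** n_sample)

# Part c) claims W = Y_{1:n}**n has CDF 1 - 1/w for every n, with no
# limit needed; check a few points.
for w in (2.0, 4.0, 10.0):
    empirical = sum(v <= w for v in ws) / reps
    assert abs(empirical - (1 - 1 / w)) < 0.01
```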
Question #2: Consider a random sample of size $n$ from a distribution with CDF given by $F(x) = \frac{1}{1 + e^{-x}}$ for all $x \in \mathbb{R}$. Find the limiting distribution of a) $Y_{n:n}$ and b) $Y_{n:n} - \ln(n)$.

a) We have $F_{n:n}(x) = P(Y_{n:n} \le x) = [F(x)]^n = \left(\frac{1}{1 + e^{-x}}\right)^n$ for all $x$. Since $\lim_{n\to\infty}\left[\left(\frac{1}{1 + e^{-x}}\right)^n\right] = 0$ for every fixed $x$, we conclude that $Y_{n:n}$ does not have a limiting distribution.

b) We calculate that $F_{Y_{n:n} - \ln n}(x) = P(Y_{n:n} - \ln n \le x) = P(Y_{n:n} \le x + \ln n) = F_{n:n}(x + \ln n) = \left(\frac{1}{1 + e^{-(x + \ln n)}}\right)^n = \left(\frac{1}{1 + e^{-x}/n}\right)^n$. Evaluating this limit gives $\lim_{n\to\infty}\left[F_{n:n}(x + \ln n)\right] = \lim_{n\to\infty}\left[\left(\frac{1}{1 + e^{-x}/n}\right)^n\right] = e^{-e^{-x}}$ for all $x$.
b) We have $F_{n:n}(x) = P(Y_{n:n} \le x) = [F(x)]^n = \left(1 - \frac{1}{x^2}\right)^n$ whenever $x > 1$. Thus, $F_{n:n}(x) = \left(1 - \frac{1}{x^2}\right)^n$ for $x > 1$ and zero for $x \le 1$, so $\lim_{n\to\infty} F_{n:n}(x) = 0$ for every $x$, since $\lim_{n\to\infty}\left[\left(1 - \frac{1}{x^2}\right)^n\right] = 0$ for $x > 1$. The limit is identically zero, so $Y_{n:n}$ has no limiting distribution.

c) We compute that $F_{n^{-1/2}Y_{n:n}}(x) = P\!\left(n^{-1/2}Y_{n:n} \le x\right) = P(Y_{n:n} \le \sqrt n\,x) = F_{n:n}(\sqrt n\,x) = \left(1 - \frac{1}{n x^2}\right)^n$ whenever $\sqrt n\,x > 1$, or $x > \frac{1}{\sqrt n}$. We can therefore compute the limit as $\lim_{n\to\infty} F_{n^{-1/2}Y_{n:n}}(x) = \lim_{n\to\infty}\left[\left(1 - \frac{1}{n x^2}\right)^n\right] = e^{-1/x^2}$ for $x > 0$, and $\lim_{n\to\infty}[0] = 0$ for $x \le 0$.
Question #5: Suppose that $Z_i \sim N(0,1)$ and that the $Z_i$ are all independent. Use moment generating functions to find the limiting distribution of $W_n = \frac{1}{\sqrt n}\sum_{i=1}^n\left(Z_i + \frac1n\right)$ as $n \to \infty$.

We have $W_n = \frac{\sum_{i=1}^n\left(Z_i + \frac1n\right)}{\sqrt n} = \frac{\left(\sum_{i=1}^n Z_i\right) + \left(\sum_{i=1}^n \frac1n\right)}{\sqrt n} = \frac{\sum_{i=1}^n Z_i}{\sqrt n} + \frac{1}{\sqrt n}$, so the MGF is $M_{W_n}(t) = \left[M_{\sum Z_i/\sqrt n}(t)\right]\left[e^{t/\sqrt n}\right]$, since $W_n$ is the sum of a random part and the constant $\frac{1}{\sqrt n}$, and adding a constant $c$ multiplies the MGF by $e^{ct}$. The MGF of a standard normal random variable with $\mu = 0$ and $\sigma^2 = 1$ is given by $M_Z(t) = e^{t^2/2}$, which allows us to calculate that $M_{\sum Z_i/\sqrt n}(t) = \left[M_Z\!\left(\frac{t}{\sqrt n}\right)\right]^n = \left[e^{(t/\sqrt n)^2/2}\right]^n = e^{t^2/2}$. Combining these gives $M_{W_n}(t) = \left[e^{t^2/2}\right]\left[e^{t/\sqrt n}\right]$. Then we can use Theorem 7.3.1 to calculate $\lim_{n\to\infty} M_{W_n}(t) = \lim_{n\to\infty}\left[e^{t^2/2}\right]\left[e^{t/\sqrt n}\right] = e^{t^2/2} = M_Z(t)$, which we know is the MGF of a standard normal, so the limiting distribution is $W_n \stackrel{d}{\to} N(0,1)$. Note that this is also a direct consequence of the Central Limit Theorem.
Question #9: Let $X_1, X_2, \dots, X_{100}$ be a random sample of size $n = 100$ from an exponential distribution such that each $X_i \sim \text{EXP}(1)$, and let $Y = X_1 + X_2 + \dots + X_{100}$. a) Give a normal approximation for the probability $P(Y > 110)$, and b) if $\bar X = \frac{Y}{100}$ is the sample mean, give a normal approximation for $P(1.1 < \bar X < 1.2)$. We have $E(Y) = E\left(\sum_{i=1}^{100} X_i\right) = \sum_{i=1}^{100} E(X_i) = 100$ and $\text{Var}(Y) = 100$, so $\sigma_Y = \sqrt{100} = 10$. For b), standardizing the sample mean gives $P(1.1 < \bar X < 1.2) \approx P(1 < Z < 2) = \Phi(2) - \Phi(1) = 0.9772 - 0.8413 = 0.1359$. Here, we have used the fact that $\mu = 1$ and $\sigma = 1$, which come from the population distribution $X_i \sim \text{EXP}(1)$.
Question #11: Let $X_i \sim \text{UNIF}(0,1)$ where $X_1, X_2, \dots, X_{20}$ are all independent. Find a normal approximation for the probability $P\left(\sum_{i=1}^{20} X_i \le 12\right)$.

We have $E\left(\sum_{i=1}^{20} X_i\right) = \sum_{i=1}^{20} E(X_i) = 20\cdot\frac12 = 10$ and $\text{Var}\left(\sum_{i=1}^{20} X_i\right) = \sum_{i=1}^{20}\text{Var}(X_i) = 20\cdot\frac{1}{12} = \frac53$. This allows us to find $P\left(\sum_{i=1}^{20} X_i \le 12\right) = P\left(\frac{\sum_{i=1}^{20} X_i - 10}{\sqrt{5/3}} \le \frac{12 - 10}{\sqrt{5/3}}\right) \approx P(Z \le 1.55) = \Phi(1.55) = 0.9394$.
Chapter #8 Statistics and Sampling Distributions
Question #1: Let $X_i$ denote the weight in pounds of a single bag of feed, where $X_i \sim N(101, 4)$. What is the probability that 20 bags will weigh at least 2,000 pounds?

Let $Y = \sum_{i=1}^{20} X_i$ where $X_i \sim N(101, 4)$. We have that $E(Y) = E\left(\sum_{i=1}^{20} X_i\right) = \sum_{i=1}^{20} E(X_i) = 2020$ and $\text{Var}(Y) = 20\cdot 4 = 80$, so $P(Y \ge 2000) = P\left(Z \ge \frac{2000 - 2020}{\sqrt{80}}\right) = P(Z \ge -2.24) = 1 - \Phi(-2.24) = 0.987$, where $Z \sim N(0,1)$.
Question #2: Let $X$ denote the diameter of a shaft and $Y$ the diameter of a bearing, where both $X$ and $Y$ are independent and $X \sim N(1, 0.0004)$ and $Y \sim N(1.01, 0.0009)$. a) If a shaft and bearing are selected at random, what is the probability that the shaft diameter will exceed the bearing diameter? b) Now assume equal variances ($\sigma_X^2 = \sigma_Y^2 = \sigma^2$) such that we have $X \sim N(1, \sigma^2)$ and $Y \sim N(1.01, \sigma^2)$. Find the value of $\sigma$ that will yield a probability of noninterference of 0.95 (which means the shaft diameter exceeds the bearing diameter).

b) We need $P(X - Y > 0) = 0.95$ where $X - Y \sim N(-0.01, 2\sigma^2)$, that is, $P\left(Z > \frac{0.01}{\sigma\sqrt 2}\right) = 0.95$. Since only the critical value $z = -1.645$ ensures that $P(Z > -1.645) = 0.95$, we must solve $\frac{0.01}{\sigma\sqrt 2} = -1.645$, which gives $\sigma \approx -0.004$. But since we must have $\sigma \ge 0$, no such $\sigma$ exists.
Question #3: Let $X_1, \dots, X_n$ be a random sample of size $n$ where they are iid such that $X_i \sim N(\mu, \sigma^2)$, and define $S_1 = \sum_{i=1}^n X_i$ and $S_2 = \sum_{i=1}^n X_i^2$. a) Find a statistic that is a function of $S_1$ and $S_2$ and unbiased for the parameter $\tau = 2\mu - 5\sigma^2$. b) Find a statistic that is unbiased for $\tau = \sigma^2 + \mu^2$. c) If $c$ is a constant and $Y_i = 1$ if $X_i \le c$ and zero otherwise, find a statistic that is a function of $Y_1, \dots, Y_n$ and is unbiased for $F(c) = \Phi\left(\frac{c - \mu}{\sigma}\right) = \int_{-\infty}^{(c-\mu)/\sigma}\phi(z)\,dz$.

a) We first find an estimator for $\mu$: $E(\bar X) = E\left(\frac1n\sum_{i=1}^n X_i\right) = \frac1n\sum_{i=1}^n E(X_i) = \frac1n\,n\mu = \mu$, so $\bar X = \frac{S_1}{n}$ is unbiased for $\mu$. Then for $\sigma^2$: $E(S^2) = E\left(\frac{1}{n-1}\sum_{i=1}^n(X_i - \bar X)^2\right) = \sigma^2$, and expanding the sum gives $S^2 = \frac{1}{n-1}\left[\sum_{i=1}^n X_i^2 - n\bar X^2\right] = \frac{1}{n-1}\left[S_2 - \frac{S_1^2}{n}\right]$. We thus have that $T = 2\,\frac{S_1}{n} - \frac{5}{n-1}\left(S_2 - \frac{S_1^2}{n}\right)$ satisfies $E(T) = 2\mu - 5\sigma^2 = \tau$, so $T$ is unbiased.

b) Since we found that $E(\bar X) = \mu$, we have $\text{Var}(\bar X) = \text{Var}\left(\frac1n\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n\text{Var}(X_i) = \frac{1}{n^2}\,n\sigma^2 = \frac{\sigma^2}{n}$, so that $E(\bar X^2) = \text{Var}(\bar X) + [E(\bar X)]^2 = \mu^2 + \frac{\sigma^2}{n}$. We previously found that $E(S^2) = \sigma^2$ with $S^2 = \frac{1}{n-1}\left[S_2 - \frac{S_1^2}{n}\right]$, so combining these we find that $T = \bar X^2 + \frac{n-1}{n}S^2$ satisfies $E(T) = \mu^2 + \frac{\sigma^2}{n} + \frac{n-1}{n}\sigma^2 = \mu^2 + \sigma^2$, which is an unbiased estimator; note that this statistic simplifies to $T = \frac{S_2}{n}$.

c) We have $P(Y_i = 1) = P(X_i \le c) = P\left(\frac{X_i - \mu}{\sigma} \le \frac{c - \mu}{\sigma}\right) = \Phi\left(\frac{c - \mu}{\sigma}\right) = F(c)$ and $E(Y_i) = 1\cdot P(Y_i = 1) + 0\cdot P(Y_i = 0) = P(Y_i = 1) = F(c)$. Then, $E(\bar Y) = E\left(\frac1n\sum_{i=1}^n Y_i\right) = \frac1n\sum_{i=1}^n E(Y_i) = \frac1n\,nF(c) = F(c)$, so $\bar Y = \frac1n\sum_{i=1}^n Y_i$ is unbiased for $F(c)$.
Since $X_1$ and $X_2$ are independent normal random variables, we know that their joint density function is $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1)f_{X_2}(x_2) = \left[\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x_1-\mu)^2}{2\sigma^2}}\right]\left[\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x_2-\mu)^2}{2\sigma^2}}\right] = \frac{1}{2\pi\sigma^2}e^{-\frac{1}{2\sigma^2}\left[(x_1-\mu)^2 + (x_2-\mu)^2\right]}$. We have the transformation $Y_1 = X_1 + X_2$ and $Y_2 = X_1 - X_2$, which can be solved to obtain $x_1 = \frac{y_1 + y_2}{2}$ and $x_2 = \frac{y_1 - y_2}{2}$. This allows us to calculate the Jacobian $J = \det\begin{bmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{bmatrix} = -\frac12$, so we can compute the joint density $f_{Y_1 Y_2}(y_1, y_2) = f_{X_1 X_2}\!\left(\frac{y_1+y_2}{2}, \frac{y_1-y_2}{2}\right)|J| = \frac12\cdot\frac{1}{2\pi\sigma^2}\exp\!\left[-\frac{1}{2\sigma^2}\left(\left(\frac{y_1+y_2}{2} - \mu\right)^2 + \left(\frac{y_1-y_2}{2} - \mu\right)^2\right)\right]$. After simplifying this expression, we have $f_{Y_1 Y_2}(y_1, y_2) = \frac{1}{4\pi\sigma^2}\,e^{-\frac{1}{4\sigma^2}\left[y_1 - 2\mu\right]^2}\,e^{-\frac{1}{4\sigma^2}y_2^2}$. Since the marginal densities can be separated, this shows that $Y_1$ and $Y_2$ are independent and normally distributed. Moreover, we see that $Y_1 \sim N(2\mu, 2\sigma^2)$ and $Y_2 \sim N(0, 2\sigma^2)$.
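A simulation confirms the claimed means and the zero covariance between the sum and the difference; the parameter values and tolerances below are our own sketch, not part of the original solution.

```python
import random

random.seed(7)
n = 100_000
mu, sigma = 1.0, 2.0
pairs = [(random.gauss(mu, sigma), random.gauss(mu, sigma)) for _ in range(n)]
y1 = [a + b for a, b in pairs]   # claimed N(2*mu, 2*sigma**2)
y2 = [a - b for a, b in pairs]   # claimed N(0, 2*sigma**2)

m1 = sum(y1) / n
m2 = sum(y2) / n
cov = sum((u - m1) * (v - m2) for u, v in zip(y1, y2)) / n

assert abs(m1 - 2 * mu) < 0.05
assert abs(m2) < 0.05
# Independence implies zero covariance between the sum and the difference.
assert abs(cov) < 0.1
```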
$P\left[\chi^2(2) \le 6\right] = 0.95$. Note that we have used Corollary 8.3.4 to transform the question into one using the chi-square distribution, since $\sum_{i=1}^n\left(\frac{X_i - \mu}{\sigma}\right)^2 \sim \chi^2(n)$. This is because $X_i \sim N(\mu, \sigma^2)$ implies that $\frac{X_i - \mu}{\sigma} \sim N(0,1)$, so $\left(\frac{X_i - \mu}{\sigma}\right)^2 \sim \chi^2(1)$, and the sum of $n$ independent $\chi^2(1)$ random variables is $\chi^2(n)$.
Question #8: Suppose that $X$ and $Y$ are independent and distributed $X \sim \chi^2(m)$ and $Y \sim \chi^2(n)$. Is the random variable $V = X - Y$ distributed chi-square if we have $m > n$?

No. The random variable $V = X - Y$ can clearly take on negative values, whereas a random variable following the chi-square distribution must be positive.

We know that if $Z \sim N(0,1)$ and $V \sim \chi^2(k)$ are independent random variables, then the distribution of $T = \frac{Z}{\sqrt{V/k}}$ is Student's t distribution. But then we can square this to produce $T^2 = \frac{Z^2}{V/k} = \frac{Z^2/1}{V/k}$, which makes it clear that $T^2 \sim F(1, k)$. The reason for this is that we know if some $Z \sim N(0,1)$, then $Z^2 \sim \chi^2(1)$. Moreover, we are already given that $V \sim \chi^2(k)$. Combining these results with the fact that if some $V_1 \sim \chi^2(\nu_1)$ and $V_2 \sim \chi^2(\nu_2)$ are independent, then the random variable $F = \frac{V_1/\nu_1}{V_2/\nu_2} \sim F(\nu_1, \nu_2)$. Therefore, $T^2$ follows the $F(1, k)$ distribution.
2 2
a) 1 2 ~( , 2 + 2 ) (0,2 2 )
b) 2 + 23 ~( + 2, 2 + 4 2 ) (3, 5 2 )
1 2 1 2
d) 2
~( 1) since 1 2 ~(0,2 2 ) implies that 2
~(0,1) and dividing this
by the sample standard deviation of the Z sample makes it clear that ~( 1).
( ) (1)2
e) ~( 1) since = / ~(0,1), = ~ 2 ( 1) and we can write
2
( )
= = ~( 1) by the definition of the t distribution (see above).
/(1)
1 1 1 1
h) ~(1) since = 22 ~ 2 (1) and we can write = = ~(1).
22 22 22 /1 /1
12 12 1 /1
i) ~(1,1) since 1 = 12 ~ 2 (1), 2 = 22 ~ 2 (1) and we have = = ~(1,1).
22 22 2 /1
1
j) ~(1,0) since we can generate the joint transformation = 1 and = 2 ,
2 2
1
calculate the joint density (, ) and integrate out to find () = (2 +1).
k) the distribution is unknown.
( )
l) ~() since = / ~(0,1) and = =1 2 ~ 2 () and we can write the
2
=1
)
(
( )
expression =
= ~() by the definition of the distribution.
/
2
=1
=1
2
( )2 ( )2
m) =1 + =1( )2 ~ 2 ( + 1) since =1 ~ 2 () by Corollary
2 2
(1) 2
8.3.4 and =1( )2 = ( 1)2 = 12 ~ 2 ( 1) by Theorem 8.3.6. Thus,
we have the sum of two chi-square random variables so we sum the parameters.
1 1 1 2
n) + =1 ~ (2 , 2 + ) since ~ (, ) implies that the random variable
2
1 1 1 1 21
= =1 (2 ) ~ (=1 (2 ) , =1 (2 ) 2 ) (2 , 2 ). Also, we have
2
1 1
=1 = ~ (0, ) so the distribution of their sum is normal and we sum their
1 1 1
respective means and variances to conclude that 2 + =1 ~ (2 , 2 + ).
2
o) 2 ~ 2 (1) since ~(0,1), so it must be that () = 2 ~ 2 (1).
(1) 2
=1( )
p)
(1)2 =1( )2
~( 1, 1) since we can simplify the random variable as
12 2
=1( )
(1) 2
=1( ) 12 2
(1) =1( )2
2
= 1
2
= 2 2 and 12 2 ~ 2 ( 1) and 2 2 ~ 2 ( 1). We
2
=1( )
1
thus have the ratio of two chi-square random variables over their respective degrees
of freedom, which we know follows the F distribution.
Question #18: Assume that $Z \sim N(0,1)$, $V_1 \sim \chi^2(5)$ and $V_2 \sim \chi^2(9)$ are all independent. Then compute the probability that a) $P(V_1 + V_2 < 8.6)$, b) $P\left(\frac{Z}{\sqrt{V_1/5}} < 2.015\right)$, c) $P(Z > 0.611\sqrt{V_2})$, d) $P\left(\frac{V_1}{V_2} < 1.45\right)$, and e) find the value of $b$ such that $P\left(\frac{V_1}{V_1 + V_2} < b\right) = 0.9$.

b) $T = \frac{Z}{\sqrt{V_1/5}} \sim t(5)$, so we can compute $P(T < 2.015) = 0.95$ using the t-table.

c) We wish to compute $P(Z > 0.611\sqrt{V_2}) = P\left(\frac{Z}{\sqrt{V_2}} > 0.611\right) = P\left(\frac{Z}{\sqrt{V_2/9}} > 0.611\sqrt 9\right) = P(T > 1.833) = 0.05$, from using the t-table since we know that $T = \frac{Z}{\sqrt{V_2/9}} \sim t(9)$.

d) We wish to compute $P\left(\frac{V_1}{V_2} < 1.45\right) = P\left(\frac{V_1/5}{V_2/9} < 1.45\cdot\frac95\right) = P\left(\frac{V_1/5}{V_2/9} < 2.61\right)$. We know that $\frac{V_1/5}{V_2/9} \sim F(5,9)$, so we can use the F-table to compute the desired probability as $P\left(\frac{V_1/5}{V_2/9} < 2.61\right) = 0.9$.

e) We wish to compute $b$ such that $P\left(\frac{V_1}{V_1 + V_2} < b\right) = P\left(\frac{V_1 + V_2}{V_1} > \frac1b\right) = P\left(1 + \frac{V_2}{V_1} > \frac1b\right) = P\left(\frac{V_2}{V_1} > \frac1b - 1\right) = P\left(\frac{V_2/9}{V_1/5} > \frac59\left(\frac1b - 1\right)\right) = 0.9$. But we know that $F = \frac{V_2/9}{V_1/5} \sim F(9,5)$, so we can use tables to find that $P(F > 0.383) = 0.9$. This means that we must solve the equation $\frac59\left(\frac1b - 1\right) = 0.383$, so $\frac1b = \frac95(0.383) + 1$ and $b = \frac{1}{\frac95(0.383) + 1} = 0.592$.
Question #19: Suppose that $T \sim t(1)$. a) Show that the CDF of $T$ is $F(t) = \frac12 + \frac{\arctan(t)}{\pi}$, and b) show that the $100\gamma$th percentile is given by $t_\gamma = \tan\left[\pi\left(\gamma - \frac12\right)\right]$.

a) If some $T \sim t(\nu)$, then its density is given by $f(t) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac\nu2\right)\sqrt{\nu\pi}}\left(1 + \frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}$. When $\nu = 1$, we have $f(t) = \frac{\Gamma(1)}{\Gamma\left(\frac12\right)\sqrt\pi}\left(1 + t^2\right)^{-1} = \frac{1}{\pi(1 + t^2)}$ since $\Gamma(1) = 1$ and $\Gamma\left(\frac12\right) = \sqrt\pi$. We thus have that $f(t) = \frac1\pi\cdot\frac{1}{1+t^2}$ when $\nu = 1$, which is the density of a Cauchy random variable. To find the cumulative distribution, we simply compute $F(t) = \int_{-\infty}^t\frac1\pi\cdot\frac{1}{1+x^2}\,dx = \frac1\pi\left[\arctan(x)\right]_{-\infty}^t = \frac1\pi\left(\arctan(t) - \left(-\frac\pi2\right)\right) = \frac{\arctan(t)}{\pi} + \frac12$.

b) The $100\gamma$th percentile is the value $t_\gamma$ such that $F(t_\gamma) = \gamma$. From the work above, we have $\frac{\arctan(t_\gamma)}{\pi} + \frac12 = \gamma$, so $\arctan(t_\gamma) = \pi\left(\gamma - \frac12\right)$ and $t_\gamma = \tan\left[\pi\left(\gamma - \frac12\right)\right]$. This proves that the $100\gamma$th percentile is given by $F^{-1}(\gamma) = \tan\left[\pi\left(\gamma - \frac12\right)\right]$.
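The CDF and percentile formulas invert each other exactly, which can be verified numerically with no simulation at all; this check is our own addition, not part of the original solution.

```python
import math

# CDF of t(1) derived above: F(t) = 1/2 + arctan(t)/pi.
def cauchy_cdf(t):
    return 0.5 + math.atan(t) / math.pi

# Percentile formula: t_gamma = tan(pi * (gamma - 1/2)).
for gamma in (0.1, 0.25, 0.75, 0.9):
    t_gamma = math.tan(math.pi * (gamma - 0.5))
    assert abs(cauchy_cdf(t_gamma) - gamma) < 1e-12

# A familiar special case: the 75th percentile of the Cauchy is 1.
assert abs(math.tan(math.pi * 0.25) - 1.0) < 1e-12
```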
Chapter #8 Statistics and Sampling Distributions
Since $X \sim \text{BETA}(a, b)$, its PDF is $f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1-x)^{b-1}$ whenever $0 < x < 1$ and $a > 0$, $b > 0$. Then using the definition of expected value, we can compute $E(X^k) = \int_0^1 x^k f(x)\,dx = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\int_0^1 x^{a+k-1}(1-x)^{b-1}\,dx = \frac{\Gamma(a+b)\Gamma(a+k)}{\Gamma(a)\Gamma(a+b+k)}$, since we have $\int_0^1\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1-x)^{b-1}\,dx = 1$, so we can solve for the integral to conclude that $\int_0^1 x^{a-1}(1-x)^{b-1}\,dx = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$. In this case, we are solving $\int_0^1 x^{a+k-1}(1-x)^{b-1}\,dx = \frac{\Gamma(a+k)\Gamma(b)}{\Gamma(a+k+b)}$. Therefore, all of the moments of the beta distribution for some fixed $k > 0$ can be written in terms of the gamma function, which can be evaluated numerically.
Question #24: Suppose that $Y_n \sim \chi^2(n)$. Use moment generating functions to find the limiting distribution of the transformed random variable $Z_n = \frac{Y_n - n}{\sqrt{2n}}$ as $n \to \infty$.

This result follows directly from the Central Limit Theorem. If we let $Y_n = \sum_{i=1}^n X_i$ where $X_i \sim \chi^2(1)$ for $i = 1, \dots, n$, then $Y_n \sim \chi^2(n)$ so that $E(X_i) = 1$, $\text{Var}(X_i) = 2$, and therefore $E(Y_n) = n$ and $\text{Var}(Y_n) = 2n$. Therefore, $Z_n = \frac{Y_n - E(Y_n)}{\sqrt{\text{Var}(Y_n)}} = \frac{Y_n - n}{\sqrt{2n}} \stackrel{d}{\to} N(0,1)$ as $n \to \infty$. We will now prove this result using moment generating functions. By the definition of MGFs, we have $M_{Z_n}(t) = E\left[e^{t\frac{Y_n - n}{\sqrt{2n}}}\right] = e^{-\frac{tn}{\sqrt{2n}}}E\left[e^{\frac{t}{\sqrt{2n}}Y_n}\right] = e^{-\frac{tn}{\sqrt{2n}}}M_{Y_n}\!\left(\frac{t}{\sqrt{2n}}\right) = e^{-\frac{tn}{\sqrt{2n}}}\left(1 - \frac{2t}{\sqrt{2n}}\right)^{-n/2}$, using the chi-square MGF $M_{Y_n}(t) = (1 - 2t)^{-n/2}$. In order to evaluate $\lim_{n\to\infty}M_{Z_n}(t)$, we first take logarithms and then exponentiate the result. This implies that $\ln\left[M_{Z_n}(t)\right] = -\frac{tn}{\sqrt{2n}} - \frac n2\ln\!\left(1 - \frac{2t}{\sqrt{2n}}\right) = -t\sqrt{\frac n2} - \frac n2\ln\!\left(1 - t\sqrt{\frac2n}\right)$. From here, we use the Taylor series $-\ln(1 - v) = v + \frac{v^2}{2} + \frac{v^3}{3} + \cdots$ for $v = t\sqrt{\frac2n}$ to evaluate the limit, which then gives $\lim_{n\to\infty}\ln\left[M_{Z_n}(t)\right] = \lim_{n\to\infty}\left[-t\sqrt{\frac n2} + \frac n2\left(t\sqrt{\frac2n} + \frac{t^2}{2}\cdot\frac2n + \frac{t^3}{3}\left(\frac2n\right)^{3/2} + \cdots\right)\right] = \lim_{n\to\infty}\left[-t\sqrt{\frac n2} + t\sqrt{\frac n2} + \frac{t^2}{2} + \frac{\sqrt2\,t^3}{3\sqrt n} + \cdots\right] = \frac{t^2}{2} + 0 + \cdots$. This result therefore implies that the limit of the MGF is $e^{t^2/2}$, the MGF of a random variable that follows a standard normal distribution. This proves that the random variable $Z_n \stackrel{d}{\to} N(0,1)$ as $n \to \infty$, just as is guaranteed by the CLT.
Chapter #9 Point Estimation
Question #1: Assume that $X_1, \dots, X_n$ are independent and identically distributed with common density $f(x;\theta)$, where $\theta > 0$ is an unknown parameter. Find the method of moments estimator (MME) of $\theta$ if the density function is a) $f(x;\theta) = \theta x^{\theta - 1}$ for $0 < x < 1$, b) $f(x;\theta) = (\theta + 1)x^{-\theta - 2}$ whenever $x > 1$, and c) $f(x;\theta) = \theta^2 x e^{-\theta x}$ whenever $x > 0$.

a) We begin by computing the first population moment, so $E(X) = \int_0^1 x f(x;\theta)\,dx = \int_0^1 x\left(\theta x^{\theta-1}\right)dx = \theta\int_0^1 x^\theta\,dx = \frac{\theta}{\theta+1}\left[x^{\theta+1}\right]_0^1 = \frac{\theta}{\theta+1}(1 - 0) = \frac{\theta}{\theta+1}$. We therefore have $E(X) = \frac{\theta}{\theta+1}$. Next, we equate the first population moment with the first sample moment, which gives $\frac{\theta}{\theta+1} = \frac1n\sum_{i=1}^n X_i = \bar X$. Finally, we solve the equation $\frac{\theta}{\theta+1} = \bar X$ for $\theta$, which implies that $\hat\theta = \frac{\bar X}{1 - \bar X}$.

b) Just as above, we first compute $E(X) = \int_1^\infty x f(x;\theta)\,dx = \int_1^\infty x\left[(\theta+1)x^{-\theta-2}\right]dx = (\theta+1)\int_1^\infty x^{-\theta-1}\,dx = \frac{\theta+1}{\theta}\left[-x^{-\theta}\right]_1^\infty = \frac{\theta+1}{\theta}\left[0 - (-1)\right] = \frac{\theta+1}{\theta}$. Thus, we have $E(X) = \frac{\theta+1}{\theta}$, which means that $\frac{\theta+1}{\theta} = \bar X$ and $\hat\theta = \frac{1}{\bar X - 1}$.

c) We have $E(X) = \int_0^\infty x f(x;\theta)\,dx = \int_0^\infty x\left(\theta^2 x e^{-\theta x}\right)dx = \theta^2\int_0^\infty x^2 e^{-\theta x}\,dx = \frac2\theta$ after doing integration by parts. We can also find this directly by noting that the density $f(x;\theta) = \theta^2 x e^{-\theta x}$ suggests that $X \sim \text{GAM}\left(\frac1\theta, 2\right)$. This then implies that $E(X) = \kappa\cdot\frac1\theta = 2\cdot\frac1\theta = \frac2\theta$. We therefore set $\frac2\theta = \bar X$ and then solve for the method of moments estimator, which is given by $\hat\theta = \frac{2}{\bar X}$.
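The estimator from part a) can be checked by simulating from the known density and confirming the MME recovers the true parameter; the true $\theta$, sample size, and tolerance below are our own choices, not part of the original solution.

```python
import random

random.seed(8)
theta = 2.0
n = 200_000
# f(x; theta) = theta * x**(theta-1) on (0,1) has CDF x**theta,
# so inverse-CDF sampling gives X = U**(1/theta).
xs = [random.random() ** (1 / theta) for _ in range(n)]
xbar = sum(xs) / n

# Part a): E(X) = theta/(theta+1), so the MME is theta_hat = xbar/(1 - xbar).
theta_hat = xbar / (1 - xbar)
assert abs(theta_hat - theta) < 0.05
```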
Question #2: Assume that $X_1, \dots, X_n$ are independent and identically distributed. Find the method of moments estimator (MME) of the unknown parameters if the random sample comes from a) $X \sim \text{BIN}(3, p)$, b) $X \sim \text{GAM}(2, \kappa)$, c) $X \sim \text{WEI}\left(\theta, \frac12\right)$, and d) $X \sim \text{PAR}(\theta, \kappa)$.

a) Since $X \sim \text{BIN}(3, p)$, we know that $E(X) = np = 3p$. Equating this with the first sample moment gives $3p = \bar X$, so the estimator is $\hat p = \frac{\bar X}{3}$.

c) Since $X \sim \text{WEI}\left(\theta, \frac12\right)$, we know that $E(X) = \theta\,\Gamma\left(1 + \frac{1}{1/2}\right) = \theta\,\Gamma(3) = \theta(3-1)! = 2\theta$. Thus, we have $2\theta = \bar X$, so the estimator is $\hat\theta = \frac{\bar X}{2}$.

d) Since $X \sim \text{PAR}(\theta, \kappa)$, we have $M_1' = E(X) = \frac{\theta}{\kappa - 1}$ and $M_2' = E(X^2) = \frac{\theta^2\kappa}{(\kappa-2)(\kappa-1)^2} + \frac{\theta^2}{(\kappa-1)^2} = \frac{2\theta^2}{(\kappa-1)(\kappa-2)}$. This means that $\frac{\theta}{\kappa-1} = \bar X$ and $\frac{2\theta^2}{(\kappa-1)(\kappa-2)} = \frac1n\sum_{i=1}^n X_i^2 = m_2$. We must solve for the unknown parameters $\theta$ and $\kappa$ in terms of the two sample moments $\bar X$ and $m_2$. From the first equation, we can solve to find $\theta = (\kappa - 1)\bar X$ and substitute into the second equation to find $\frac{2(\kappa-1)^2\bar X^2}{(\kappa-1)(\kappa-2)} = m_2$, that is, $\frac{2(\kappa-1)\bar X^2}{\kappa - 2} = m_2$. But this means $2\kappa\bar X^2 - 2\bar X^2 = \kappa m_2 - 2m_2$, so that $\kappa\left(2\bar X^2 - m_2\right) = 2\bar X^2 - 2m_2$. Finally, we divide through to find $\hat\kappa = \frac{2\bar X^2 - 2m_2}{2\bar X^2 - m_2} = \frac{2(m_2 - \bar X^2)}{m_2 - 2\bar X^2}$. Plugging in to the other equation implies that $\hat\theta = (\hat\kappa - 1)\bar X$, so that the two method of moments estimators are $\hat\kappa = \frac{2(m_2 - \bar X^2)}{m_2 - 2\bar X^2}$ and $\hat\theta = (\hat\kappa - 1)\bar X$.
Question #3: Assume that $X_1, \dots, X_n$ are independent and identically distributed with common density $f(x;\theta)$, where $\theta > 0$ is an unknown parameter. Find the maximum likelihood estimator (MLE) for $\theta$ when the PDF is a) $f(x;\theta) = \theta x^{\theta - 1}$ whenever $0 < x < 1$, b) $f(x;\theta) = (\theta + 1)x^{-\theta - 2}$ whenever $x > 1$, and c) $f(x;\theta) = \theta^2 x e^{-\theta x}$ whenever $x > 0$.

a) We first find the likelihood function based on the joint density of $X_1, \dots, X_n$, which is $L(\theta) = f(x_1;\theta)\cdots f(x_n;\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n \theta x_i^{\theta-1} = \theta^n(x_1\cdots x_n)^{\theta-1}$. Next, we construct the log likelihood function, since it is easier to differentiate and achieves a maximum at the same point as the likelihood function. This gives $\ln L(\theta) = \ln\left[\theta^n(x_1\cdots x_n)^{\theta-1}\right] = n\ln(\theta) + (\theta-1)\left[\ln(x_1) + \cdots + \ln(x_n)\right]$, which we differentiate so $\frac{d}{d\theta}\ln L(\theta) = \frac{d}{d\theta}\left[n\ln(\theta) + (\theta-1)\sum_{i=1}^n\ln(x_i)\right] = \frac n\theta + \sum_{i=1}^n\ln(x_i)$. We then solve for the value of $\theta$ which makes the derivative equal zero, so $\frac n\theta + \sum_{i=1}^n\ln(x_i) = 0$ gives $\hat\theta = -\frac{n}{\sum_{i=1}^n\ln(x_i)}$. Since it is clear that the second derivative of $\ln L(\theta)$ is negative, we have found that the maximum likelihood estimator is $\hat\theta = -\frac{n}{\sum_{i=1}^n\ln(X_i)}$. (Note that $\sum_{i=1}^n\ln(X_i) < 0$ since each $0 < X_i < 1$, so $\hat\theta > 0$.)
Setting the derivative equal to zero and solving gives $\frac{\sum_{i=1}^n x_i}{p} - \frac{n - \sum_{i=1}^n x_i}{1-p} = 0$, so $(1-p)\sum_{i=1}^n x_i = p\left(n - \sum_{i=1}^n x_i\right)$, which gives $\sum_{i=1}^n x_i - p\sum_{i=1}^n x_i = pn - p\sum_{i=1}^n x_i$ and $\hat p = \frac1n\sum_{i=1}^n x_i = \bar x$. Since the second derivative will be negative, we have found that $\hat p = \bar X$.

b) Since $f(x;p) = p(1-p)^{x-1}$, we have $L(p) = \prod_{i=1}^n f(x_i;p) = \prod_{i=1}^n p(1-p)^{x_i - 1} = p^n(1-p)^{\left[\sum_{i=1}^n x_i\right] - n}$, and then the log likelihood function becomes $\ln L(p) = \ln\left[p^n(1-p)^{\left[\sum x_i\right]-n}\right] = n\ln(p) + \left\{\left[\sum_{i=1}^n x_i\right] - n\right\}\ln(1-p)$. Differentiating gives $\frac{d}{dp}\ln L(p) = \frac np - \frac{\left[\sum_{i=1}^n x_i\right] - n}{1-p}$. Equating this with zero implies $\frac np = \frac{\left[\sum x_i\right] - n}{1-p}$, so $n(1-p) = p\left[\sum_{i=1}^n x_i\right] - pn$, which gives $n = p\sum_{i=1}^n x_i$ and $\hat p = \frac{n}{\sum_{i=1}^n x_i} = \frac{1}{\bar x}$. Since the second derivative will be negative, we have found that $\hat p = \frac{1}{\bar X}$.
c) Since $X \sim \text{BIN}(3, p)$, we have $f(x;p) = \binom3x p^x(1-p)^{3-x} = \frac{3!}{x!(3-x)!}p^x(1-p)^{3-x}$. This implies that the likelihood function is $L(p) = \prod_{i=1}^n f(x_i;p) = \left[\prod_{i=1}^n\binom{3}{x_i}\right]p^{\sum_{i=1}^n x_i}(1-p)^{3n - \sum_{i=1}^n x_i}$, so the log likelihood function is $\ln L(p) = \ln\left[\prod_{i=1}^n\binom{3}{x_i}\right] + \left[\sum_{i=1}^n x_i\right]\ln(p) + \left\{3n - \left[\sum_{i=1}^n x_i\right]\right\}\ln(1-p)$. Differentiating this then gives $\frac{d}{dp}\ln L(p) = \frac{\sum_{i=1}^n x_i}{p} - \frac{3n - \sum_{i=1}^n x_i}{1-p} = 0$, so $\hat p = \frac{\sum_{i=1}^n x_i}{3n} = \frac{\bar x}{3}$. Therefore, we have that $\hat p = \frac{\bar X}{3}$.

d) Since $X \sim \text{WEI}(\theta, 2)$, we have $f(x;\theta) = \frac{2}{\theta^2}x e^{-(x/\theta)^2}$. This means $L(\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n\frac{2}{\theta^2}x_i e^{-(x_i/\theta)^2} = \frac{2^n}{\theta^{2n}}\left(\prod_{i=1}^n x_i\right)e^{-\frac{1}{\theta^2}\sum_{i=1}^n x_i^2}$ so that $\ln L(\theta) = n\ln(2) - 2n\ln(\theta) + \sum_{i=1}^n\ln(x_i) - \frac{1}{\theta^2}\sum_{i=1}^n x_i^2$. Differentiating gives $\frac{d}{d\theta}\ln L(\theta) = -\frac{2n}{\theta} + \frac{2}{\theta^3}\sum_{i=1}^n x_i^2$. Then we solve $-\frac{2n}{\theta} + \frac{2}{\theta^3}\sum_{i=1}^n x_i^2 = 0$, so $\theta^2 = \frac1n\sum_{i=1}^n x_i^2$ and $\hat\theta = \sqrt{\frac1n\sum_{i=1}^n X_i^2}$. Since the second derivative will be negative at this point, we have found that $\hat\theta = \left(\frac1n\sum_{i=1}^n X_i^2\right)^{1/2}$.
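The Weibull MLE in part d) can be checked by simulation using inverse-CDF sampling; the true $\theta$, sample size, and tolerance below are our own choices, not part of the original solution.

```python
import random, math

random.seed(9)
theta = 3.0
n = 200_000
# WEI(theta, 2) has CDF 1 - exp(-(x/theta)**2), so X = theta * sqrt(-ln U).
xs = [theta * math.sqrt(-math.log(1.0 - random.random())) for _ in range(n)]

# Part d): the MLE is theta_hat = sqrt of the mean of the xi**2.
theta_hat = math.sqrt(sum(x * x for x in xs) / n)
assert abs(theta_hat - theta) < 0.05
```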
e) Since $X \sim \text{WEI}\left(\theta, \frac12\right)$, we have $f(x;\theta) = \frac{1}{2\sqrt\theta}x^{-1/2}e^{-\sqrt{x/\theta}}$. Thus, we have $L(\theta) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n\frac{1}{2\sqrt\theta}x_i^{-1/2}e^{-\sqrt{x_i/\theta}} = \frac{1}{2^n\theta^{n/2}}\left(\prod_{i=1}^n x_i^{-1/2}\right)e^{-\frac{1}{\sqrt\theta}\sum_{i=1}^n\sqrt{x_i}}$, so that the log of the likelihood function is $\ln L(\theta) = -n\ln(2) - \frac n2\ln(\theta) - \frac12\sum_{i=1}^n\ln(x_i) - \frac{1}{\sqrt\theta}\sum_{i=1}^n\sqrt{x_i}$. Differentiating this gives $\frac{d}{d\theta}\ln L(\theta) = -\frac{n}{2\theta} + \frac{\sum_{i=1}^n\sqrt{x_i}}{2\theta^{3/2}}$. Setting this equal to zero and solving implies $-\frac{n}{2\theta} + \frac{\sum_{i=1}^n\sqrt{x_i}}{2\theta^{3/2}} = 0$, so $\sqrt\theta = \frac1n\sum_{i=1}^n\sqrt{x_i}$ and $\hat\theta = \left[\frac1n\sum_{i=1}^n\sqrt{x_i}\right]^2$. Therefore, we have found $\hat\theta = \left[\frac1n\sum_{i=1}^n\sqrt{X_i}\right]^2$.

f) Since $X \sim \text{PAR}(1, \kappa)$, we have $f(x;\kappa) = \frac{\kappa}{(1+x)^{\kappa+1}}$, so the likelihood function is $L(\kappa) = \prod_{i=1}^n\frac{\kappa}{(1+x_i)^{\kappa+1}}$.
Question #7: Let $X_1, \dots, X_n$ be a random sample from $X \sim \text{GEO}(p)$. Find the maximum likelihood estimator (MLE) for a) $E(X) = \frac1p$, b) $\text{Var}(X) = \frac{1-p}{p^2}$, and c) $P(X > c) = (1-p)^c$, where $c \in \{1, 2, \dots\}$. Do it both ways for each part to verify the Invariance Property.

a) The likelihood is $L(p) = \prod_{i=1}^n p(1-p)^{x_i - 1} = p^n(1-p)^{\sum_{i=1}^n x_i - n}$. Setting the derivative of the log likelihood equal to zero and solving for $p$ gives $\frac np - \frac{\sum_{i=1}^n x_i - n}{1-p} = 0$, so $n(1-p) = p\left(\sum_{i=1}^n x_i - n\right)$ and $n = p\sum_{i=1}^n x_i$. This then implies that $\hat p = \frac{n}{\sum_{i=1}^n x_i} = \frac{1}{\bar x}$. Since the second derivative will be negative, we have found that $\hat p = \frac{1}{\bar X}$. By the Invariance Property of the Maximum Likelihood Estimator, we have that $\tau(\hat p) = \frac{1}{\hat p} = \bar X$ as the MLE for $\tau(p) = E(X) = \frac1p$.

b) Since $\hat p = \frac{1}{\bar X}$ and $\tau(p) = \text{Var}(X) = \frac{1-p}{p^2}$, then $\tau(\hat p) = \frac{1 - 1/\bar X}{(1/\bar X)^2} = \bar X^2\left(1 - \frac{1}{\bar X}\right) = \bar X(\bar X - 1)$, by the Invariance Property.

c) Since $\hat p = \frac{1}{\bar X}$ and $\tau(p) = P(X > c) = (1-p)^c$, then $\tau(\hat p) = (1 - \hat p)^c = \left(1 - \frac{1}{\bar X}\right)^c$, by the Invariance Property.
Question #12: Let $X_1, \dots, X_n$ be a random sample from $X \sim \text{LOGN}(\mu, \sigma^2)$. Find the maximum likelihood estimator (MLE) for a) the parameters $\mu$ and $\sigma^2$, and b) $\tau(\mu, \sigma^2) = E(X)$.

a) We have that the density function of $X$ is $f(x;\mu,\sigma^2) = \frac{1}{x\sqrt{2\pi\sigma^2}}e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}$, so we differentiate the log likelihood with respect to both parameters and set the resulting expressions equal to zero so we can simultaneously solve for the parameters, so $\frac{\partial}{\partial\mu}\ln L(\mu, \sigma^2) = \frac{1}{\sigma^2}\sum_{i=1}^n(\ln x_i - \mu) = 0$ and $\frac{\partial}{\partial\sigma^2}\ln L(\mu, \sigma^2) = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n(\ln x_i - \mu)^2 = 0$. The first equation implies $\sum_{i=1}^n(\ln x_i - \mu) = 0$, so $\sum_{i=1}^n\ln x_i = n\mu$ and $\hat\mu = \frac1n\sum_{i=1}^n\ln x_i$; the second gives $\frac{1}{2\sigma^4}\sum_{i=1}^n(\ln x_i - \mu)^2 = \frac{n}{2\sigma^2}$, so $\sum_{i=1}^n(\ln x_i - \mu)^2 = n\sigma^2$ and $\hat\sigma^2 = \frac1n\sum_{i=1}^n(\ln x_i - \hat\mu)^2$. Thus, we have that the maximum likelihood estimators are $\hat\mu = \frac1n\sum_{i=1}^n\ln X_i$ and $\hat\sigma^2 = \frac1n\sum_{i=1}^n(\ln X_i - \hat\mu)^2$.

b) We know that $X \sim \text{LOGN}(\mu, \sigma^2)$ if and only if $Y = \ln(X) \sim N(\mu, \sigma^2)$. But $Y = \ln(X)$ if and only if $X = e^Y$, which implies that $E(X) = E(e^Y) = M_Y(1) = e^{\mu(1) + \frac{\sigma^2}{2}(1)^2} = e^{\mu + \frac{\sigma^2}{2}}$. By the Invariance Property of the Maximum Likelihood Estimator, we can conclude that $\tau(\hat\mu, \hat\sigma^2) = e^{\hat\mu + \frac{\hat\sigma^2}{2}}$ is the MLE for $\tau(\mu, \sigma^2) = E(X) = e^{\mu + \frac{\sigma^2}{2}}$.
is also an unbiased estimator for the parameter $\theta$; c) which one has a smaller variance? We must therefore compute the mean of the smallest and largest order statistics, which we can do by first finding their density functions. We first note that since $X \sim \text{UNIF}(\theta - 1, \theta + 1)$, then $f(x) = \frac{1}{(\theta+1) - (\theta-1)} = \frac12$ whenever $x \in (\theta - 1, \theta + 1)$ and $F(x) = \int_{\theta-1}^x\frac12\,dt = \frac12\left[t\right]_{\theta-1}^x = \frac{x - (\theta - 1)}{2}$ whenever $x \in (\theta - 1, \theta + 1)$.

c) We have that $\text{Var}(\bar X) = \text{Var}\left(\frac1n\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\text{Var}\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n\text{Var}(X_i) = \frac{1}{n^2}\sum_{i=1}^n\frac{[(\theta+1) - (\theta-1)]^2}{12} = \frac{1}{n^2}\sum_{i=1}^n\frac{4}{12} = \frac{1}{n^2}\cdot\frac n3 = \frac{1}{3n}$. Similarly, we can calculate that $\text{Var}\left(\frac{Y_{(1)} + Y_{(n)}}{2}\right) = \frac14\left(\text{Var}(Y_{(1)}) + \text{Var}(Y_{(n)}) + 2\,\text{Cov}(Y_{(1)}, Y_{(n)})\right)$.
Question #21: Let 1 , , be a random sample from ~(1, ). a) Find the Cramer-Rao
lower bound for the variances of all unbiased estimators of ; b) Find the Cramer-Rao lower
bound for the variances of unbiased estimators of (1 ); c) Find a UMVUE of .
2
[ ()]
a) We have that = 2 , so we compute each of these parts individually.
[( ln (;)) ]
2 2 2 2+2 2
( ln (; )) = ((1)) = . Finally, we compute [( ln (; )) ] =
2 (1)2
2 2+2 1 1
[ ] = 2 (1)2 ( 2 2 + 2 ) = 2 (1)2 [( 2 ) 2() + 2 ] =
2 (1)2
1 1
[((1 ) + 2 ) 2() + 2 ] = [ 2 + 2 22 + 2 ] =
2 (1)2 2 (1)2
1 (1) 1 (1)
[ 2 ] = = (1). Thus, we have found that = .
2 (1)2 2 (1)2
c) Since for the estimator $\hat{p} = \bar{X}$, we have $E(\hat{p}) = E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n}E\left(\sum_{i=1}^n X_i\right) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}\sum_{i=1}^n p = \frac{np}{n} = p$ and then $Var(\hat{p}) = Var(\bar{X}) = Var\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}Var\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{1}{n^2}\sum_{i=1}^n p(1-p) = \frac{np(1-p)}{n^2} = \frac{p(1-p)}{n}$, the estimator $\hat{p} = \bar{X}$ is unbiased and its variance attains the Cramer-Rao lower bound from part a), so it is a UMVUE of $p$.
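That $Var(\bar{X})$ attains the bound $p(1-p)/n$ can be checked by simulation. A small sketch; the values $p = 0.3$ and $n = 50$ are arbitrary illustrations:

```python
import random

random.seed(2)
p, n, reps = 0.3, 50, 20000
# empirical variance of the sample proportion over many replications
xbars = [sum(1 for _ in range(n) if random.random() < p) / n
         for _ in range(reps)]
m = sum(xbars) / reps
emp_var = sum((x - m) ** 2 for x in xbars) / reps
crlb = p * (1 - p) / n  # = 0.0042
```

The empirical variance of $\bar{X}$ over 20,000 replications should agree with the CRLB to within simulation noise.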
expression $E\left[\left(\frac{\partial}{\partial \mu}\ln f(X; \mu)\right)^2\right] = -E\left[\frac{\partial^2}{\partial \mu^2}\ln f(X; \mu)\right]$ and, since $\frac{\partial^2}{\partial \mu^2}\ln f(x; \mu) = -\frac{1}{9}$, we can conclude that the Cramer-Rao Lower Bound is $CRLB = \frac{1}{n(1/9)} = \frac{9}{n}$. This then means that $Var(T) \geq \frac{9}{n}$ for any unbiased estimator $T$ of the parameter $\mu$ in $X \sim N(\mu, 9)$.
b) We first verify that $E(\hat{\mu}) = E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n}E\left(\sum_{i=1}^n X_i\right) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}\sum_{i=1}^n \mu = \frac{n\mu}{n} = \mu$, so $\hat{\mu} = \bar{X}$ is an unbiased estimator for $\mu$. Then we compute $Var(\hat{\mu}) = Var(\bar{X}) = Var\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}Var\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{1}{n^2}\sum_{i=1}^n 9 = \frac{9n}{n^2} = \frac{9}{n}$, so that $\hat{\mu} = \bar{X}$ attains the bound and is a UMVUE for the parameter $\mu$.
a) We first find $\hat{\theta}$ by noting that since $X \sim N(0, \theta)$, its density function is $f(x; \theta) = \frac{1}{\sqrt{2\pi\theta}} e^{-\frac{x^2}{2\theta}}$, so the likelihood function is $L(\theta) = \prod_{i=1}^n f(x_i; \theta) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\theta}} e^{-\frac{x_i^2}{2\theta}} = (2\pi\theta)^{-n/2} e^{-\frac{1}{2\theta}\sum_{i=1}^n x_i^2}$ and then $\ln[L(\theta)] = -\frac{n}{2}\ln(2\pi\theta) - \frac{1}{2\theta}\sum_{i=1}^n x_i^2$. Next, we differentiate so that $\frac{d}{d\theta}\ln[L(\theta)] = -\frac{n}{2\theta} + \frac{1}{2\theta^2}\sum_{i=1}^n x_i^2 = 0 \Rightarrow \frac{1}{2\theta^2}\sum_{i=1}^n x_i^2 = \frac{n}{2\theta} \Rightarrow \theta = \frac{1}{n}\sum_{i=1}^n x_i^2$. Since the second derivative is negative at this critical point, we have the MLE $\hat{\theta} = \frac{1}{n}\sum_{i=1}^n X_i^2$. We verify unbiasedness by computing $E(\hat{\theta}) = E\left(\frac{1}{n}\sum_{i=1}^n X_i^2\right) = \frac{1}{n}E\left(\sum_{i=1}^n X_i^2\right) = \frac{1}{n}\sum_{i=1}^n E(X_i^2) = \frac{1}{n}\sum_{i=1}^n (\theta + 0^2) = \frac{n\theta}{n} = \theta$.
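A one-sample numerical sketch of the MLE $\hat{\theta} = \frac{1}{n}\sum X_i^2$; the true value $\theta = 2$ and the sample size are arbitrary choices for illustration:

```python
import random

random.seed(3)
theta, n = 2.0, 200000
# draw one large sample from N(0, theta) and compute the MLE
sample = [random.gauss(0.0, theta ** 0.5) for _ in range(n)]
theta_hat = sum(x * x for x in sample) / n  # mean of squares
```

With $n = 200{,}000$ the estimate should land very close to $\theta = 2$, consistent with unbiasedness.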
Question #31: Let $\hat{\theta}$ and $\tilde{\theta}$ be the MLE and MME estimators for the parameter $\theta$, where $X_1, \ldots, X_n$ is a random sample of size $n$ from a Uniform distribution such that $X \sim UNIF(0, \theta)$. Show that a) $\hat{\theta}$ is MSE consistent, and b) $\tilde{\theta}$ is MSE consistent.
a) We first derive the MLE for $\theta$. Since $X \sim UNIF(0, \theta)$, we know that the density function is $f(x; \theta) = \frac{1}{\theta}$ for $x \in (0, \theta)$. This allows us to construct the likelihood function $L(\theta) = \theta^{-n}\,1\{x_{n:n} < \theta\}$, which is maximized by taking $\theta$ as small as the data permit, so the MLE is the largest order statistic $\hat{\theta} = X_{n:n}$. To verify MSE consistency we must show that $\lim_{n\to\infty} E[(X_{n:n} - \theta)^2] = 0$, which requires the first and second moments of the largest order statistic. But we already know that $f_{n:n}(x) = n f(x; \theta)[F(x; \theta)]^{n-1} = \frac{n}{\theta}\left(\frac{x}{\theta}\right)^{n-1} = \frac{n x^{n-1}}{\theta^n}$, so we can calculate $E(X_{n:n}) = \int_0^\theta x f_{n:n}(x)\,dx = \int_0^\theta \frac{n x^n}{\theta^n}\,dx = \frac{n}{\theta^n}\left[\frac{x^{n+1}}{n+1}\right]_0^\theta = \frac{n}{\theta^n}\left(\frac{\theta^{n+1}}{n+1} - 0\right) = \frac{n\theta}{n+1}$ and $E(X_{n:n}^2) = \int_0^\theta x^2 f_{n:n}(x)\,dx = \int_0^\theta \frac{n x^{n+1}}{\theta^n}\,dx = \frac{n}{\theta^n}\left[\frac{x^{n+2}}{n+2}\right]_0^\theta = \frac{n}{n+2}\theta^2$. Thus, we have $\lim_{n\to\infty}\left[E(X_{n:n}^2) - 2\theta E(X_{n:n}) + \theta^2\right] = \lim_{n\to\infty}\left[\frac{n}{n+2}\theta^2 - 2\theta\cdot\frac{n\theta}{n+1} + \theta^2\right] = \lim_{n\to\infty}\left[\frac{n}{n+2}\theta^2 - \frac{2n}{n+1}\theta^2 + \theta^2\right] = \theta^2 - 2\theta^2 + \theta^2 = 0$. That this limit is zero verifies that the MLE $\hat{\theta} = X_{n:n}$ is mean square error (MSE) consistent.
b) We first derive the MME for $\theta$. Since $X \sim UNIF(0, \theta)$, we know that $E(X) = \frac{\theta}{2}$, so we can equate $\mu_1' = m_1' \Rightarrow \frac{\theta}{2} = \frac{1}{n}\sum_{i=1}^n x_i = \bar{x}$. This means that $\tilde{\theta} = 2\bar{X}$. Next, we show that this estimator is MSE consistent, which means verifying that $\lim_{n\to\infty} E[(2\bar{X} - \theta)^2] = 0$. But we have $\lim_{n\to\infty} E[(2\bar{X} - \theta)^2] = \lim_{n\to\infty} E[4\bar{X}^2 - 4\theta\bar{X} + \theta^2] = \lim_{n\to\infty}\left[4E(\bar{X}^2) - 4\theta E(\bar{X}) + \theta^2\right]$. We therefore compute $E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}\sum_{i=1}^n \frac{\theta}{2} = \frac{\theta}{2}$ and $E(\bar{X}^2) = Var(\bar{X}) + E(\bar{X})^2 = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) + \left(\frac{\theta}{2}\right)^2 = \frac{1}{n^2}\sum_{i=1}^n \frac{\theta^2}{12} + \frac{\theta^2}{4} = \frac{\theta^2}{12n} + \frac{\theta^2}{4} = \frac{(3n+1)\theta^2}{12n}$. Thus, we can compute that $\lim_{n\to\infty}\left[4E(\bar{X}^2) - 4\theta E(\bar{X}) + \theta^2\right] = \lim_{n\to\infty}\left[\frac{4(3n+1)\theta^2}{12n} - 2\theta^2 + \theta^2\right] = \lim_{n\to\infty}\left[\frac{(3n+1)\theta^2}{3n} - 2\theta^2 + \theta^2\right] = \theta^2 - 2\theta^2 + \theta^2 = 0$. That this limit is zero verifies that the MME $\tilde{\theta} = 2\bar{X}$ is mean square error (MSE) consistent.
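The moments above give exact finite-sample MSEs, $E[(X_{n:n}-\theta)^2] = \frac{2\theta^2}{(n+1)(n+2)}$ (which simplifies from the two moments in part a)) and $E[(2\bar{X}-\theta)^2] = Var(2\bar{X}) = \frac{\theta^2}{3n}$, which a simulation can confirm. A sketch with arbitrary choices $\theta = 3$, $n = 40$:

```python
import random

random.seed(4)
theta, n, reps = 3.0, 40, 30000
mse_mle = mse_mme = 0.0
for _ in range(reps):
    sample = [random.uniform(0.0, theta) for _ in range(n)]
    mse_mle += (max(sample) - theta) ** 2          # MLE = X_{n:n}
    mse_mme += (2 * sum(sample) / n - theta) ** 2  # MME = 2 * xbar
mse_mle /= reps
mse_mme /= reps
exact_mle = 2 * theta ** 2 / ((n + 1) * (n + 2))
exact_mme = theta ** 2 / (3 * n)
```

Both empirical MSEs should match their exact values, and the MLE's MSE (order $1/n^2$) should be visibly smaller than the MME's (order $1/n$).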
Question #29: Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a Bernoulli distribution such that $X \sim BIN(1, p)$. For a Uniform prior density $p \sim UNIF(0, 1)$ and a squared error loss function $L(t; p) = (t - p)^2$, a) find the posterior distribution of the unknown parameter $p$, b) find the Bayes estimator of $p$, and c) find the Bayes risk for the Bayes estimator of $p$ above.
a) We have that the posterior density is given by $f_{p|X}(p) = \frac{f(x_1, \ldots, x_n; p)\,\pi(p)}{\int_0^1 f(x_1, \ldots, x_n; p)\,\pi(p)\,dp}$, where $f(x_1, \ldots, x_n; p) = \prod_{i=1}^n f(x_i; p) = \prod_{i=1}^n p^{x_i}(1-p)^{1-x_i} = p^{\sum_{i=1}^n x_i}(1-p)^{n - \sum_{i=1}^n x_i}$ since the random variables are independent and identically distributed, and $\pi(p) = 1$ since the prior density is uniform. We then express $\int_0^1 p^{\sum_{i=1}^n x_i}(1-p)^{n - \sum_{i=1}^n x_i}\,dp$ in terms of the beta distribution. Recall that if $X \sim BETA(a, b)$, then its density is $f(x; a, b) = \frac{1}{B(a,b)} x^{a-1}(1-x)^{b-1}$ where $B(a, b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$. Next, we must define $a = \sum_{i=1}^n x_i + 1$ and $b = n - \sum_{i=1}^n x_i + 1$, so we can write $\int_0^1 p^{a-1}(1-p)^{b-1}\,dp = B(a, b) = B\left(\sum_{i=1}^n x_i + 1,\; n - \sum_{i=1}^n x_i + 1\right)$. Thus, we have $f_{p|X}(p) = \frac{p^{\sum x_i}(1-p)^{n - \sum x_i}}{B\left(\sum x_i + 1,\; n - \sum x_i + 1\right)}$, which verifies that the posterior distribution is $p\,|\,X \sim BETA\left(\sum_{i=1}^n x_i + 1,\; n - \sum_{i=1}^n x_i + 1\right)$.
b) For some random variable $X \sim BETA(a, b)$, we know that $E(X) = \frac{a}{a+b}$. Moreover, Theorem 9.5.2 states that when we have a squared error loss function, the Bayes estimator is simply the expected value of the posterior distribution. This implies that the Bayes estimator of $p$ is given by $\hat{p}_B = \frac{\sum_{i=1}^n x_i + 1}{\left(\sum_{i=1}^n x_i + 1\right) + \left(n - \sum_{i=1}^n x_i + 1\right)} = \frac{\sum_{i=1}^n x_i + 1}{n + 2}$.
c) The risk function in this case is $R(p) = E[(\hat{p}_B - p)^2]$, where $\hat{p}_B = \frac{\sum_{i=1}^n X_i + 1}{n+2}$ is the Bayes estimator derived above. We would therefore substitute $\hat{p}_B$ for the estimator in the risk function, evaluate the expected value of that expression, and then compute the Bayes risk $\int_0^1 E[(\hat{p}_B - p)^2]\,\pi(p)\,dp$.
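The Beta posterior update and Bayes estimate are exact closed-form quantities, so they can be computed directly. A minimal sketch; the data vector below is a made-up illustration:

```python
def bernoulli_uniform_posterior(data):
    """Beta(s+1, n-s+1) posterior and its mean under a UNIF(0,1) prior."""
    n, s = len(data), sum(data)
    a, b = s + 1, n - s + 1
    bayes_estimate = a / (a + b)  # posterior mean = (s+1)/(n+2)
    return a, b, bayes_estimate

# three successes in five Bernoulli trials
a, b, est = bernoulli_uniform_posterior([1, 0, 1, 1, 0])
# posterior is Beta(4, 3); Bayes estimate is 4/7
```

Note how the estimate $\frac{s+1}{n+2}$ shrinks the raw proportion $\frac{3}{5}$ toward the prior mean $\frac{1}{2}$.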
Question #34: Consider a random sample of size $n$ from a distribution with discrete probability mass function $f(x; p) = p(1-p)^x$ for $x \in \{0, 1, 2, \ldots\}$. a) Find the MLE of the unknown parameter $p$. b) Find the MLE of $\mu = \frac{1-p}{p}$. c) Find the CRLB for variances of all unbiased estimators of the parameter $\mu$ above. d) Is the MLE of $\mu = \frac{1-p}{p}$ a UMVUE? e) Is the MLE of $\mu = \frac{1-p}{p}$ also MSE consistent? f) Compute the asymptotic distribution of the MLE of $\mu = \frac{1-p}{p}$. g) If we have the estimator $T = \frac{\sum_{i=1}^n X_i}{n+1}$, then find the risk functions of both $\hat{\mu}$ and $T$ using the loss function given by $L(t; p) = \frac{(t - \mu)^2}{\mu^2 + \mu}$.
a) We have $L(p) = \prod_{i=1}^n f(x_i; p) = \prod_{i=1}^n p(1-p)^{x_i} = p^n(1-p)^{\sum_{i=1}^n x_i}$, so that $\ln[L(p)] = n\ln(p) + \sum_{i=1}^n x_i \ln(1-p)$. Then we have that $\frac{d}{dp}\ln[L(p)] = \frac{n}{p} - \frac{\sum_{i=1}^n x_i}{1-p}$. Setting this equal to zero gives $n(1-p) = p\sum_{i=1}^n x_i$, and solving for $p$ gives the estimator $\hat{p} = \frac{n}{n + \sum_{i=1}^n x_i} = \frac{1}{1 + \bar{X}}$.
b) By the Invariance Property, we have that the estimator is $\hat{\mu} = \frac{1 - \hat{p}}{\hat{p}} = \bar{X}$.
c) Since $\tau(p) = \mu = \frac{1-p}{p} = \frac{1}{p} - 1$, then $\tau'(p) = -\frac{1}{p^2}$ and $[\tau'(p)]^2 = \frac{1}{p^4}$. Then since $\ln f(x; p) = \ln(p) + x\ln(1-p)$, we have $\frac{\partial^2}{\partial p^2}\ln f(x; p) = -\frac{1}{p^2} - \frac{x}{(1-p)^2}$, and the information is the negative of the expected value of this second derivative, so that $-E\left[\frac{\partial^2}{\partial p^2}\ln f(X; p)\right] = \frac{1}{p^2} + \frac{E(X)}{(1-p)^2} = \frac{1}{p^2} + \frac{1-p}{p(1-p)^2} = \frac{1}{p^2} + \frac{1}{p(1-p)} = \frac{(1-p) + p}{p^2(1-p)} = \frac{1}{p^2(1-p)}$. These results imply that $CRLB = \frac{1/p^4}{n\cdot\frac{1}{p^2(1-p)}} = \frac{1-p}{np^2}$.
d) We first verify that $E(\hat{\mu}) = E(\bar{X}) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}\cdot n\cdot\frac{1-p}{p} = \frac{1-p}{p}$, so that the MLE is an unbiased estimator of $\mu = \frac{1-p}{p}$. Next, we compute $Var(\hat{\mu}) = Var(\bar{X}) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{1}{n^2}\cdot n\cdot\frac{1-p}{p^2} = \frac{1-p}{np^2}$, which equals the CRLB from part c) and verifies that $\hat{\mu} = \bar{X}$ is the UMVUE for the parameter $\mu = \frac{1-p}{p}$.
e) To verify that $\hat{\mu} = \bar{X}$ is MSE consistent, we must show that $\lim_{n\to\infty} E\left[\left(\bar{X} - \frac{1-p}{p}\right)^2\right] = 0$. But we can see that $\lim_{n\to\infty} E\left[\left(\bar{X} - \frac{1-p}{p}\right)^2\right] = \lim_{n\to\infty}\left[E(\bar{X}^2) - \frac{2(1-p)}{p}E(\bar{X}) + \frac{(1-p)^2}{p^2}\right]$, so we must compute the expectation of both the mean and the mean squared. However, we already know that $E(\bar{X}) = \frac{1-p}{p}$ since $\hat{\mu}$ is unbiased. Then $E(\bar{X}^2) = Var(\bar{X}) + E(\bar{X})^2 = \frac{1-p}{np^2} + \frac{(1-p)^2}{p^2}$. Substituting, the limit becomes $\lim_{n\to\infty}\left[\frac{1-p}{np^2} + \frac{(1-p)^2}{p^2} - \frac{2(1-p)^2}{p^2} + \frac{(1-p)^2}{p^2}\right] = \lim_{n\to\infty}\frac{1-p}{np^2} = 0$, so the MLE is MSE consistent.
f) We use Definition 9.4.5, which states that for large values of $n$, the MLE estimator is distributed normal with mean $\mu = \frac{1-p}{p}$ and variance $CRLB$. Since we previously found that $CRLB = \frac{1-p}{np^2}$, we can conclude that $\hat{\mu} \sim N\left(\frac{1-p}{p}, \frac{1-p}{np^2}\right)$ for large $n$.
g) Definition 9.5.2 states that the risk function is the expected loss $R_T(p) = E[L(T; p)]$. In this case, the loss function is $L(t; p) = \frac{(t-\mu)^2}{\mu^2 + \mu}$, where $\mu^2 + \mu = \mu(\mu + 1) = \frac{1-p}{p}\cdot\frac{1}{p} = \frac{1-p}{p^2}$. Therefore, for the estimator $\hat{\mu} = \bar{X}$ we compute $R_{\hat{\mu}}(p) = \frac{E[(\bar{X} - \mu)^2]}{\mu^2 + \mu} = \frac{Var(\bar{X})}{\mu^2 + \mu} = \frac{(1-p)/(np^2)}{(1-p)/p^2} = \frac{1}{n}$. Similarly, for the estimator $T = \frac{\sum_{i=1}^n X_i}{n+1} = \frac{n}{n+1}\bar{X}$ we can compute $E[(T - \mu)^2] = Var(T) + [E(T) - \mu]^2 = \frac{n^2}{(n+1)^2}\cdot\frac{1-p}{np^2} + \left(\frac{n\mu}{n+1} - \mu\right)^2 = \frac{n(1-p)}{(n+1)^2 p^2} + \frac{\mu^2}{(n+1)^2}$, so that $R_T(p) = \frac{n(1-p)/p^2 + (1-p)^2/p^2}{(n+1)^2(1-p)/p^2} = \frac{n + 1 - p}{(n+1)^2}$.
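The constant risk $R_{\hat{\mu}}(p) = 1/n$ derived in part g) can be checked by simulation. A sketch with arbitrary choices $p = 0.4$ and $n = 30$:

```python
import random

random.seed(5)
p, n, reps = 0.4, 30, 20000
mu = (1 - p) / p       # = 1.5
scale = mu ** 2 + mu   # = 3.75, the loss denominator

def geom(p):
    """Failures before the first success: pmf p*(1-p)^x, x = 0, 1, 2, ..."""
    x = 0
    while random.random() >= p:
        x += 1
    return x

# average loss of mu_hat = xbar over many replications
risk = sum((sum(geom(p) for _ in range(n)) / n - mu) ** 2
           for _ in range(reps)) / reps / scale
```

The empirical risk should be close to $1/30 \approx 0.0333$, independent of the particular $p$ chosen.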
Question #36: Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a Normal distribution such that each $X_i \sim N(0, \theta)$. Find the asymptotic distribution of the MLE of the parameter $\theta$.
From the previous assignment, we know that $\hat{\theta} = \frac{1}{n}\sum_{i=1}^n X_i^2$. We then use Definition 9.4.5, which states that for large values of $n$, the MLE estimator is distributed normal with mean $\theta$ and variance $CRLB$. That is, we have that $\hat{\theta} \sim N(\theta, CRLB)$. This means that we must compute the Cramer-Rao Lower Bound. Since $\tau(\theta) = \theta$, then $\tau'(\theta) = 1$ and $[\tau'(\theta)]^2 = 1$. Next, since we previously found that $f(x; \theta) = \frac{1}{\sqrt{2\pi\theta}} e^{-\frac{x^2}{2\theta}}$, then we have $\ln f(x; \theta) = -\frac{1}{2}\ln(2\pi\theta) - \frac{x^2}{2\theta}$ so that $\frac{\partial}{\partial\theta}\ln f(x; \theta) = -\frac{1}{2\theta} + \frac{x^2}{2\theta^2}$. We then find $\frac{\partial^2}{\partial\theta^2}\ln f(x; \theta) = \frac{1}{2\theta^2} - \frac{x^2}{\theta^3}$ and take the negative of its expected value to obtain $-E\left[\frac{1}{2\theta^2} - \frac{X^2}{\theta^3}\right] = -\frac{1}{2\theta^2} + \frac{1}{\theta^3}(\theta + 0^2) = -\frac{1}{2\theta^2} + \frac{1}{\theta^2} = \frac{1}{2\theta^2}$. This implies that $CRLB = \frac{1}{n/(2\theta^2)} = \frac{2\theta^2}{n}$. Combining these facts reveals that the asymptotic distribution of the MLE is $\hat{\theta} \sim N\left(\theta, \frac{2\theta^2}{n}\right)$. We can transform this to get a standard normal distribution by noting that the random variable $\frac{\hat{\theta} - \theta}{\sqrt{2\theta^2/n}} \sim N(0, 1)$ for large values of $n$. We could further reduce this by multiplying through by the constant $\theta$ so that $\frac{\hat{\theta} - \theta}{\sqrt{2/n}} \sim N(0, \theta^2)$.
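The asymptotic variance $2\theta^2/n$ can be checked by replicating the MLE many times. A sketch with arbitrary choices $\theta = 2$ and $n = 100$:

```python
import random

random.seed(6)
theta, n, reps = 2.0, 100, 20000
ests = []
for _ in range(reps):
    sample = [random.gauss(0.0, theta ** 0.5) for _ in range(n)]
    ests.append(sum(x * x for x in sample) / n)  # MLE per replication
mean_est = sum(ests) / reps
var_est = sum((e - mean_est) ** 2 for e in ests) / reps
asymptotic_var = 2 * theta ** 2 / n  # = 0.08
```

The empirical mean and variance of the replicated MLEs should match $\theta$ and $2\theta^2/n$ closely at this sample size.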
Chapter #10 Sufficiency and Completeness
For a random sample $X_1, \ldots, X_n$ with each $X_i \sim BIN(m, p)$ and $m$ known, the joint probability mass function is $f(x_1, \ldots, x_n; m, p) = \prod_{i=1}^n f(x_i; m, p) = \prod_{i=1}^n \binom{m}{x_i} p^{x_i}(1-p)^{m - x_i}\,1\{x_i = 0, \ldots, m\} = \left[\prod_{i=1}^n \binom{m}{x_i}\right] p^{\sum_{i=1}^n x_i}(1-p)^{nm - \sum_{i=1}^n x_i} \prod_{i=1}^n 1\{x_i = 0, \ldots, m\}$. If we let $h(x_1, \ldots, x_n) = \left[\prod_{i=1}^n \binom{m}{x_i}\right]\prod_{i=1}^n 1\{x_i = 0, \ldots, m\}$ and define $s = \sum_{i=1}^n x_i$, then we have that $f(x_1, \ldots, x_n; m, p) = p^s(1-p)^{nm - s}\,h(x_1, \ldots, x_n) = g(s; p)\,h(x_1, \ldots, x_n)$. Since $g(s; p) = p^s(1-p)^{nm - s}$ does not depend on $x_1, \ldots, x_n$ except through $s = \sum_{i=1}^n x_i$, the statistic $S = \sum_{i=1}^n X_i$ is sufficient for $p$ by the Factorization Criterion.
Question #7: Let $X_1, \ldots, X_n$ be independent and each $X_i \sim NB(r_i, p)$. This means that each $X_i$ counts the number of trials needed to obtain $r_i$ successes. Find a sufficient statistic for the unknown parameter $p$ using the Factorization Criterion.
The joint probability mass function is $\prod_{i=1}^n \binom{x_i - 1}{r_i - 1} p^{r_i}(1-p)^{x_i - r_i}\,1\{x_i = r_i, r_i + 1, \ldots\}$. After applying the product operator, this becomes $\left[\prod_{i=1}^n \binom{x_i - 1}{r_i - 1}\right] p^{\sum_{i=1}^n r_i}(1-p)^{\sum_{i=1}^n x_i - \sum_{i=1}^n r_i}\prod_{i=1}^n 1\{x_i = r_i, r_i + 1, \ldots\}$. Then if we define $h(x_1, \ldots, x_n) = \left[\prod_{i=1}^n \binom{x_i - 1}{r_i - 1}\right]\prod_{i=1}^n 1\{x_i = r_i, r_i + 1, \ldots\}$ and let $s = \sum_{i=1}^n x_i$, we have that the joint mass function is $f(x_1, \ldots, x_n; p) = p^{\sum r_i}(1-p)^{s - \sum r_i}\,h(x_1, \ldots, x_n) = g(s; p)\,h(x_1, \ldots, x_n)$. Since the $r_i$ are known and $g(s; p) = p^{\sum r_i}(1-p)^{s - \sum r_i}$ does not depend on $x_1, \ldots, x_n$ except through $s$, the statistic $S = \sum_{i=1}^n X_i$ is sufficient for $p$ by the Factorization Criterion.
Question #16: Let $X_1, \ldots, X_n$ be independent and each $X_i \sim NB(r_i, p)$, as in the previous question. Find the Maximum Likelihood Estimator (MLE) of $p$, and note that it is a function of the sufficient statistic found above.
The log-likelihood is $\ln[L(p)] = \sum_{i=1}^n \ln\binom{x_i - 1}{r_i - 1} + \sum_{i=1}^n r_i \ln(p) + \left(\sum_{i=1}^n x_i - \sum_{i=1}^n r_i\right)\ln(1-p)$. Then differentiating the log likelihood function and equating to zero implies that $\frac{d}{dp}\ln[L(p)] = \frac{\sum_{i=1}^n r_i}{p} - \frac{\sum_{i=1}^n x_i - \sum_{i=1}^n r_i}{1-p} = 0 \Rightarrow (1-p)\sum_{i=1}^n r_i - p\sum_{i=1}^n x_i + p\sum_{i=1}^n r_i = 0$. Then we have $\sum_{i=1}^n r_i - p\sum_{i=1}^n r_i - p\sum_{i=1}^n x_i + p\sum_{i=1}^n r_i = 0 \Rightarrow \sum_{i=1}^n r_i - p\sum_{i=1}^n x_i = 0$. This implies that the Maximum Likelihood Estimator of $p$ is $\hat{p} = \frac{\sum_{i=1}^n r_i}{\sum_{i=1}^n X_i}$.
Since the random variables are iid, their joint probability density function is thus given by $f(x_1, \ldots, x_n; \theta, \eta) = \prod_{i=1}^n f(x_i; \theta, \eta) = \prod_{i=1}^n \frac{\theta\eta^\theta}{x_i^{\theta+1}}\,1\{x_i > \eta\} = \theta^n \eta^{n\theta}\left(\prod_{i=1}^n x_i\right)^{-(\theta+1)}\left[\prod_{i=1}^n 1\{x_i > \eta\}\right] = \theta^n \eta^{n\theta}\left(\prod_{i=1}^n x_i\right)^{-(\theta+1)} 1\{x_{1:n} > \eta\}$. This then shows that $S_1 = \prod_{i=1}^n X_i$ and $S_2 = X_{1:n}$ are jointly sufficient for $\theta$ and $\eta$ by the Factorization Criterion, with $h(x_1, \ldots, x_n) = 1$ being independent of the unknown parameters $\theta$ and $\eta$, and $g(s_1, s_2; \theta, \eta) = \theta^n \eta^{n\theta} s_1^{-(\theta+1)}\,1\{s_2 > \eta\}$ depending on $x_1, \ldots, x_n$ only through $s_1$ and $s_2$.
Since the random variables are iid, their joint density is given by $f(x_1, \ldots, x_n; \theta_1, \theta_2) = \prod_{i=1}^n f(x_i; \theta_1, \theta_2) = \prod_{i=1}^n \frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)} x_i^{\theta_1 - 1}(1 - x_i)^{\theta_2 - 1}\,1\{0 < x_i < 1\} = \left[\frac{\Gamma(\theta_1 + \theta_2)}{\Gamma(\theta_1)\Gamma(\theta_2)}\right]^n \left[\prod_{i=1}^n x_i\right]^{\theta_1 - 1}\left[\prod_{i=1}^n (1 - x_i)\right]^{\theta_2 - 1}\prod_{i=1}^n 1\{0 < x_i < 1\}$. This then shows that $S_1 = \prod_{i=1}^n X_i$ and $S_2 = \prod_{i=1}^n (1 - X_i)$ are jointly sufficient for $\theta_1$ and $\theta_2$ by the Factorization Criterion.
Question #18: Let $X \sim N(0, \theta)$ for $\theta > 0$. a) Show that $X^2$ is complete and sufficient for the unknown parameter $\theta$, and b) show that $N(0, \theta)$ is not a complete family.
a) Since $X \sim N(0, \theta)$, we know that $f(x; \theta) = \frac{1}{\sqrt{2\pi\theta}} e^{-\frac{x^2}{2\theta}}$ for $x \in \mathbb{R}$. Therefore, the density is a member of the Regular Exponential Class with $t(x) = x^2$, so $X^2$ is complete and sufficient for $\theta$.
b) Since $X \sim N(0, \theta)$, we know that $E(X) = 0$ for all $\theta > 0$. Therefore, completeness fails because $u(X) = X$ is a nontrivial (not identically zero) unbiased estimator of $g(\theta) = 0$.
Question #21: If $X_1, \ldots, X_n$ is a random sample from a Bernoulli distribution such that each $X_i \sim BERN(p) \equiv BIN(1, p)$ where $p$ is the unknown parameter to be estimated, find the UMVUE for a) $\tau(p) = Var(X) = p(1-p)$, and b) $\tau(p) = p^2$.
a) We first verify that the Bernoulli distribution is a member of the Regular Exponential Class (REC) by noting that its density can be written as $f(x; p) = p^x(1-p)^{1-x} = \left(\frac{p}{1-p}\right)^x (1-p) = \exp\left\{\ln\left[\left(\frac{p}{1-p}\right)^x (1-p)\right]\right\}$. This equality implies $f(x; p) = \exp\left\{x\ln\left(\frac{p}{1-p}\right) + \ln(1-p)\right\} = \exp\left\{x\ln\left(\frac{p}{1-p}\right)\right\}\exp\{\ln(1-p)\} = (1-p)\exp\left\{x\ln\left(\frac{p}{1-p}\right)\right\} = c(p)\exp\{q_1(p)t_1(x)\}$, so the Bernoulli distribution is a member of the REC by Definition 10.4.2 with $t_1(x) = x$. We then use Theorem 10.4.2, which guarantees the existence of complete sufficient statistics for distributions from the REC, to construct the sufficient statistic $S_1 = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n X_i$. Next, we appeal to the Rao-Blackwell Theorem in justifying the use of $S_1$ (or any one-to-one function of it) in our search for a UMVUE for $\tau(p) = p(1-p)$. Our initial guess for an estimator is $T = \bar{X}(1 - \bar{X})$, so we first compute that $E(T) = E[\bar{X}(1 - \bar{X})] = E(\bar{X}) - E(\bar{X}^2) = E(\bar{X}) - \left[Var(\bar{X}) + E(\bar{X})^2\right] = p - \frac{p(1-p)}{n} - p^2$. Thus, we have calculated that $E(T) = p(1-p) - \frac{p(1-p)}{n} = p(1-p)\left(1 - \frac{1}{n}\right) = p(1-p)\frac{n-1}{n}$, which implies that $T^* = \frac{n}{n-1}\bar{X}(1 - \bar{X})$ is unbiased for $p(1-p)$; since it is a function of the complete sufficient statistic $S_1$, the Lehmann-Scheffe Theorem guarantees that it is the UMVUE.
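Unbiasedness of $T^* = \frac{n}{n-1}\bar{X}(1-\bar{X})$, including the $\frac{n}{n-1}$ correction, can be checked numerically. A sketch with arbitrary choices $p = 0.3$ and $n = 10$:

```python
import random

random.seed(7)
p, n, reps = 0.3, 10, 200000
total = 0.0
for _ in range(reps):
    s = sum(1 for _ in range(n) if random.random() < p)
    xbar = s / n
    total += n / (n - 1) * xbar * (1 - xbar)  # the UMVUE T*
avg = total / reps  # should approach p*(1-p) = 0.21
```

Dropping the $\frac{n}{n-1}$ factor would leave a visible downward bias of $p(1-p)/n = 0.021$ here, which the averaged simulation output easily resolves.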
Question #23: If $X_1, \ldots, X_n$ is a random sample from a Normal distribution such that each $X_i \sim N(\mu, 9)$ where $\mu$ is unknown, find the UMVUE for a) the 95th percentile, and b) $P(X \leq c)$, where $c$ is a known constant. Hint: find the conditional distribution of $X_1$ given $\bar{X} = \bar{x}$, and apply the Rao-Blackwell Theorem with $T = E[S\,|\,\bar{X}]$, where we define $S = u(X_1) = 1\{X_1 \leq c\}$.
a) The 95th percentile of $X \sim N(\mu, 9)$ is $x_{0.95} = \mu + 3z_{0.95}$, where $z_{0.95}$ is the 95th percentile of the standard normal distribution. This is what we wish to find a UMVUE for, but since the expectation of a constant is that constant itself, we simply need to find a UMVUE for $\mu$. We begin by verifying that the Normal distribution is a member of the Regular Exponential Class (REC) by noting that the density of $X \sim N(\mu, 9)$ can be written as $f(x; \mu) = \frac{1}{\sqrt{18\pi}}\exp\left\{-\frac{1}{18}(x - \mu)^2\right\} = \frac{1}{\sqrt{18\pi}}\exp\left\{-\frac{x^2}{18} + \frac{\mu x}{9} - \frac{\mu^2}{18}\right\}$, which has the REC form with $t_1(x) = x$. Thus $S = \sum_{i=1}^n X_i$ is complete and sufficient, and since $E(\bar{X}) = \mu$, the statistic $\bar{X} + 3z_{0.95}$ is the UMVUE for the 95th percentile by the Lehmann-Scheffe Theorem.
b) Using the hint, the conditional distribution of $X_1$ given $\bar{X} = \bar{x}$ is normal with mean $\bar{x}$ and variance $9\frac{n-1}{n}$, so $T = E[1\{X_1 \leq c\}\,|\,\bar{X}] = P(X_1 \leq c\,|\,\bar{X}) = \Phi\left(\frac{c - \bar{X}}{3\sqrt{(n-1)/n}}\right)$, which is a UMVUE for $\tau(\mu) = P(X \leq c) = E(T)$ by the Rao-Blackwell and Lehmann-Scheffe Theorems.
a) We first verify that the density is a member of the REC by noting that it can be written as $f(x; \theta) = \theta x^{\theta - 1} = \exp\{\ln[\theta x^{\theta-1}]\} = \exp\{\ln(\theta) + (\theta - 1)\ln(x)\} = \exp\{\ln(\theta)\}\exp\{(\theta - 1)\ln(x)\} = \theta\exp\{(\theta - 1)\ln(x)\}$, where $t_1(x) = \ln(x)$. We then use Theorem 10.4.2, which guarantees the existence of complete sufficient statistics for REC distributions, to construct the sufficient statistic $S_1 = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n \ln(X_i)$. Next, we appeal to the Rao-Blackwell Theorem in justifying the use of $S_1$ (or any one-to-one function of it) in our search for a UMVUE for $\frac{1}{\theta}$. From the hint provided, we initially guess that $T = -\frac{1}{n}\sum_{i=1}^n \ln(X_i)$ and check that $E(T) = E\left[-\frac{1}{n}\sum_{i=1}^n \ln(X_i)\right] = -\frac{1}{n}\sum_{i=1}^n E[\ln(X_i)] = -\frac{1}{n}\cdot n\cdot\left(-\frac{1}{\theta}\right) = \frac{1}{\theta}$. The Lehmann-Scheffe Theorem finally guarantees that $T = -\frac{1}{n}\sum_{i=1}^n \ln(X_i)$ is a UMVUE for $\frac{1}{\theta}$, since it states that any unbiased estimator which is a function of a complete sufficient statistic is the unique UMVUE.
b) Any UMVUE of the unknown parameter $\theta$ must be a function of the complete and sufficient statistic $S_1 = \sum_{i=1}^n \ln(X_i)$ by the Lehmann-Scheffe Theorem. We begin by noting that $E(S_1) = E\left[\sum_{i=1}^n \ln(X_i)\right] = \sum_{i=1}^n E[\ln(X_i)] = -\frac{n}{\theta}$, so we would like to be able to compute $E\left(\frac{1}{-S_1}\right)$. We do this by finding the distribution of $Y = -\ln(X)$ using the CDF technique, which shows that $Y \sim EXP\left(\frac{1}{\theta}\right)$ with density $f(y; \theta) = \theta e^{-\theta y}\,1\{y > 0\}$. This is equivalent to $Y \sim GAM\left(\frac{1}{\theta}, 1\right)$, so by the Moment Generating Function technique, we see that $-S_1 = -\sum_{i=1}^n \ln(X_i) \sim GAM\left(\frac{1}{\theta}, n\right)$. We can thus calculate $E\left(\frac{1}{-S_1}\right) = \int_0^\infty \frac{1}{y}\cdot\frac{\theta^n}{\Gamma(n)} y^{n-1} e^{-\theta y}\,dy = \frac{\theta\,\Gamma(n-1)}{\Gamma(n)} = \frac{\theta}{n-1}$, which implies that $T = \frac{n-1}{-S_1} = \frac{n-1}{-\sum_{i=1}^n \ln(X_i)}$ is an unbiased estimator of $\theta$. Then the Lehmann-Scheffe Theorem guarantees that it is the UMVUE for $\theta$.
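Since $F(x; \theta) = x^\theta$ on $(0, 1)$, samples from this density can be drawn by the inverse-CDF method as $X = U^{1/\theta}$, which gives a quick numerical check of the UMVUE $(n-1)/(-\sum \ln X_i)$. The values $\theta = 2$ and $n = 10$ are arbitrary choices:

```python
import math
import random

random.seed(8)
theta, n, reps = 2.0, 10, 100000
total = 0.0
for _ in range(reps):
    # S1 = sum of log(X_i) with X_i = U^(1/theta)
    s = sum(math.log(random.random() ** (1 / theta)) for _ in range(n))
    total += (n - 1) / (-s)  # the UMVUE for theta
avg = total / reps  # should approach theta = 2
```

Using $n$ instead of $n-1$ in the numerator would overshoot by a factor of $n/(n-1)$, i.e. about 0.22 here, so the correction is easy to see in the averaged output.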
and variance of the asymptotic normal distribution of the MLE, and f) the UMVUE for $\theta$.
c) Since $\tau(\theta) = \frac{1}{\theta}$, we have $[\tau'(\theta)]^2 = \left[-\frac{1}{\theta^2}\right]^2 = \frac{1}{\theta^4}$. Then we have that $f(x; \theta) = \theta(1 + x)^{-(1+\theta)}$, so its log is $\ln f(x; \theta) = \ln(\theta) - (1 + \theta)\ln(1 + x)$ and $\frac{\partial}{\partial\theta}\ln f(x; \theta) = \frac{1}{\theta} - \ln(1 + x)$. Finally, $\frac{\partial^2}{\partial\theta^2}\ln f(x; \theta) = -\frac{1}{\theta^2}$ so that $-E\left[\frac{\partial^2}{\partial\theta^2}\ln f(X; \theta)\right] = \frac{1}{\theta^2}$. These results combined allow us to conclude that $CRLB = \frac{1/\theta^4}{n(1/\theta^2)} = \frac{1}{n\theta^2}$.
d) We previously verified that this density is a member of the REC and that the statistic $S_2 = \sum_{i=1}^n \ln(1 + X_i)$ is complete and sufficient for $\theta$. Next, we use the Rao-Blackwell Theorem in justifying the use of $S_2$ (or any one-to-one function of it) in our search for a UMVUE for $\frac{1}{\theta}$. In order to compute $E(S_2)$, we need to find the distribution of the random variable $Y = \ln(1 + X)$, which we do using the CDF technique. We thus have that $F_Y(y) = P(Y \leq y) = P(\ln(1 + X) \leq y) = P(X \leq e^y - 1) = F_X(e^y - 1)$, so that then $f_Y(y) = \frac{d}{dy}F_X(e^y - 1) = f_X(e^y - 1)\,e^y = \theta(1 + e^y - 1)^{-(1+\theta)}e^y = \theta e^{-\theta y}$, which shows that $Y \sim EXP\left(\frac{1}{\theta}\right)$. Therefore $E(S_2) = \frac{n}{\theta}$, so $T = \frac{1}{n}\sum_{i=1}^n \ln(1 + X_i)$ is unbiased for $\frac{1}{\theta}$ and, being a function of $S_2$, is the UMVUE for $\frac{1}{\theta}$.
e) We previously found that the MLE for $\theta$ is $\hat{\theta} = \frac{n}{\sum_{i=1}^n \ln(1 + X_i)}$. From Chapter 9, we know that the MLE for some unknown parameter $\theta$ has an asymptotic normal distribution with $\mu = \theta$ and $\sigma^2 = CRLB$; that is, $\hat{\theta} \sim N(\theta, CRLB)$ for large $n$. We must therefore find the Cramer-Rao Lower Bound, which can be easily done from the work in part c) above with $\tau(\theta) = \theta$, so that $CRLB = \frac{\theta^2}{n}$. This means that we have $\hat{\theta} \sim N\left(\theta, \frac{\theta^2}{n}\right)$ for large $n$. We can similarly argue for the MLE of $\tau(\theta) = \frac{1}{\theta}$, where we see that $\widehat{1/\theta} = \frac{1}{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^n \ln(1 + X_i)$ by the Invariance Property of the Maximum Likelihood Estimator. Then using the work done in part c) above for the Cramer-Rao Lower Bound, we can conclude that $\widehat{1/\theta} \sim N\left(\frac{1}{\theta}, \frac{1}{n\theta^2}\right)$ for large $n$.
f) We previously verified that this density is a member of the REC and that the statistic $S_2 = \sum_{i=1}^n \ln(1 + X_i)$ is complete and sufficient for $\theta$, where $E(S_2) = \frac{n}{\theta}$. As in the previous question, $S_2 \sim GAM\left(\frac{1}{\theta}, n\right)$, so we have that $E\left(\frac{n-1}{S_2}\right) = (n-1)\cdot\frac{\theta}{n-1} = \theta$, which implies that $T = \frac{n-1}{S_2} = \frac{n-1}{\sum_{i=1}^n \ln(1 + X_i)}$ is unbiased and a UMVUE for the unknown parameter $\theta$.
Chapter #11 Interval Estimation
a) We first find the distribution of the smallest order statistic $X_{1:n}$ using the formula $f_{1:n}(x; \theta) = n f(x; \theta)[1 - F(x; \theta)]^{n-1}$. We thus need the CDF of the population, which is given by $F(x; \theta) = \int f(t; \theta)\,dt$.
The next question asks: b) derive an equal-tailed $100\gamma\%$ confidence interval for $\theta$, c) find a lower $100\gamma\%$ confidence limit for $P(X > t) = e^{-t^2/\theta^2}$, and d) find an upper $100\gamma\%$ confidence limit for the $p$th percentile.
a) Since $f(x; \theta) = \frac{2x}{\theta^2} e^{-x^2/\theta^2}\,1\{x > 0\}$, we know that $X^2 \sim EXP(\theta^2)$, which can be shown with the CDF technique.
b) We find the confidence interval from the pivotal quantity $Q = \frac{2\sum_{i=1}^n X_i^2}{\theta^2} \sim \chi^2(2n)$, so that $\gamma = P\left(\chi^2_{(1-\gamma)/2}(2n) < \frac{2\sum_{i=1}^n X_i^2}{\theta^2} < \chi^2_{(1+\gamma)/2}(2n)\right) = P\left(\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1+\gamma)/2}(2n)} < \theta^2 < \frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1-\gamma)/2}(2n)}\right)$. Taking square roots gives the desired random interval $\left(\sqrt{\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1+\gamma)/2}(2n)}},\; \sqrt{\frac{2\sum_{i=1}^n X_i^2}{\chi^2_{(1-\gamma)/2}(2n)}}\right)$.
c) From the work done above, a lower confidence limit for $\theta$ is $\theta_L = \sqrt{\frac{2\sum_{i=1}^n x_i^2}{\chi^2_{(1+\gamma)/2}(2n)}}$. Since the quantity $\tau(\theta) = e^{-t^2/\theta^2}$ is a monotonically increasing function of $\theta$, we can simply substitute $\theta_L$ for $\theta$ into the expression $\tau(\theta) = e^{-t^2/\theta^2}$ by Corollary 11.3.1 to obtain a lower confidence limit for $P(X > t)$.
d) We must solve the equation $P(X > x_p) = 1 - p$ for $x_p$. From the question above, we are given that $P(X > t) = e^{-t^2/\theta^2}$, so we must solve $e^{-x_p^2/\theta^2} = 1 - p$ for $x_p$, which gives $x_p = \theta\sqrt{-\ln(1 - p)}$. By the same reasoning as above, since $x_p$ is increasing in $\theta$, we substitute the upper limit $\sqrt{\frac{2\sum_{i=1}^n x_i^2}{\chi^2_{(1-\gamma)/2}(2n)}}$ in for $\theta$ into the expression $x_p = \theta\sqrt{-\ln(1 - p)}$ to obtain the upper confidence limit $x_p^U = \sqrt{\frac{-2\ln(1-p)\sum_{i=1}^n x_i^2}{\chi^2_{(1-\gamma)/2}(2n)}}$.
Question #8: If $X_1, \ldots, X_n$ is a random sample from $X \sim UNIF(0, \theta)$ with $\theta > 0$ unknown and $X_{n:n}$ is the largest order statistic, then a) find the probability that the random interval given by $(X_{n:n}, 2X_{n:n})$ contains $\theta$, and b) find the value of the constant $c$ such that the random interval $(X_{n:n}, cX_{n:n})$ is a $100(1 - \alpha)\%$ confidence interval for the parameter $\theta$.
Question #13: Let $X_1, \ldots, X_n$ be a random sample from $X \sim GAM(\theta, \kappa)$ such that their common distribution is $f(x; \theta, \kappa) = \frac{1}{\theta^\kappa \Gamma(\kappa)} x^{\kappa - 1} e^{-x/\theta}\,1\{x > 0\}$ with the parameter $\kappa$ known but $\theta$ unknown. Derive a $100(1 - \alpha)\%$ equal-tailed confidence interval for $\theta$ based on the sufficient statistic for the unknown parameter $\theta$.
We begin by noting that the given density is a member of the Regular Exponential Class (REC) since $f(x; \theta, \kappa) = \frac{1}{\theta^\kappa \Gamma(\kappa)} x^{\kappa - 1} e^{-x/\theta} = c(\theta)h(x)\exp\{q_1(\theta)t_1(x)\}$ where $t_1(x) = x$. Then we know that $S = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n X_i$ is complete and sufficient for the unknown parameter $\theta$. Next, we need to create a pivotal quantity from $S$; from the distribution in question 7, which is similar, we guess that $Q = \frac{2S}{\theta} = \frac{2\sum_{i=1}^n X_i}{\theta}$. Since $S \sim GAM(\theta, n\kappa)$, the CDF technique shows that $Q$ has density $f_Q(q) = \frac{1}{2^{n\kappa}\Gamma(n\kappa)} q^{n\kappa - 1} e^{-q/2}$, so the transformed random variable $Q = \frac{2\sum_{i=1}^n X_i}{\theta} \sim GAM(2, n\kappa) \equiv \chi^2(2n\kappa)$. This allows us to compute $P\left[\chi^2_{\alpha/2}(2n\kappa) < Q < \chi^2_{1-\alpha/2}(2n\kappa)\right] = 1 - \alpha$, so after substituting in for $Q$ and solving for $\theta$, we have $P\left[\frac{2\sum_{i=1}^n X_i}{\chi^2_{1-\alpha/2}(2n\kappa)} < \theta < \frac{2\sum_{i=1}^n X_i}{\chi^2_{\alpha/2}(2n\kappa)}\right] = 1 - \alpha$, which is the desired equal-tailed confidence interval.
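The distributional claim behind the interval, $2\sum X_i/\theta \sim \chi^2(2n\kappa)$, can be checked against the chi-square moments (mean $2n\kappa$, variance $4n\kappa$). For integer $\kappa$, a $GAM(\theta, \kappa)$ draw is a sum of $\kappa$ exponentials; the values $\theta = 1.5$, $\kappa = 2$, $n = 5$ are arbitrary choices:

```python
import math
import random

random.seed(9)
theta, kappa, n, reps = 1.5, 2, 5, 50000

def gamma_draw(theta, kappa):
    """GAM(theta, kappa) for integer kappa: sum of kappa EXP(theta) draws."""
    return sum(-theta * math.log(1 - random.random()) for _ in range(kappa))

qs = [2 * sum(gamma_draw(theta, kappa) for _ in range(n)) / theta
      for _ in range(reps)]
mean_q = sum(qs) / reps
var_q = sum((q - mean_q) ** 2 for q in qs) / reps
df = 2 * n * kappa  # = 20; chi-square has mean df and variance 2*df
```

The empirical mean and variance of the pivot should match $df = 20$ and $2\,df = 40$, supporting its use as a pivotal quantity.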