Chapter 4
Some Basic Concepts on Probability
4.1 Introduction
The term random experiment is used to denote a
process in which the outcome cannot be
determined in advance. When the outcome of a
random experiment is or can be identified with a
number, this numerical outcome is referred to as
a random variable. A random variable will be
denoted by a capital letter, such as X, and possible
values of X are denoted by the corresponding
lower case letter, such as x.
f_X(x) = P(X = x)
(b*) Σ_x f_X(x) = 1
4.4 Expectation
Expected Value
The expected value of a function g(X) of a
random variable X with possible values
x_1, …, x_n, … and probability function f_X is
defined to be
E(g(X)) = g(x_1) f_X(x_1) + ⋯ + g(x_n) f_X(x_n) + ⋯
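The defining sum can be computed mechanically. The sketch below uses a hypothetical three-point distribution (not one from the notes) to evaluate E(X) and Var(X) = E(X²) - E(X)² directly from the definition:

```python
# Expected value of g(X) for a discrete random variable, straight from the
# definition E(g(X)) = sum over x of g(x) * f_X(x).
# The distribution below is a hypothetical example for illustration.
values = [1, 2, 3]
probs = [0.2, 0.5, 0.3]

def expectation(g, xs, fs):
    """Return E(g(X)) for a discrete X with values xs and probabilities fs."""
    return sum(g(x) * f for x, f in zip(xs, fs))

mean = expectation(lambda x: x, values, probs)                # E(X) = 2.1
second_moment = expectation(lambda x: x ** 2, values, probs)  # E(X^2) = 4.9
variance = second_moment - mean ** 2                          # Var(X) = 0.49
```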
For a continuous random variable X with density
function f, the corresponding formulas are
E(X) = ∫ x f(x) dx and
Var(X) = ∫ (x - E(X))² f(x) dx.
Example 4.5.1
Let X have density function given by
f(x) = λ e^(-λx), x > 0, λ > 0.
It is easy to verify that f is a genuine density
function. Moreover,
E(X) = 1/λ and Var(X) = 1/λ².
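The claims that f integrates to 1 and has mean 1/λ and variance 1/λ² can be checked by crude numerical integration. A sketch, assuming λ = 2 and a midpoint rule on [0, 40] (the tail beyond 40 is negligible for this λ):

```python
import math

# Exponential density f(x) = lam * exp(-lam * x); lam = 2 is an assumed value.
lam = 2.0

def f(x):
    return lam * math.exp(-lam * x)

def integrate(h, a=0.0, b=40.0, n=200000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

total = integrate(f)                               # close to 1
mean = integrate(lambda x: x * f(x))               # close to 1/lam = 0.5
var = integrate(lambda x: (x - mean) ** 2 * f(x))  # close to 1/lam**2 = 0.25
```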
Other random variables with continuous
distributions that we encountered in this course
together with their means and variances are
listed below:
Uniform random variable defined on interval [a, b]:
f(x) = 1/(b - a), a ≤ x ≤ b;
E(X) = (a + b)/2, Var(X) = (b - a)²/12.
Gamma random variable with parameters α and β:
f(x) = (β^α / Γ(α)) x^(α-1) e^(-βx), x > 0; α > 0, β > 0;
E(X) = α/β, Var(X) = α/β².
Leong YK & Wong WY Introduction to Statistical Decisions 9
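The tabulated means and variances can be verified the same way. A sketch with assumed parameter values (a = 1, b = 3 for the uniform; α = 3, β = 2 for the gamma):

```python
import math

# Uniform on [a, b]: closed-form mean and variance from the table above.
a, b = 1.0, 3.0
u_mean = (a + b) / 2       # (a+b)/2 = 2.0
u_var = (b - a) ** 2 / 12  # (b-a)^2/12 = 1/3

# Gamma(alpha, beta) density, using math.gamma for the Gamma function.
alpha, beta = 3.0, 2.0

def g(x):
    return beta ** alpha / math.gamma(alpha) * x ** (alpha - 1) * math.exp(-beta * x)

def integrate(h, lo=0.0, hi=30.0, n=150000):
    """Midpoint-rule approximation of the integral of h over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dx) for i in range(n)) * dx

g_mean = integrate(lambda x: x * g(x))                 # alpha/beta = 1.5
g_var = integrate(lambda x: (x - g_mean) ** 2 * g(x))  # alpha/beta**2 = 0.75
```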
4.6 Median
Median
Let X be a random variable. A number m is said
to be a median of the distribution of X if
P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2.
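The definition can be applied mechanically to a discrete distribution. The sketch below (with a hypothetical four-point distribution) lists every value that qualifies as a median:

```python
# A value m is a median if P(X <= m) >= 1/2 and P(X >= m) >= 1/2.
# Hypothetical discrete distribution for illustration.
values = [0, 1, 2, 3]
probs = [0.1, 0.3, 0.4, 0.2]

def is_median(m, xs, fs):
    p_le = sum(f for x, f in zip(xs, fs) if x <= m)  # P(X <= m)
    p_ge = sum(f for x, f in zip(xs, fs) if x >= m)  # P(X >= m)
    return p_le >= 0.5 and p_ge >= 0.5

medians = [x for x in values if is_median(x, values, probs)]  # [2]
```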
Example 4.6.1
Let X be a Bernoulli random variable with
parameter θ. Find a median of the distribution of
X.
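One way to approach Example 4.6.1 is to test m = 0 and m = 1 against the definition directly. The case analysis below is our own sketch, to be treated as a check rather than the official solution:

```python
# For X ~ Bernoulli(theta): P(X <= 0) = 1 - theta and P(X >= 0) = 1,
# while P(X <= 1) = 1 and P(X >= 1) = theta.
def bernoulli_medians(theta):
    medians = []
    if 1 - theta >= 0.5:   # m = 0 requires P(X <= 0) >= 1/2
        medians.append(0)
    if theta >= 0.5:       # m = 1 requires P(X >= 1) >= 1/2
        medians.append(1)
    return medians
```

So 0 is a median when θ ≤ 1/2, 1 is a median when θ ≥ 1/2, and both are medians when θ = 1/2.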
4.7 Conditional Distributions
After we observe a random variable, we want to
adjust the probabilities associated with those
that have not yet been observed. In many
situations the parameter θ is treated as a random
variable Θ. In this case, the conditional distribution
of the random variable X given Θ is the
distribution that we would use for X after we
learn the value of Θ.
We write
P(X = x | Θ = θ) = f(x | θ). (4.7.1)
Here,
f(x, θ) = P(X = x, Θ = θ)
is called the joint probability function of X and Θ.
Interchanging the roles of X and Θ, the conditional
probability of Θ given that X = x is given by
P(Θ = θ | X = x) = P(X = x, Θ = θ) / P(X = x). (4.7.2)
We call P(X = x) = f_X(x) the marginal probability
of X. It follows from (4.7.1) and (4.7.2) that
P(Θ = θ | X = x) P(X = x) = P(X = x | Θ = θ) P(Θ = θ).
If we write
ξ(θ) = P(Θ = θ), ξ(θ | x) = P(Θ = θ | X = x),
then, as a function of θ,
ξ(θ | x) = f(x | θ) ξ(θ) / f_X(x) ∝ f(x | θ) ξ(θ). (*)
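When the parameter takes only finitely many values, relation (*) is a one-line computation: multiply the prior by the likelihood and renormalize by the marginal f_X(x). A sketch with hypothetical numbers:

```python
def posterior(prior, likelihood):
    """Posterior xi(theta | x) from prior xi(theta) and likelihood f(x | theta),
    both given as dicts keyed by theta."""
    unnorm = {t: likelihood[t] * prior[t] for t in prior}
    marginal = sum(unnorm.values())  # this sum is f_X(x)
    return {t: u / marginal for t, u in unnorm.items()}

# Hypothetical two-point parameter with a flat prior.
post = posterior(prior={0.25: 0.5, 0.75: 0.5},
                 likelihood={0.25: 0.25, 0.75: 0.75})
# post == {0.25: 0.25, 0.75: 0.75}
```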
For instance, if X given Θ = θ is binomial with
parameters n and θ and the prior distribution of Θ
is uniform on [0, 1], then
ξ(θ | x) ∝ f(x | θ) ξ(θ) ∝ θ^x (1 - θ)^(n-x).
This implies that if X = x is observed, Θ has the
beta distribution with parameters α = x + 1 and
β = n - x + 1. In particular, if n = 2 and x = 1, then
the conditional density function of Θ given that
X = 1 is
ξ(θ | 1) = 6θ(1 - θ), 0 ≤ θ ≤ 1.
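A quick numerical check that ξ(θ | 1) = 6θ(1 - θ) is a genuine density on [0, 1], and that (being the Beta(2, 2) density) it has mean 1/2:

```python
def integrate(h, lo=0.0, hi=1.0, n=100000):
    """Midpoint-rule approximation of the integral of h over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dx) for i in range(n)) * dx

total = integrate(lambda t: 6 * t * (1 - t))     # close to 1
mean = integrate(lambda t: t * 6 * t * (1 - t))  # Beta(2, 2) mean, close to 0.5
```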
Example 4.7.2
Suppose that the proportion θ of defective items
in a large manufactured lot is known to be either
0.1 or 0.2, and that the prior probability function of
θ is as follows:
ξ(θ): P(Θ = 0.1) = w, P(Θ = 0.2) = 1 - w, 0 ≤ w ≤ 1.
Suppose that four items are selected at random
and it is found that exactly one of them is
defective. Let X denote the number of defective
items found in the sample. Then the posterior
probabilities of θ are computed as follows:
P(Θ = 0.1 | X = 1) = (0.1)(0.9)³ w / [(0.1)(0.9)³ w + (0.2)(0.8)³ (1 - w)]
= 729w / [729w + 1024(1 - w)]
= 729w / (1024 - 295w).
Therefore
P(Θ = 0.2 | X = 1) = 1 - P(Θ = 0.1 | X = 1).
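The algebra in Example 4.7.2 can be double-checked by computing the posterior from the raw likelihoods and comparing it with the closed form 729w/(1024 - 295w); the value w = 0.5 below is an arbitrary assumed prior weight:

```python
def post_01(w):
    """P(Theta = 0.1 | X = 1) computed directly from prior and likelihoods.
    (The binomial coefficient C(4, 1) cancels, so it is omitted.)"""
    num = 0.1 * 0.9 ** 3 * w
    den = num + 0.2 * 0.8 ** 3 * (1 - w)
    return num / den

w = 0.5  # assumed prior weight
closed_form = 729 * w / (1024 - 295 * w)
# post_01(w) and closed_form agree
```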
Example 4.7.3
Suppose that the conditional density function of
X given that Θ = θ is given by
f(x | θ) = θ x^(θ-1), 0 < x < 1; θ > 0.
If the prior distribution of Θ is the gamma
distribution with parameters α and β, determine
the mean of the posterior distribution of Θ.
Answer: the posterior distribution of Θ is the
gamma distribution with parameters α + 1 and
β - log x, so the posterior mean is
(α + 1)/(β - log x).
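The stated answer can be verified numerically: multiply the likelihood θx^(θ-1) by the gamma prior, normalize over θ, and compare the resulting mean with (α + 1)/(β - log x). The parameter values α = 2, β = 3, x = 0.4 below are assumed for illustration:

```python
import math

alpha, beta, x = 2.0, 3.0, 0.4  # assumed values for the check

def unnorm_post(t):
    # Likelihood f(x | t) times the gamma prior density in t, dropping
    # constant factors that do not involve t.
    return (t * x ** (t - 1)) * (t ** (alpha - 1) * math.exp(-beta * t))

def integrate(h, lo=0.0, hi=30.0, n=150000):
    """Midpoint-rule approximation of the integral of h over [lo, hi]."""
    dt = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dt) for i in range(n)) * dt

norm = integrate(unnorm_post)
post_mean = integrate(lambda t: t * unnorm_post(t)) / norm
expected = (alpha + 1) / (beta - math.log(x))  # mean of gamma(alpha+1, beta - log x)
```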