Managers often base their decisions on an analysis of uncertainties such as the following:

What are the chances that sales will decrease if we increase prices?

What is the likelihood a new assembly method will increase productivity?

What are the odds that a new investment will be profitable?

Probability is a numerical measure of the likelihood that an event will occur.

Probability values are always assigned on a scale from 0 to 1.

A probability near zero indicates an event is quite unlikely to occur.

A probability near one indicates an event is almost certain to occur.

Increasing Likelihood of Occurrence

Probability scale from 0 to 1:

    Near 0:  the event is very unlikely to occur.
    At .5:   the occurrence of the event is just as likely as it is unlikely.
    Near 1:  the event is almost certain to occur.

In statistics, the notion of an experiment differs somewhat from that of an experiment in the physical sciences.

In statistical experiments, probability determines outcomes.

Even though the experiment is repeated in exactly the same way, an entirely different outcome may occur.

For this reason, statistical experiments are sometimes called random experiments.
An experiment is any process that generates well-defined outcomes.

The sample space for an experiment is the set of all experimental outcomes.

An experimental outcome is also called a sample point.

Experiment               Experiment Outcomes
Toss a coin              Head, tail
Inspect a part           Defective, non-defective
Conduct a sales call     Purchase, no purchase
Roll a die               1, 2, 3, 4, 5, 6
Play a football game     Win, lose, tie

Investment Gain or Loss in 3 Months (in $1000s)

Markley Oil     Collins Mining
     10               8
      5              -2
      0
    -20

Markley Oil:     n1 = 4
Collins Mining:  n2 = 2

Total Number of Experimental Outcomes:  n1n2 = (4)(2) = 8
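The counting rule for multi-step experiments states that an experiment with n1 possible outcomes on the first step and n2 on the second has (n1)(n2) outcomes in total. A minimal Python sketch of this idea (the code and variable names are illustrative additions, not part of the original slides):

```python
from itertools import product

# Possible 3-month gains/losses (in $1000s) for each investment
markley_oil = [10, 5, 0, -20]      # n1 = 4 outcomes
collins_mining = [8, -2]           # n2 = 2 outcomes

# Counting rule for a two-step experiment: n1 * n2 outcomes in total
outcomes = list(product(markley_oil, collins_mining))
print(len(outcomes))   # 8 = (4)(2)
print(outcomes)        # [(10, 8), (10, -2), (5, 8), (5, -2), ...]
```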

Tree Diagram

Markley Oil    Collins Mining    Experimental Outcomes
(Stage 1)      (Stage 2)
Gain 10        Gain 8            (10, 8)     Gain $18,000
               Lose 2            (10, -2)    Gain $8,000
Gain 5         Gain 8            (5, 8)      Gain $13,000
               Lose 2            (5, -2)     Gain $3,000
Even           Gain 8            (0, 8)      Gain $8,000
               Lose 2            (0, -2)     Lose $2,000
Lose 20        Gain 8            (-20, 8)    Lose $12,000
               Lose 2            (-20, -2)   Lose $22,000
Number of combinations of N objects taken n at a time:

    C(N, n) = N! / (n!(N - n)!)
Number of permutations of N objects taken n at a time:

    P(N, n) = N! / (N - n)!
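Both counting rules are available in Python's standard library. The sketch below (an illustrative addition, with N = 5 and n = 2 chosen arbitrarily) evaluates each formula two ways:

```python
from math import comb, perm, factorial

N, n = 5, 2  # illustrative values, not from the slides

# Combinations: order does not matter
print(comb(N, n))                                         # 10
print(factorial(N) // (factorial(n) * factorial(N - n)))  # same result, from the formula

# Permutations: order matters
print(perm(N, n))                                         # 20
print(factorial(N) // factorial(N - n))                   # same result, from the formula
```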

1. The probability assigned to each experimental outcome must be between 0 and 1, inclusively.

    0 ≤ P(Ei) ≤ 1 for all i

    where:
        Ei is the ith experimental outcome
        and P(Ei) is its probability

2. The sum of the probabilities for all experimental
outcomes must equal 1.

P(E1) + P(E2) + . . . + P(En) = 1

where:
n is the number of experimental outcomes

Classical Method
    Assigning probabilities based on the assumption of equally likely outcomes

Relative Frequency Method
    Assigning probabilities based on experimentation or historical data

Subjective Method
    Assigning probabilities based on judgment

Number of          Number
Polishers Rented   of Days
       0              4
       1              6
       2             18
       3             10
       4              2
Number of          Number
Polishers Rented   of Days   Probability
       0              4         .10   (= 4/40)
       1              6         .15
       2             18         .45
       3             10         .25
       4              2         .05
                     40        1.00
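A short sketch of the relative frequency method applied to the rental data above (illustrative Python; the variable names are mine, not from the slides):

```python
# Days on which 0, 1, 2, 3, 4 floor polishers were rented
days = {0: 4, 1: 6, 2: 18, 3: 10, 4: 2}

total_days = sum(days.values())   # 40

# Relative frequency method: probability = frequency / total number of days
probabilities = {x: count / total_days for x, count in days.items()}

print(probabilities)                              # {0: 0.1, 1: 0.15, 2: 0.45, 3: 0.25, 4: 0.05}
print(round(sum(probabilities.values()), 10))     # 1.0 (second basic requirement)
```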
Exper. Outcome    Net Gain or Loss    Probability
   (10, 8)          $18,000 Gain         .20
   (10, -2)          $8,000 Gain         .08
   (5, 8)           $13,000 Gain         .16
   (5, -2)           $3,000 Gain         .26
   (0, 8)            $8,000 Gain         .10
   (0, -2)           $2,000 Loss         .12
   (-20, 8)         $12,000 Loss         .02
   (-20, -2)        $22,000 Loss         .06
                                        1.00
An event is a collection of sample points.

The probability of any event is equal to the sum of the probabilities of the sample points in the event.

If we can identify all the sample points of an experiment and assign a probability to each, we can compute the probability of an event.

Event M = Markley Oil Profitable
M = {(10, 8), (10, -2), (5, 8), (5, -2)}
P(M) = P(10, 8) + P(10, -2) + P(5, 8) + P(5, -2)
= .20 + .08 + .16 + .26
= .70
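A sketch of how the sample-point probabilities could be stored and summed to obtain P(M); the dictionary simply transcribes the table shown earlier (the code itself is an illustrative addition):

```python
# Subjective probabilities assigned to the eight sample points (Markley, Collins)
P = {(10, 8): .20, (10, -2): .08, (5, 8): .16, (5, -2): .26,
     (0, 8): .10, (0, -2): .12, (-20, 8): .02, (-20, -2): .06}

# Event M: Markley Oil profitable (first component of the sample point > 0)
M = [point for point in P if point[0] > 0]

P_M = sum(P[point] for point in M)
print(round(P_M, 2))   # 0.7
```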

Event C = Collins Mining Profitable
C = {(10, 8), (5, 8), (0, 8), (-20, 8)}
P(C) = P(10, 8) + P(5, 8) + P(0, 8) + P(-20, 8)
= .20 + .16 + .10 + .02
= .48

Complement of an Event

Union of Two Events

Intersection of Two Events

Mutually Exclusive Events

The complement of event A is defined to be the event consisting of all sample points that are not in A.

The complement of A is denoted by Ac.

[Venn diagram: event A and its complement Ac within sample space S]
The union of events A and B is the event containing all sample points that are in A or B or both.

The union of events A and B is denoted by A ∪ B.

[Venn diagram: events A and B within sample space S]

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

M ∪ C = Markley Oil Profitable or Collins Mining Profitable (or both)

M ∪ C = {(10, 8), (10, -2), (5, 8), (5, -2), (0, 8), (-20, 8)}

P(M ∪ C) = P(10, 8) + P(10, -2) + P(5, 8) + P(5, -2) + P(0, 8) + P(-20, 8)
         = .20 + .08 + .16 + .26 + .10 + .02
         = .82
The intersection of events A and B is the set of all sample points that are in both A and B.

The intersection of events A and B is denoted by A ∩ B.

[Venn diagram: the intersection of A and B within sample space S]
Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

M ∩ C = Markley Oil Profitable and Collins Mining Profitable

M ∩ C = {(10, 8), (5, 8)}

P(M ∩ C) = P(10, 8) + P(5, 8)
         = .20 + .16
         = .36
The addition law provides a way to compute the probability of event A, or B, or both A and B occurring.

The law is written as:

    P(A ∪ B) = P(A) + P(B) - P(A ∩ B)

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

M ∪ C = Markley Oil Profitable or Collins Mining Profitable

We know: P(M) = .70, P(C) = .48, P(M ∩ C) = .36

Thus: P(M ∪ C) = P(M) + P(C) - P(M ∩ C)
               = .70 + .48 - .36
               = .82

(This result is the same as that obtained earlier using the definition of the probability of an event.)
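A quick numeric check of the addition law using the same sample-point probabilities (illustrative code, not from the slides):

```python
P = {(10, 8): .20, (10, -2): .08, (5, 8): .16, (5, -2): .26,
     (0, 8): .10, (0, -2): .12, (-20, 8): .02, (-20, -2): .06}

M = {pt for pt in P if pt[0] > 0}   # Markley Oil profitable
C = {pt for pt in P if pt[1] > 0}   # Collins Mining profitable

def prob(event):
    return sum(P[pt] for pt in event)

# Addition law: P(M or C) = P(M) + P(C) - P(M and C)
print(round(prob(M) + prob(C) - prob(M & C), 2))   # 0.82
print(round(prob(M | C), 2))                       # 0.82, same result by direct summation
```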
Two events are said to be mutually exclusive if the events have no sample points in common.

Two events are mutually exclusive if, when one event occurs, the other cannot occur.

[Venn diagram: two non-overlapping events A and B within sample space S]

If events A and B are mutually exclusive, P(A ∩ B) = 0.

The addition law for mutually exclusive events is:

    P(A ∪ B) = P(A) + P(B)

There is no need to include -P(A ∩ B).

The probability of an event given that another event has occurred is called a conditional probability.

The conditional probability of A given B is denoted by P(A|B).

A conditional probability is computed as follows:

    P(A|B) = P(A ∩ B) / P(B)

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

P(C|M) = probability that Collins Mining is profitable given Markley Oil is profitable

We know: P(M ∩ C) = .36, P(M) = .70

Thus: P(C|M) = P(C ∩ M) / P(M) = .36 / .70 = .5143
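A sketch of the conditional probability calculation in code (again reusing the transcribed sample-point probabilities; an illustrative addition):

```python
P = {(10, 8): .20, (10, -2): .08, (5, 8): .16, (5, -2): .26,
     (0, 8): .10, (0, -2): .12, (-20, 8): .02, (-20, -2): .06}

M = {pt for pt in P if pt[0] > 0}   # Markley Oil profitable
C = {pt for pt in P if pt[1] > 0}   # Collins Mining profitable

def prob(event):
    return sum(P[pt] for pt in event)

# Conditional probability: P(C|M) = P(C and M) / P(M)
p_C_given_M = prob(C & M) / prob(M)
print(round(p_C_given_M, 4))   # 0.5143
```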

The multiplication law provides a way to compute the probability of the intersection of two events.

The law is written as:

    P(A ∩ B) = P(B)P(A|B)

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

M ∩ C = Markley Oil Profitable and Collins Mining Profitable

We know: P(M) = .70, P(C|M) = .5143

Thus: P(M ∩ C) = P(M)P(C|M)
               = (.70)(.5143)
               = .36

(This result is the same as that obtained earlier using the definition of the probability of an event.)
                                  Collins Mining
Markley Oil            Profitable (C)   Not Profitable (Cc)   Total
Profitable (M)              .36                .34             .70
Not Profitable (Mc)         .12                .18             .30
Total                       .48                .52            1.00

Joint probabilities appear in the body of the table.
Marginal probabilities appear in the margins of the table.
If the probability of event A is not changed by the
existence of event B, we would say that events A
and B are independent.

Two events A and B are independent if:

P(A|B) = P(A) or P(B|A) = P(B)

The multiplication law also can be used as a test to see whether two events are independent.

The law is written as:

    P(A ∩ B) = P(A)P(B)

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

Are events M and C independent?  Does P(M ∩ C) = P(M)P(C)?

We know: P(M ∩ C) = .36, P(M) = .70, P(C) = .48
But: P(M)P(C) = (.70)(.48) = .34, not .36

Hence: M and C are not independent.
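A sketch of the independence test with the values above (illustrative code):

```python
p_M, p_C, p_M_and_C = .70, .48, .36   # values from the earlier slides

# Independent only if P(M and C) equals P(M)P(C)
print(round(p_M * p_C, 3))                  # 0.336 (about .34), not .36
print(abs(p_M * p_C - p_M_and_C) < 1e-9)    # False, so M and C are not independent
```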

Do not confuse the notion of mutually exclusive events with that of independent events.

Two events with nonzero probabilities cannot be both mutually exclusive and independent.

If one mutually exclusive event is known to occur, the other cannot occur; thus, the probability of the other event occurring is reduced to zero (and they are therefore dependent).

Two events that are not mutually exclusive might or might not be independent.
Prior Probabilities  →  New Information  →  Application of Bayes' Theorem  →  Posterior Probabilities

A1 = town council approves the zoning change
A2 = town council disapproves the change

P(A1) = .7, P(A2) = .3

Let B = the event that the planning board recommends against the zoning change (the new information).

P(B|A1) = .2      P(B|A2) = .9

P(Bc|A1) = .8     P(Bc|A2) = .1

Tree Diagram

Town Council      Planning Board        Experimental Outcomes

P(A1) = .7        P(B|A1) = .2          P(A1 ∩ B) = .14
                  P(Bc|A1) = .8         P(A1 ∩ Bc) = .56

P(A2) = .3        P(B|A2) = .9          P(A2 ∩ B) = .27
                  P(Bc|A2) = .1         P(A2 ∩ Bc) = .03
                                                     1.00
Bayes' Theorem:

    P(Ai|B) = P(Ai)P(B|Ai) / [P(A1)P(B|A1) + P(A2)P(B|A2) + . . . + P(An)P(B|An)]

P(A1|B) = P(A1)P(B|A1) / [P(A1)P(B|A1) + P(A2)P(B|A2)]

        = (.7)(.2) / [(.7)(.2) + (.3)(.9)]

        = .34

  (1)          (2)               (3)
              Prior           Conditional
Events    Probabilities      Probabilities
  Ai          P(Ai)             P(B|Ai)
  A1           .7                 .2
  A2           .3                 .9
              1.0

  (1)          (2)               (3)                (4)
              Prior           Conditional          Joint
Events    Probabilities      Probabilities     Probabilities
  Ai          P(Ai)             P(B|Ai)          P(Ai ∩ B)
  A1           .7                 .2             .14   (= .7 x .2)
  A2           .3                 .9              .27
              1.0

  (1)          (2)               (3)                (4)
              Prior           Conditional          Joint
Events    Probabilities      Probabilities     Probabilities
  Ai          P(Ai)             P(B|Ai)          P(Ai ∩ B)
  A1           .7                 .2                .14
  A2           .3                 .9                .27
              1.0                               P(B) = .41

P(Ai|B) = P(Ai ∩ B) / P(B)

  (1)          (2)               (3)                (4)               (5)
              Prior           Conditional          Joint           Posterior
Events    Probabilities      Probabilities     Probabilities     Probabilities
  Ai          P(Ai)             P(B|Ai)          P(Ai ∩ B)          P(Ai|B)
  A1           .7                 .2                .14         .3415   (= .14/.41)
  A2           .3                 .9                .27             .6585
              1.0                               P(B) = .41         1.0000
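The tabular approach translates directly into a few lines of Python; the lists below mirror columns (2) and (3) of the table (an illustrative sketch, not from the slides):

```python
priors = [.7, .3]        # P(A1), P(A2)
likelihoods = [.2, .9]   # P(B|A1), P(B|A2)

# Column (4): joint probabilities P(Ai and B) = P(Ai) P(B|Ai)
joints = [p * l for p, l in zip(priors, likelihoods)]   # [0.14, 0.27]

# P(B) is the sum of the joint probabilities
p_B = sum(joints)                                       # 0.41

# Column (5): posterior probabilities P(Ai|B) = P(Ai and B) / P(B)
posteriors = [j / p_B for j in joints]
print([round(p, 4) for p in posteriors])                # [0.3415, 0.6585]
```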

[Graph: a discrete probability distribution, with probabilities from 0 to .40 plotted for random variable values 0 through 4]

A random variable is a numerical description of the outcome of an experiment.

A discrete random variable may assume either a finite number of values or an infinite sequence of values.

A continuous random variable may assume any numerical value in an interval or collection of intervals.

Let x = number of TVs sold at the store in one day,
where x can take on 5 values (0, 1, 2, 3, 4)

Let x = number of customers arriving in one day,
where x can take on the values 0, 1, 2, . . .

Question                      Random Variable x                          Type

Family size                   x = Number of dependents reported          Discrete
                              on tax return

Distance from home to store   x = Distance in miles from home            Continuous
                              to the store site

Own dog or cat                x = 1 if own no pet;                       Discrete
                                  2 if own dog(s) only;
                                  3 if own cat(s) only;
                                  4 if own dog(s) and cat(s)

The probability distribution for a random variable describes how probabilities are distributed over the values of the random variable.

We can describe a discrete probability distribution with a table, graph, or formula.

Two types of discrete probability distributions will be introduced.

First type: uses the rules of assigning probabilities to experimental outcomes to determine probabilities for each value of the random variable.

Second type: uses a special mathematical formula to compute the probabilities for each value of the random variable.

The probability distribution is defined by a probability function, denoted by f(x), that provides the probability for each value of the random variable.

The required conditions for a discrete probability function are:

    f(x) ≥ 0

    Σf(x) = 1

There are three methods for assigning probabilities to random variables: the classical method, the subjective method, and the relative frequency method.

The use of the relative frequency method to develop discrete probability distributions leads to what is called an empirical discrete distribution (see the example on the next slide).

Units Sold   Number of Days        x     f(x)
    0              80              0     .40   (= 80/200)
    1              50              1     .25
    2              40              2     .20
    3              10              3     .05
    4              20              4     .10
                  200                   1.00
[Graph: graphical representation of the probability distribution, with probability (0 to .50) on the vertical axis and values of the random variable x (TV sales, 0 through 4) on the horizontal axis]
In addition to tables and graphs, a formula that gives the probability function, f(x), for every value of x is often used to describe probability distributions.

Several discrete probability distributions specified by formulas are the discrete uniform, binomial, Poisson, and hypergeometric distributions.

The discrete uniform probability distribution is the simplest example of a discrete probability distribution given by a formula.

The discrete uniform probability function is

    f(x) = 1/n      (the values of the random variable are equally likely)

where:
    n = the number of values the random variable may assume

The expected value, or mean, of a random variable is a measure of its central location.

    E(x) = μ = Σxf(x)

The expected value is a weighted average of the values the random variable may assume. The weights are the probabilities.

The expected value does not have to be a value the random variable can assume.

The variance summarizes the variability in the values of a random variable.

    Var(x) = σ² = Σ(x - μ)²f(x)

The variance is a weighted average of the squared deviations of a random variable from its mean. The weights are the probabilities.

The standard deviation, σ, is defined as the positive square root of the variance.
 x     f(x)    xf(x)
 0     .40      .00
 1     .25      .25
 2     .20      .40
 3     .05      .15
 4     .10      .40
           E(x) = 1.20

(expected number of TVs sold in a day)
 x     x - μ    (x - μ)²    f(x)    (x - μ)²f(x)
 0     -1.2       1.44      .40        .576
 1     -0.2       0.04      .25        .010
 2      0.8       0.64      .20        .128
 3      1.8       3.24      .05        .162
 4      2.8       7.84      .10        .784

Variance of daily sales = σ² = 1.660 (TVs squared)
Standard deviation of daily sales = 1.2884 TVs
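A sketch that reproduces the expected value, variance, and standard deviation calculations for the TV sales distribution (illustrative Python):

```python
from math import sqrt

f = {0: .40, 1: .25, 2: .20, 3: .05, 4: .10}   # probability function f(x)

mean = sum(x * p for x, p in f.items())                     # E(x) = 1.2
variance = sum((x - mean) ** 2 * p for x, p in f.items())   # sigma^2 = 1.66
std_dev = sqrt(variance)                                    # sigma ~ 1.2884

print(round(mean, 2), round(variance, 3), round(std_dev, 4))   # 1.2 1.66 1.2884
```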
A probability distribution involving two random variables is called a bivariate probability distribution.

Each outcome of a bivariate experiment consists of two values, one for each random variable.
    Example: rolling a pair of dice

When dealing with bivariate probability distributions, we are often interested in the relationship between the random variables.

Benefits Job Satisfaction (y)
Package (x) 1 2 3 Total
1 28 26 4 58
2 22 42 34 98
3 2 10 32 44

Total 52 78 70 200

Benefits Job Satisfaction (y)
Package (x) 1 2 3 Total
1 .14 .13 .02 .29
2 .11 .21 .17 .49
3 .01 .05 .16 .22
Total .26 .39 .35 1.00

x f(x) xf(x) x - E(x) (x - E(x))2 (x - E(x))2f(x)

1 0.29 0.29 -0.93 0.8649 0.250821

2 0.49 0.98 0.07 0.0049 0.002401

3 0.22 0.66 1.07 1.1449 0.251878

E(x) = 1.93 Var(x) = 0.505100

y f(y) yf(y) y - E(y) (y - E(y))2 (y - E(y))2f(y)

1 0.26 0.26 -1.09 1.1881 0.308906

2 0.39 0.78 -0.09 0.0081 0.003159

3 0.35 1.05 0.91 0.8281 0.289835

E(y) = 2.09 Var(y) = 0.601900

 s     f(s)    sf(s)    s - E(s)    (s - E(s))²    (s - E(s))²f(s)
 2     0.14    0.28      -2.02        4.0804          0.571256
 3     0.24    0.72      -1.02        1.0404          0.249696
 4     0.24    0.96      -0.02        0.0004          0.000960
 5     0.22    1.10       0.98        0.9604          0.211376
 6     0.16    0.96       1.98        3.9204          0.627264
               E(s) = 4.02                       Var(s) = 1.660552
Covariance of x and y:

    σxy = [Var(x + y) - Var(x) - Var(y)] / 2

σx = √0.5051 = 0.7107038

σy = √0.6019 = 0.7758221

σxy = [Var(x + y) - Var(x) - Var(y)] / 2 = [1.660552 - 0.5051 - 0.6019] / 2 = 0.276776

ρxy = σxy / (σx σy) = 0.276776 / [0.7107038(0.7758221)] ≈ .50
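A sketch that computes the covariance and correlation directly from the joint probability table on the earlier slide; it agrees with the values above (covariance about .28, correlation about .50). The code is an illustrative addition:

```python
from math import sqrt

# Joint probabilities f(x, y) for benefits package x and job satisfaction y
f = {(1, 1): .14, (1, 2): .13, (1, 3): .02,
     (2, 1): .11, (2, 2): .21, (2, 3): .17,
     (3, 1): .01, (3, 2): .05, (3, 3): .16}

E_x = sum(x * p for (x, y), p in f.items())          # 1.93
E_y = sum(y * p for (x, y), p in f.items())          # 2.09
E_xy = sum(x * y * p for (x, y), p in f.items())     # 4.31

cov_xy = E_xy - E_x * E_y                            # about 0.2763

var_x = sum((x - E_x) ** 2 * p for (x, y), p in f.items())   # about 0.5051
var_y = sum((y - E_y) ** 2 * p for (x, y), p in f.items())   # about 0.6019

corr_xy = cov_xy / (sqrt(var_x) * sqrt(var_y))       # about 0.50
print(round(cov_xy, 4), round(corr_xy, 2))
```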

1. The experiment consists of a sequence of n identical trials.

2. Two outcomes, success and failure, are possible on each trial.

3. The probability of a success, denoted by p, does not change from trial to trial (the stationarity assumption).

4. The trials are independent.
Our interest is in the number of successes occurring in the n trials.

We let x denote the number of successes occurring in the n trials.

Binomial Probability Function

    f(x) = [n! / (x!(n - x)!)] p^x (1 - p)^(n - x)

where:
    x = the number of successes
    p = the probability of a success on one trial
    n = the number of trials
    f(x) = the probability of x successes in n trials
    n! = n(n - 1)(n - 2) . . . (2)(1)

    f(x) = [n! / (x!(n - x)!)] p^x (1 - p)^(n - x)

    n! / (x!(n - x)!) is the number of experimental outcomes providing exactly x successes in n trials.

    p^x (1 - p)^(n - x) is the probability of a particular sequence of trial outcomes with x successes in n trials.

Example: Evans Electronics

For any hourly employee chosen at random, the probability that the employee will leave the company within the year is p = .10. If n = 3 hourly employees are chosen at random, what is the probability that exactly x = 1 of them will leave within the year?

Listing the experimental outcomes with exactly 1 success (S = leaves, F = stays):

Experimental     Probability of
Outcome          Experimental Outcome
(S, F, F)        p(1 - p)(1 - p) = (.1)(.9)(.9) = .081
(F, S, F)        (1 - p)p(1 - p) = (.9)(.1)(.9) = .081
(F, F, S)        (1 - p)(1 - p)p = (.9)(.9)(.1) = .081
                                        Total = .243

Using the probability function:

Let: p = .10, n = 3, x = 1

    f(x) = [n! / (x!(n - x)!)] p^x (1 - p)^(n - x)

    f(1) = [3! / (1!(3 - 1)!)] (0.1)^1 (0.9)^2 = 3(.1)(.81) = .243
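A sketch of the binomial probability function using only the standard library (a package routine such as scipy.stats.binom.pmf would give the same numbers):

```python
from math import comb

def binomial_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Evans Electronics: n = 3 employees, p = .10 chance each leaves, x = 1 leaves
print(round(binomial_pmf(1, 3, 0.10), 4))   # 0.243

# Full distribution for x = 0, 1, 2, 3 (matches the n = 3, p = .10 row of the table)
print([round(binomial_pmf(x, 3, 0.10), 4) for x in range(4)])   # [0.729, 0.243, 0.027, 0.001]
```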

Using a tree diagram:

1st Worker     2nd Worker     3rd Worker      x    Probability
Leaves (.1)    Leaves (.1)    Leaves (.1)     3      .0010
                              Stays  (.9)     2      .0090
               Stays (.9)     Leaves (.1)     2      .0090
                              Stays  (.9)     1      .0810
Stays (.9)     Leaves (.1)    Leaves (.1)     2      .0090
                              Stays  (.9)     1      .0810
               Stays (.9)     Leaves (.1)     1      .0810
                              Stays  (.9)     0      .7290
Statisticians have developed tables that give probabilities and cumulative probabilities for a binomial random variable.

These tables can be found in some statistics textbooks.

With modern calculators and the capability of statistical software packages, such tables are almost unnecessary.

p
n x .05 .10 .15 .20 .25 .30 .35 .40 .45 .50
3 0 .8574 .7290 .6141 .5120 .4219 .3430 .2746 .2160 .1664 .1250
1 .1354 .2430 .3251 .3840 .4219 .4410 .4436 .4320 .4084 .3750
2 .0071 .0270 .0574 .0960 .1406 .1890 .2389 .2880 .3341 .3750
3 .0001 .0010 .0034 .0080 .0156 .0270 .0429 .0640 .0911 .1250

Expected Value

    E(x) = μ = np

Variance

    Var(x) = σ² = np(1 - p)

Standard Deviation

    σ = √(np(1 - p))

Example: Evans Electronics

Expected Value

    E(x) = np = 3(.1) = .3 employees out of 3

Variance

    Var(x) = np(1 - p) = 3(.1)(.9) = .27

Standard Deviation

    σ = √(3(.1)(.9)) = .52 employees

A Poisson distributed random variable is often useful in estimating the number of occurrences over a specified interval of time or space.

It is a discrete random variable that may assume an infinite sequence of values (x = 0, 1, 2, . . .).

Examples of Poisson distributed random variables:

the number of knotholes in 14 linear feet of pine board

the number of vehicles arriving at a toll booth in one hour

Bell Labs used the Poisson distribution to model the arrival of phone calls.

1. The probability of an occurrence is the same for any two intervals of equal length.

2. The occurrence or nonoccurrence in any interval is independent of the occurrence or nonoccurrence in any other interval.

Poisson Probability Function

    f(x) = (μ^x e^(-μ)) / x!

where:
    x = the number of occurrences in an interval
    f(x) = the probability of x occurrences in an interval
    μ = mean number of occurrences in an interval
    e = 2.71828
    x! = x(x - 1)(x - 2) . . . (2)(1)
Since there is no stated upper limit for the number of occurrences, the probability function f(x) is applicable for values x = 0, 1, 2, . . . without limit.

In practical applications, x will eventually become large enough so that f(x) is approximately zero and the probability of any larger values of x becomes negligible.

Example: Arrivals occur randomly at an average rate of 6 per hour. What is the probability of exactly 4 arrivals in a 30-minute period?

Using the probability function:

    μ = 6/hour = 3/half-hour,  x = 4

    f(4) = (3^4)(2.71828)^(-3) / 4! = .1680
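A sketch of the Poisson probability function (scipy.stats.poisson.pmf is an equivalent shortcut):

```python
from math import exp, factorial

def poisson_pmf(x, mu):
    """Probability of x occurrences in an interval with mean mu."""
    return mu ** x * exp(-mu) / factorial(x)

# Mean of 3 arrivals per half-hour; probability of exactly 4 arrivals
print(round(poisson_pmf(4, 3), 4))   # 0.168
```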

[Graph: Poisson probabilities for the number of arrivals in 30 minutes, showing probabilities (up to about .25) for 0 through 10 arrivals. Actually, the sequence continues: 11, 12, 13, . . .]
A property of the Poisson distribution is that the mean and variance are equal:

    μ = σ²

For the 30-minute arrival example:  μ = σ² = 3

The hypergeometric distribution is closely related to the binomial distribution.

However, for the hypergeometric distribution:

the trials are not independent, and

the probability of success changes from trial to trial.

Hypergeometric Probability Function

    f(x) = [C(r, x) C(N - r, n - x)] / C(N, n)

where:
    x = number of successes
    n = number of trials
    f(x) = probability of x successes in n trials
    N = number of elements in the population
    r = number of elements in the population labeled success
    f(x) = [C(r, x) C(N - r, n - x)] / C(N, n)     for 0 ≤ x ≤ r

where:
    C(r, x) = the number of ways x successes can be selected from a total of r successes in the population
    C(N - r, n - x) = the number of ways n - x failures can be selected from a total of N - r failures in the population
    C(N, n) = the number of ways n elements can be selected from a population of size N
The probability function f(x) on the previous slide is usually applicable for values of x = 0, 1, 2, . . . , n.

However, only values of x where 1) x ≤ r and 2) n - x ≤ N - r are valid.

If these two conditions do not hold for a value of x, the corresponding f(x) equals 0.

Example: A population contains N = 4 elements, r = 2 of which are labeled success. If n = 2 elements are selected at random without replacement, what is the probability that both selected elements are successes (x = 2)?

Using the probability function:

    f(2) = [C(2, 2) C(2, 0)] / C(4, 2)
         = [(2!/(2!0!)) (2!/(0!2!))] / (4!/(2!2!))
         = 1/6
         = .167
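A sketch of the hypergeometric probability function with the N = 4, r = 2, n = 2 values from the example:

```python
from math import comb

def hypergeometric_pmf(x, N, r, n):
    """Probability of x successes when n items are drawn without replacement
    from a population of N items containing r successes."""
    return comb(r, x) * comb(N - r, n - x) / comb(N, n)

# N = 4 elements, r = 2 successes, draw n = 2; probability both are successes
print(round(hypergeometric_pmf(2, N=4, r=2, n=2), 3))   # 0.167
```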

Mean

    E(x) = μ = n(r/N)

Variance

    Var(x) = σ² = n(r/N)(1 - r/N)((N - n)/(N - 1))

Mean

    μ = n(r/N) = 2(2/4) = 1

Variance

    σ² = 2(2/4)(1 - 2/4)((4 - 2)/(4 - 1)) = 1/3 = .333
Consider a hypergeometric distribution with n trials and let p = (r/N) denote the probability of a success on the first trial.

If the population size is large, the term (N - n)/(N - 1) approaches 1.

The expected value and variance can then be written E(x) = np and Var(x) = np(1 - p).

Note that these are the expressions for the expected value and variance of a binomial distribution.
When the population size is large, a hypergeometric
distribution can be approximated by a binomial
distribution with n trials and a probability of
success p = (r/N).
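A small sketch illustrating the approximation: for a large population, hypergeometric probabilities are close to binomial probabilities with p = r/N (the population values below are illustrative, not from the slides):

```python
from math import comb

def hypergeometric_pmf(x, N, r, n):
    return comb(r, x) * comb(N - r, n - x) / comb(N, n)

def binomial_pmf(x, n, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Illustrative large population: N = 10000, r = 2000 successes, n = 10 draws
N, r, n = 10_000, 2_000, 10
p = r / N   # 0.2

for x in range(3):
    print(x, round(hypergeometric_pmf(x, N, r, n), 4), round(binomial_pmf(x, n, p), 4))
# The two columns are nearly identical when N is large.
```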
