Example 1: A test for a certain disease is assumed to be correct 95% of the time: if a
person has the disease, the test results are positive with probability 0.95, and if the person
does not have the disease, the test results are negative with probability 0.95. A random
person has probability 0.001 of having the disease. Given that a person just tested
positive, what is the probability that he has the disease?
IEMS 315, Lecture 2
Solution: Let
A be the event that the tested person has the disease;
B the event that the test results are positive.
We are asked to compute P(A|B). However, we are given P(B|A) and P(B|A^c), in addition
to P(A). We therefore use Bayes' rule:

P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A^c)P(A^c)]
       = (0.95)(0.001) / [(0.95)(0.001) + (0.05)(0.999)]
       ≈ 0.0187.

So even after testing positive, the probability of having the disease is less than 2%,
because the disease is so rare that false positives far outnumber true positives.
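As a sanity check, Bayes' rule can be evaluated directly with the numbers from the problem statement (a minimal Python sketch; the variable names are ours):

```python
p_A = 0.001          # P(A): a random person has the disease
p_B_given_A = 0.95   # P(B|A): test positive given disease
p_B_given_Ac = 0.05  # P(B|A^c): false-positive rate = 1 - 0.95

# Law of total probability: P(B) = P(B|A)P(A) + P(B|A^c)P(A^c)
p_B = p_B_given_A * p_A + p_B_given_Ac * (1 - p_A)

# Bayes' rule: P(A|B) = P(B|A)P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))  # → 0.0187
```

Note how the tiny prior P(A) = 0.001 dominates: even a 95%-accurate test leaves the posterior below 2%.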
Example 2: Suppose you are on a game show, and you are given the choice of three doors.
Behind one of the doors is a car; behind each of the other two doors there is a goat. You
pick a door, say door number 1, and the host, who knows what is behind all the doors, opens
another door, say door number 3, which has a goat behind it. He then asks you if you want
to change your previous choice or not. What should you do? Does it matter which door you
choose?
Solution: First note that the problem is not well defined – non-mathematical issues are
important here. In particular, does the host ask you to change only if he knows that you
chose the right door, or only if you chose the wrong one? Or does he always ask you whether
you want to change your choice? (In fact, one may wonder whether the car is desirable, or a
goat is. We take it for granted that you desire the car.)
It is natural to assume that the host always opens a door with a goat, regardless of your
choice. Given this assumption, it can be shown that it is better to change your choice: If
you stay with your first choice, then you win with probability 1/3. That was the probability
of choosing the right door before the host opened a door for you, and that has not changed.
However, if you do change your choice, then the probability that your second choice has the
car is 2/3. One way to see why this must be true is that, before the host opened a door
with a goat, there is a probability of 2/3 that the car is behind one of the doors 2 or 3.
However, once the host opens door number 3 and shows you a goat, you know that the 2/3
probability of having the car behind one of the doors 2, 3 has all its mass on door number 2.
Another way to see this is to consider two strategies: in the first you never change, and
in the second you always change (assuming you can run this trial many times). Clearly,
with the first strategy you will win approximately 1/3 of the time. That means that if you
always change, you must win approximately 2/3 of the time, if the number of trials is large
enough.
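The two-strategies argument lends itself to simulation. A minimal Python sketch (the helper `play` is ours, not part of the notes), assuming the host always opens a goat door:

```python
import random

def play(switch, rng):
    """Play one round of the Monty Hall game; return True if you win the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that hides a goat and is not your pick.
    opened = rng.choice([d for d in doors if d != car and d != pick])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)   # fixed seed for reproducibility
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
change = sum(play(True, rng) for _ in range(n)) / n
print(stay, change)  # stay ≈ 1/3, change ≈ 2/3
```

Running this confirms the argument above: never switching wins about a third of the time, always switching about two thirds.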
To read more about this problem, google the “Monty Hall Problem”. There is a nice article
in Wikipedia, including a simple proof. According to Wikipedia, “Even when given explanations,
simulations, and formal mathematical proofs, many people still do not accept that
switching is the best strategy.” But hopefully, you do.
(You can play the game online: http://www.math.ucsd.edu/~crypto/Monty/monty.html.
You can try each of the two strategies (switching your initially chosen door, and not switch-
ing) several times and check for yourself whether switching is the better strategy.)
Independent Events
Two events A and B are said to be independent if

P(AB) = P(A)P(B).    (1)
It follows from the definition of conditional probability that A and B are independent if
P(A|B) = P(A). That is, the probability that A occurs does not depend on whether B
occurs. So why are we using (1) as a definition? This is because P(A|B) is defined for events
B with P(B) > 0, but we want the concept of independence of events to be well defined
even when P(B) = 0.
However, it is helpful to think of independence in terms of the conditional probability because
it gives a much more intuitive explanation to the meaning of independence: A and B are
independent if the probability of A occurring, given that B has occurred, is the same as the
probability of A occurring, and vice versa. That is, knowing that B occurred gives no useful
information about the probability of A.
Do not confuse independence of events with the events being disjoint. In fact, two disjoint
events A, B with P(A) > 0 and P(B) > 0 are never independent, since

P(AB) = P(∅) = 0 < P(A)P(B).
The simplest example is when A is an event with 0 < P(A) < 1. Then A and Ac are
disjoint, but are clearly not independent; if we know that A occurred, then we know with
certainty (with prob. 1) that Ac did not occur, and vice versa – the two events are completely
dependent!
Example 1.10, pg. 11 in 10th edition: Pairwise independence does not imply
independence. Let a ball be drawn from an urn containing four balls, numbered 1,2,3,4.
Let E = {1, 2}, F = {1, 3}, G = {1, 4}. If all four outcomes are equally likely, then

P(EF) = P(EG) = P(FG) = 1/4 = P(E)P(F) = P(E)P(G) = P(F)P(G),

so the events are pairwise independent. However,

1/4 = P(EFG) ≠ P(E)P(F)P(G) = (1/2)^3 = 1/8.
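The pairwise-versus-full independence claim can be verified by enumerating the four equally likely outcomes; a short Python sketch (using exact fractions so the equalities are checked exactly):

```python
from fractions import Fraction
from itertools import combinations

outcomes = {1, 2, 3, 4}              # four equally likely balls
E, F, G = {1, 2}, {1, 3}, {1, 4}

def prob(event):
    """Probability of an event under the uniform distribution on outcomes."""
    return Fraction(len(event & outcomes), len(outcomes))

# Pairwise independence: P(XY) = P(X)P(Y) for every pair of events.
for X, Y in combinations([E, F, G], 2):
    assert prob(X & Y) == prob(X) * prob(Y)   # each side equals 1/4

# But full independence fails: P(EFG) = 1/4 while P(E)P(F)P(G) = 1/8.
print(prob(E & F & G), prob(E) * prob(F) * prob(G))
```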
More generally, finitely many events A1, . . . , An are independent if for every subcollection
Ak1 , . . . , Akr of them, P(Ak1 · · · Akr) = P(Ak1) · · · P(Akr). This definition extends to
an infinite sequence: the events {Ak : k ≥ 1} are said to be independent if the events in any
finite subset of this sequence are independent.