Probabilistic Inference
- Probability theory
- Bayes Network
- Independence
- Inference
        (B)     (E)
          \     /
           (A)
          /     \
        (J)     (M)

J, M - Evidence
E, A - Hidden
B    - Query
P = probability distribution
Q = query variables
E = evidence variables

Posterior distribution:
P(Q1, Q2, ... | E1=e1, E2=e2, ...)

Most likely explanation:
argmax_q P(Q1=q1, Q2=q2, ... | E1=e1, E2=e2, ...)
Nodes and their roles for the query P(B | +j, +m):

Node   Evidence (E)   Hidden (H)   Query (Q)
 B                                     *
 E                        *
 A                        *
 J         *
 M         *
Enumeration

Definition of conditional probability:
P(Q|E) = P(Q,E) / P(E)

Notation:
P(E=true)  = P(+e)
P(E=false) = P(¬e) = 1 - P(+e)

Query: P(+b | +j, +m)

P(+b | +j, +m) = P(+b, +j, +m) / P(+j, +m)

P(+b, +j, +m) = Σe Σa P(+b, +j, +m, e, a)
              = Σe Σa P(+b) · P(e) · P(a|+b,e) · P(+j|a) · P(+m|a)
              = Σe Σa f(e, a)
              = f(+e,+a) + f(+e,¬a) + f(¬e,+a) + f(¬e,¬a)
AI-Unit4

        (B)     (E)
          \     /
           (A)
          /     \
        (J)     (M)

B - Burglary
E - Earthquake
A - Alarm
J - John calls
M - Mary calls
P(B)
+b  0.001
¬b  0.999

P(E)
+e  0.002
¬e  0.998

P(J | A)
+a : +j 0.90   ¬j 0.10
¬a : +j 0.05   ¬j 0.95

P(M | A)
+a : +m 0.70   ¬m 0.30
¬a : +m 0.01   ¬m 0.99

P(A | B, E)
+b +e : +a 0.95    ¬a 0.05
+b ¬e : +a 0.94    ¬a 0.06
¬b +e : +a 0.29    ¬a 0.71
¬b ¬e : +a 0.001   ¬a 0.999
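The enumeration formula and the CPTs above can be put together in a short program. This is a minimal sketch (variable and table names are my own); it sums the hidden variables E and A to compute P(+b | +j, +m):

```python
from itertools import product

# Inference by enumeration on the burglary network, using the CPTs above.
# P(+b | +j, +m) = P(+b,+j,+m) / P(+j,+m); hidden variables E, A are summed out.

P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(+a | b, e)
P_J = {True: 0.90, False: 0.05}                      # P(+j | a)
P_M = {True: 0.70, False: 0.01}                      # P(+m | a)

def bern(p_true, val):
    """P(X = val) given P(X = true) = p_true."""
    return p_true if val else 1.0 - p_true

def joint(b, e, a, j, m):
    """Chain rule along the network structure."""
    return (bern(0.001, b) * bern(0.002, e) * bern(P_A[(b, e)], a)
            * bern(P_J[a], j) * bern(P_M[a], m))

TF = (True, False)
num = sum(joint(True, e, a, True, True) for e, a in product(TF, TF))
den = sum(joint(b, e, a, True, True) for b, e, a in product(TF, TF, TF))
posterior = num / den
print(posterior)   # ~0.284: a burglary is still unlikely even when both call
```

Note that even with both John and Mary calling, the posterior probability of a burglary is only about 0.284, because the burglary prior is so small.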
How many probability values are needed for a full joint distribution over n binary variables?
( ) n
(*) 2^n
(A Bayes network typically needs far fewer, since each node stores only P(node | parents).)
Quiz: are J and M independent?
(*) Dependent
( ) Independent

J and M are not directly connected, but both depend on A: learning that John
called raises the probability of the alarm, which in turn raises the
probability that Mary called.

        (B)     (E)
          \     /
           (A)
          /     \
        (J)     (M)

Causal direction: expand the sum in the causal order of the network:

P(+b, +j, +m) = Σe Σa P(+b) · P(e) · P(a|+b,e) · P(+j|a) · P(+m|a)
Variable Elimination

(R)---->(T)---->(L)

P(R)
+r  0.1
¬r  0.9

P(T | R)
+r : +t 0.8   ¬t 0.2
¬r : +t 0.1   ¬t 0.9

P(L | T)
+t : +l 0.3   ¬l 0.7
¬t : +l 0.1   ¬l 0.9

P(+l) = Σr Σt P(r) · P(t|r) · P(+l|t)
1. Joining factors: multiply P(R) and P(T|R) into a single factor P(R,T).

P(R, T)
+r +t  0.08  (0.1 * 0.8)
+r ¬t  0.02  (0.1 * 0.2)
¬r +t  0.09  (0.9 * 0.1)
¬r ¬t  0.81  (0.9 * 0.9)

(R)---->(T)---->(L)   becomes   (R,T)---->(L)
2. Elimination: sum out R from P(R,T) to get P(T).

P(T)
+t  0.17  (0.08 + 0.09)
¬t  0.83  (0.02 + 0.81)

(R,T)---->(L)   becomes   (T)---->(L)
3. Join T and L: multiply P(T) and P(L|T) into P(T,L).

P(T, L)
+t +l  0.051  (0.17 * 0.3)
+t ¬l  0.119  (0.17 * 0.7)
¬t +l  0.083  (0.83 * 0.1)
¬t ¬l  0.747  (0.83 * 0.9)

4. Eliminate T: sum out T to get the answer.

P(L)
+l  0.134  (0.051 + 0.083)
¬l  0.866  (0.119 + 0.747)
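The join/eliminate steps can be sketched in a few lines of code (names are my own); it reproduces the intermediate factor P(T) and the final answer P(+l):

```python
# Variable elimination on the chain R -> T -> L, using the tables above.

P_R = 0.1                       # P(+r)
P_T = {True: 0.8, False: 0.1}   # P(+t | r)
P_L = {True: 0.3, False: 0.1}   # P(+l | t)

def bern(p_true, val):
    return p_true if val else 1.0 - p_true

TF = (True, False)

# Step 1: join R and T into one factor f1(r, t) = P(r) * P(t | r)
f1 = {(r, t): bern(P_R, r) * bern(P_T[r], t) for r in TF for t in TF}
# Step 2: eliminate R: f2(t) = sum_r f1(r, t)  ->  {+t: 0.17, ¬t: 0.83}
f2 = {t: f1[(True, t)] + f1[(False, t)] for t in TF}
# Steps 3-4: join with P(L | T) and eliminate T
p_l = sum(f2[t] * bern(P_L[t], True) for t in TF)
print(p_l)   # 0.134
```

Eliminating R before touching L keeps every intermediate factor small; that is the whole point of variable elimination over plain enumeration.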
[Figure residue: tally marks counting samples (h/t) against the P(T|R) and P(L|T) tables, illustrating approximate inference by sampling.]
New Network

        (C)
       /   \
    (S)     (R)
       \   /
        (W)

C - Cloudy
S - Sprinkler
R - Rain
W - WetGrass
P(C)
+c  0.5
¬c  0.5

P(S | C)
+c : +s 0.1   ¬s 0.9
¬c : +s 0.5   ¬s 0.5

P(R | C)
+c : +r 0.8   ¬r 0.2
¬c : +r 0.2   ¬r 0.8

P(W | S, R)
+s +r : +w 0.99   ¬w 0.01
+s ¬r : +w 0.90   ¬w 0.10
¬s +r : +w 0.90   ¬w 0.10
¬s ¬r : +w 0.01   ¬w 0.99
Sampling: suppose we have sampled +Cloudy, ¬Sprinkler, +Rain so far. WetGrass is then sampled from P(W | ¬s, +r):

(*) +WetGrass   (¬s, +r, +w : 0.90)
( ) ¬WetGrass   (¬s, +r, ¬w : 0.10)

A sample is consistent with a query's evidence if it agrees with the evidence values. For the query P(W | C), whether the sample (¬c, +s, +r, ¬w) is kept depends on the evidence: it is consistent with ¬c but inconsistent with +c.
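The "sample each variable given its already-sampled parents" idea can be made concrete. Below is a minimal sketch of prior (forward) sampling on the sprinkler network, using the CPTs above (variable names are my own):

```python
import random

# Prior sampling: sample each node given its parents, in topological order.
random.seed(4)
P_C = 0.5
P_S = {True: 0.1, False: 0.5}   # P(+s | c)
P_R = {True: 0.8, False: 0.2}   # P(+r | c)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}   # P(+w | s, r)

def sample():
    c = random.random() < P_C
    s = random.random() < P_S[c]
    r = random.random() < P_R[c]
    w = random.random() < P_W[(s, r)]
    return c, s, r, w

samples = [sample() for _ in range(100_000)]
# Sample frequencies approximate the joint distribution; for example P(+w):
p_w = sum(w for *_, w in samples) / len(samples)
print(p_w)   # -> about 0.65 with these CPTs
```

With enough samples the relative frequencies converge to the true probabilities; conditional queries then require handling the evidence, which the next techniques address.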
Rejection Sampling

(B)---->(A)

B - Burglary
A - Alarm

Query: P(B | +a). Samples that do not match the evidence +a are rejected:

¬b, ¬a   (rejected)
¬b, ¬a   (rejected)
¬b, ¬a   (rejected)
¬b, +a   (kept)
¬b, +a   (kept)
+b, +a   (kept)
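A minimal sketch of rejection sampling for this two-node network. The notes give no CPT for it, so the numbers below are assumed purely for illustration:

```python
import random

# Rejection sampling for P(+b | +a) on the network B -> A.
random.seed(0)
P_B = 0.1                      # assumed P(+b), for illustration only
P_A = {True: 0.9, False: 0.2}  # assumed P(+a | b), for illustration only

kept = hits = 0
for _ in range(100_000):
    b = random.random() < P_B
    a = random.random() < P_A[b]
    if not a:                  # sample disagrees with evidence +a: reject
        continue
    kept += 1
    hits += b
estimate = hits / kept
print(estimate)   # -> about 1/3 (exactly 0.09 / 0.27 with these numbers)
```

Note how many samples are thrown away: with rare evidence, almost all the work is wasted, which motivates likelihood weighting below.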
Likelihood Weighting

Rejection sampling is wasteful when the evidence is unlikely: most samples are inconsistent with it and get thrown away. Likelihood weighting instead fixes the evidence variables to their observed values and weights each sample by the probability of the evidence given its parents.

Query: P(R | +s, +w). Sample: +Cloudy, +Sprinkler, +Rain, +WetGrass (+c, +s, +r, +w)

weight = P(+s | +c) · P(+w | +s, +r) = 0.1 * 0.99 = 0.099

Likelihood weighting still has a weakness: for a query such as P(C | +s, +r), C is sampled from its prior without regard to the downstream evidence, so many samples receive tiny weights.
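The weighting scheme above can be sketched for the query P(R | +s, +w) on the sprinkler network (names are my own):

```python
import random

# Likelihood weighting: evidence variables are clamped and each
# sample carries a weight equal to P(evidence | parents).
random.seed(1)
P_C = 0.5
P_S = {True: 0.1, False: 0.5}   # P(+s | c)
P_R = {True: 0.8, False: 0.2}   # P(+r | c)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}   # P(+w | s, r)

num = den = 0.0
for _ in range(100_000):
    c = random.random() < P_C
    s = True                    # evidence +s: clamp, weight by P(+s | c)
    weight = P_S[c]
    r = random.random() < P_R[c]
    weight *= P_W[(s, r)]       # evidence +w: clamp, weight by P(+w | s, r)
    den += weight
    num += weight * r
estimate = num / den
print(estimate)   # -> about 0.32 with these CPTs
```

Every sample is used (none are rejected), but low-weight samples contribute little, which is why unlikely upstream evidence still hurts.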
Gibbs Sampling (Markov Chain Monte Carlo, MCMC)

Start from any complete assignment that agrees with the evidence, then repeatedly resample one non-evidence variable at a time, conditioned on the current values of all the others. Consecutive samples differ in a single variable, e.g.:

(+c, +s, ¬r, ¬w)  ->  (+c, ¬s, ¬r, ¬w)  ->  (+c, ¬s, +r, ¬w)  ->  ...

In the limit, the samples are distributed according to the true posterior.
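A minimal Gibbs sampler for the query P(+r | +s, +w) on the sprinkler network. The full conditionals below follow from the network structure; the code layout is my own sketch:

```python
import random

# Gibbs sampling: resample the non-evidence variables C and R in turn
# from their full conditionals, with evidence s = w = True held fixed.
random.seed(2)
P_C = 0.5
P_S = {True: 0.1, False: 0.5}   # P(+s | c)
P_R = {True: 0.8, False: 0.2}   # P(+r | c)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}   # P(+w | s, r)

def bern(p_true, val):
    return p_true if val else 1.0 - p_true

c, r = True, True              # arbitrary starting state
count = total = 0
for step in range(200_000):
    # P(c | r, +s, +w) is proportional to P(c) * P(+s|c) * P(r|c)
    pc = {cv: bern(P_C, cv) * P_S[cv] * bern(P_R[cv], r) for cv in (True, False)}
    c = random.random() < pc[True] / (pc[True] + pc[False])
    # P(r | c, +s, +w) is proportional to P(r|c) * P(+w|+s, r)
    pr = {rv: bern(P_R[c], rv) * P_W[(True, rv)] for rv in (True, False)}
    r = random.random() < pr[True] / (pr[True] + pr[False])
    if step >= 1_000:          # discard burn-in samples
        total += 1
        count += r
estimate = count / total
print(estimate)   # -> about 0.32, matching likelihood weighting
```

Unlike likelihood weighting, every variable is resampled in the light of all the evidence, upstream and downstream, so queries like P(C | +s, +r) are handled well too.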
Monty Hall Problem

P(car behind door 1)                     = 1/3
P(car behind door 2 | host opens door 3) = 2/3

Switching doubles your chance of winning.
Suppose you're on a game show, and you're given the choice of three doors:
Behind one door is a car; behind the others, goats. You pick a door, say No.
1, and the host, who knows what's behind the doors, opens another door, say
No. 3, which has a goat. He then says to you, "Do you want to pick door No.
2?" Is it to your advantage to switch your choice?
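The 1/3 vs 2/3 answer is easy to check by simulation; this sketch plays the game many times with and without switching:

```python
import random

# Monte-Carlo check of the Monty Hall answer: does switching help?
random.seed(3)

def play(switch):
    car = random.randrange(3)
    pick = random.randrange(3)
    # host opens a goat door: neither the player's pick nor the car
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

n = 100_000
wins_stay = sum(play(False) for _ in range(n)) / n
wins_switch = sum(play(True) for _ in range(n)) / n
print(wins_stay, wins_switch)   # -> about 1/3 vs 2/3: switching wins
```

The intuition matches the simulation: the first pick is right only 1/3 of the time, and in the other 2/3 of cases the host's reveal leaves the car behind the remaining door.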