Bayesian Belief Propagation for Image Understanding

Transcript and Presenter's Notes

1
Bayesian Belief Propagation for Image
Understanding
  • David Rosenberg

2
Markov Random Fields
  • Let G be an undirected graph with nodes 1, ..., n.
  • Associate a random variable X_t with each node t in G.
  • (X_1, ..., X_n) is a Markov random field on G if every r.v. is independent of its non-neighbors conditioned on its neighbors:
  • P(X_t = x_t | X_s = x_s for all s ≠ t) = P(X_t = x_t | X_s = x_s for all s ∈ N(t)), where N(t) is the set of neighbors of node t (a numerical check follows below).
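A quick numerical illustration of this definition, as a minimal sketch in Python/NumPy: the 3-node chain X_1 -- X_2 -- X_3, the pairwise factors, and all numbers are made-up assumptions (the product-of-factors form anticipates the Gibbs distribution introduced two slides later).

    import numpy as np

    # Hypothetical 3-node chain X_1 -- X_2 -- X_3 with binary variables.
    # The joint is built as a product of arbitrary pairwise factors.
    psi12 = np.array([[2.0, 1.0], [1.0, 3.0]])   # factor on (x1, x2)
    psi23 = np.array([[1.0, 4.0], [2.0, 1.0]])   # factor on (x2, x3)

    # joint[x1, x2, x3] proportional to psi12[x1, x2] * psi23[x2, x3]
    joint = np.einsum('ab,bc->abc', psi12, psi23)
    joint /= joint.sum()

    # Markov property: P(X_1 | X_2, X_3) should equal P(X_1 | X_2),
    # since node 3 is not a neighbor of node 1.
    p_x1_given_x2x3 = joint / joint.sum(axis=0, keepdims=True)
    p_x1x2 = joint.sum(axis=2)
    p_x1_given_x2 = p_x1x2 / p_x1x2.sum(axis=0, keepdims=True)
    for x3 in (0, 1):
        assert np.allclose(p_x1_given_x2x3[:, :, x3], p_x1_given_x2)
    print("X_1 is independent of X_3 given X_2")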

3
Specifying a Markov Random Field
  • It would be nice if we could just specify P(X | N(X)) for all r.v.s X (as with Bayesian networks).
  • Unfortunately, this would overspecify the joint PDF.
  • E.g., X_1 -- X_2 with both variables binary:
  • the joint PDF has 3 degrees of freedom,
  • but the conditional PDFs X_1 | X_2 and X_2 | X_1 have 2 degrees of freedom each, 4 in total (see the counting sketch below).
  • The Hammersley-Clifford Theorem helps to specify MRFs.
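The counting behind this example, as a short sketch (binary variables assumed; the numbers come directly from the table sizes):

    # Hypothetical counting for X_1 -- X_2 with binary variables.
    joint_dof = 2 * 2 - 1      # 4 table entries, minus 1 for normalization -> 3
    cond_dof = 2 * (2 - 1)     # P(X_1 | X_2): one free parameter per value of X_2 -> 2
    print(joint_dof)           # 3
    print(2 * cond_dof)        # 4: the two conditionals carry more freedom than the joint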

4
The Gibbs Distribution
  • A Gibbs distribution w.r.t. graph G is a probability mass function that can be expressed in the form
  • P(x_1, ..., x_n) = Prod_{cliques C} V_C(x_1, ..., x_n),
  • where each potential V_C(x_1, ..., x_n) depends only on those x_i with i in C.
  • We can combine potential functions into products over maximal cliques, so
  • P(x_1, ..., x_n) = Prod_{maximal cliques C} V_C(x_1, ..., x_n).
  • This may be better in certain circumstances because we don't have to specify as many potential functions (a small worked example follows below).
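A minimal sketch of the Gibbs form, assuming Python/NumPy and a hypothetical triangle graph on nodes {0, 1, 2} with binary variables; the three edge potentials and their values are made up for illustration, and one could equally well specify a single potential on the maximal clique {0, 1, 2}.

    import numpy as np
    from itertools import product

    # Three pairwise (edge) potentials on a triangle graph; values are made up.
    edges = {(0, 1): np.array([[1.0, 2.0], [2.0, 1.0]]),
             (1, 2): np.array([[3.0, 1.0], [1.0, 3.0]]),
             (0, 2): np.array([[1.0, 1.0], [4.0, 1.0]])}

    def unnormalized(x):
        """Product of clique potentials evaluated at configuration x."""
        return np.prod([V[x[s], x[t]] for (s, t), V in edges.items()])

    # Normalize by summing over all configurations (feasible only for tiny graphs).
    configs = list(product([0, 1], repeat=3))
    Z = sum(unnormalized(x) for x in configs)
    P = {x: unnormalized(x) / Z for x in configs}
    print(sum(P.values()))   # 1.0 -- a valid joint PMF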

5
Hammersley-Clifford Theorem
  • Let the r.v.s X_j have a positive joint probability mass function.
  • Then the Hammersley-Clifford Theorem says that (X_1, ..., X_n) is a Markov random field on graph G iff it has a Gibbs distribution w.r.t. G.
  • Side note: Hammersley and Clifford discovered this theorem in 1971, but they didn't publish it because they kept thinking they should be able to remove or relax the positivity assumption. They couldn't. Clifford published the result in 1990.
  • Specifying the potential functions is equivalent to specifying the joint probability distribution of all variables.
  • Now it's easy to specify a valid MRF,
  • but it's still not easy to determine the degrees of freedom in the distribution (normalization).

6
A Typical MRF Vision Problem
  • We have
  • hidden scene variables X_j
  • observed image variables Y_j
  • Given X_j, Y_j is independent of everything else.
  • Show picture.
  • The problems: given some instantiations of the Y_j's,
  • find the a posteriori distribution over the X_j's,
  • find the MAP estimate for each X_j,
  • find the least squares estimate of each X_j.

7
Straightforward Exact Inference
  • Given the joint PDF
  • (typically specified using potential functions),
  • we can just marginalize to get the a posteriori distribution for each X_j (see the brute-force sketch below).
  • From it we can immediately extract the
  • MAP estimate -- just the mode of the a posteriori distribution
  • least squares estimate -- just the expected value of the a posteriori distribution
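A brute-force sketch of this, assuming the setup of the previous slide: a hypothetical chain of 4 hidden binary variables X_j with noisy binary observations Y_j; the potentials, observation model, and observed values are made up for illustration.

    import numpy as np
    from itertools import product

    n = 4
    psi = np.array([[2.0, 1.0], [1.0, 2.0]])   # pairwise smoothness on (x_j, x_{j+1})
    phi = np.array([[0.8, 0.2], [0.2, 0.8]])   # observation model P(y_j | x_j)
    y = [0, 0, 1, 1]                           # observed image values

    def unnormalized_posterior(x):
        p = np.prod([phi[x[j], y[j]] for j in range(n)])
        return p * np.prod([psi[x[j], x[j + 1]] for j in range(n - 1)])

    configs = list(product([0, 1], repeat=n))
    weights = np.array([unnormalized_posterior(x) for x in configs])
    weights /= weights.sum()

    # Marginal posterior for each X_j, then MAP (mode) and least-squares (mean) estimates.
    for j in range(n):
        marginal = np.zeros(2)
        for w, x in zip(weights, configs):
            marginal[x[j]] += w
        print(f"X_{j}: posterior {marginal}, MAP {marginal.argmax()}, "
              f"LS {marginal @ np.array([0.0, 1.0]):.3f}")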

8
Inference by Message Passing
  • The resulting a posteriori distributions are exact for graphs without loops (Pearl?).
  • Weiss and Freeman show that for Gaussian models on arbitrary graph topologies, when belief propagation converges, it gives the correct least squares estimate (i.e. the posterior mean).
  • More results?

9
Derivation of belief propagation
10
The posterior factorizes
11
Propagation rules
12
Propagation rules
13
Belief, and message updates
[Figure: belief and message updates between neighboring nodes i and j]
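As a hedged sketch of what such updates typically look like (the standard discrete sum-product form with pairwise potentials psi_ij and local evidence phi_i; the slides' own derivation may differ in notation):

    import numpy as np

    # psi[(i, j)]: pairwise potential indexed [x_i, x_j] for each ordered neighbor pair;
    # phi[i]: local evidence vector at node i; messages[(i, j)]: current message i -> j.

    def update_message(i, j, psi, phi, messages, neighbors):
        """m_{i->j}(x_j) = sum over x_i of phi_i(x_i) * psi_{ij}(x_i, x_j)
           * prod over k in N(i), k != j, of m_{k->i}(x_i)."""
        prod = phi[i].copy()
        for k in neighbors[i]:
            if k != j:
                prod *= messages[(k, i)]
        m = psi[(i, j)].T @ prod      # sum over x_i
        return m / m.sum()            # normalize for numerical stability

    def belief(i, phi, messages, neighbors):
        """b_i(x_i) proportional to phi_i(x_i) * prod over k in N(i) of m_{k->i}(x_i)."""
        b = phi[i].copy()
        for k in neighbors[i]:
            b *= messages[(k, i)]
        return b / b.sum()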
14
Optimal solution in a chain or tree: Belief Propagation
  • "Do the right thing" Bayesian algorithm.
  • For Gaussian random variables over time: the Kalman filter.
  • For hidden Markov models: the forward/backward algorithm (and the MAP variant is Viterbi).

15
No factorization with loops!
16
First Toy Examples
  • Show messages passed and beliefs at each stage.
  • Show convergence in x steps (see the toy run below).
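A self-contained toy run, assuming a hypothetical 3-node chain X_1 -- X_2 -- X_3 of binary variables with made-up potentials and local evidence; it prints the messages and beliefs at each stage, and on this loop-free chain the messages stop changing after the two sweeps needed to reach across the graph.

    import numpy as np

    psi = np.array([[2.0, 1.0], [1.0, 2.0]])                 # same potential on both edges
    phi = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    messages = {(i, j): np.ones(2) for i in neighbors for j in neighbors[i]}

    for step in range(1, 5):
        new = {}
        for (i, j) in messages:                              # synchronous update of all messages
            prod = phi[i].copy()
            for k in neighbors[i]:
                if k != j:
                    prod *= messages[(k, i)]
            m = psi.T @ prod
            new[(i, j)] = m / m.sum()
        converged = all(np.allclose(new[e], messages[e]) for e in messages)
        messages = new
        beliefs = []
        for i in neighbors:                                  # belief = evidence times incoming messages
            b = phi[i].copy()
            for k in neighbors[i]:
                b *= messages[(k, i)]
            beliefs.append(b / b.sum())
        print(f"step {step}: converged = {converged}")
        print(f"  messages: {messages}")
        print(f"  beliefs:  {beliefs}")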

17
Where does Evidence Fit In?
18
The Cost Functional Approach
  • We can state the solution to many problems in terms of minimizing a cost functional.
  • How can we put this in our MRF framework? (One common mapping is sketched below.)
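One common mapping (an assumption here, not necessarily the construction on the later cost-functional slides) is to exponentiate the negated cost terms, so that minimizing the cost corresponds to maximizing the probability of a pairwise MRF; the quadratic data and smoothness costs and the weight lam below are made up for illustration.

    import numpy as np

    # Cost functional: J(x) = sum_j (x_j - y_j)^2 + lam * sum_j (x_j - x_{j+1})^2
    # Potentials:      P(x | y) proportional to exp(-J(x)), which factors into
    #                  local (data) and pairwise (smoothness) potentials.
    states = np.array([0.0, 1.0, 2.0])   # a small discrete label set
    lam = 0.5

    def local_potential(y_j):
        """phi_j(x_j) = exp(-(x_j - y_j)^2) -- the data (evidence) term."""
        return np.exp(-(states - y_j) ** 2)

    def pairwise_potential():
        """psi(x_j, x_k) = exp(-lam * (x_j - x_k)^2) -- the smoothness term."""
        diff = states[:, None] - states[None, :]
        return np.exp(-lam * diff ** 2)

    print(local_potential(1.0))
    print(pairwise_potential())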

19
Slide on Weiss's Interior/Exterior Example
  • Show graphs of convergence speed.

20
Slide on Weiss's Motion Detection
21
My own computer example taking the cost
functional approach
22
Discussion of complexity issues with message
passing
  • How long are the messages?
  • How many messages do we have to pass per iteration?
  • How many iterations until convergence?
  • The problem quickly becomes intractable.

23
Mention some approximate inference approaches
24
Slides on message passing with jointly Gaussian distributions???
25
EXTRA SLIDES
26
Incorporating Evidence nodes into MRFs
  • We would like to have nodes that don't change their beliefs -- they are just observations.
  • Can we do this via the potential functions on the non-maximal clique containing just that node?
  • I think this is what they do in the Yair Weiss implementation.
  • What if we don't want to specify a potential function? Make it identically one, since it's in a product (see the sketch below).
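A minimal sketch of that convention for discrete variables (whether it matches the Yair Weiss implementation exactly is an assumption): an observed node contributes a single-node potential that is an indicator of its observed value, while an unobserved node contributes the constant potential 1, which changes nothing since it enters a product.

    import numpy as np

    num_states = 3

    def evidence_potential(observed_value=None):
        """Single-node potential: indicator for an observed node, all-ones otherwise."""
        if observed_value is None:
            return np.ones(num_states)    # identically one: no evidence
        e = np.zeros(num_states)
        e[observed_value] = 1.0           # pins the node to its observed value
        return e

    print(evidence_potential())    # [1. 1. 1.]
    print(evidence_potential(2))   # [0. 0. 1.]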

27
From cost functional to transition matrix
28
From cost functional to update rule
29
From update rule to transition matrix
30
The factorization into pairwise potentials -- good for general Markov networks
31
Other Stuff
  • For shorthand, we will write x = (x_1, ..., x_n).