BAE 790I / BMME 231 Fundamentals of Image Processing Class 21

1
BAE 790I / BMME 231 Fundamentals of Image
Processing, Class 21
  • Introduction to MAP estimation
  • Gaussian Priors
  • Gibbs Priors
  • Experimental MAP Restoration

2
Recipe: Statistical Restoration
  • Choose a criterion for determining which solution
    is better than another
  • MMSE, WLS, LS, ML, MAP, ME, others
  • Express the criterion mathematically
  • Solve for the estimate that optimizes the
    criterion (the algorithm)
  • Direct solution (not always possible)
  • Gradient methods (Steepest descent, Conjugate
    gradient)
  • Gauss-Seidel
  • Simulated annealing, Metropolis algorithm

Criteria and algorithms are more or less
interchangeable, but the resulting optimization
problem may be easier or harder to solve
numerically. A minimal sketch of a gradient-based
solver follows.
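As a rough illustration of the gradient-methods
entry above, here is a minimal steepest-descent
loop for a plain least-squares criterion. The
function name, the matrix H, and the step-size
choice are placeholders for this sketch, not the
course's specific problem.

    import numpy as np

    def steepest_descent_ls(H, g, n_iters=50, step=None):
        """Minimize ||g - H f||^2 by steepest descent (illustrative sketch)."""
        f = np.zeros(H.shape[1])
        if step is None:
            # Safe fixed step for a quadratic objective:
            # 1 / (largest eigenvalue of H^T H).
            step = 1.0 / np.linalg.norm(H, 2) ** 2
        for _ in range(n_iters):
            grad = -2.0 * H.T @ (g - H @ f)   # gradient of ||g - H f||^2
            f = f - step * grad               # move downhill
        return f

Swapping in a different criterion (WLS, ML, MAP,
and so on) changes only the objective and
therefore the gradient; the loop itself stays the
same.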
3
Maximum A Posteriori
  • We looked at the likelihood function, P(g|f)
  • How about P(f|g)? The probability that some f
    resulted in our measurement g.
  • Now, f is random, so it needs some probability
    distribution.
  • ML: f is deterministic, unknown
  • MAP: f is random, unknown, but something is known
    about the random process from which it is drawn.

4
Maximum A Posteriori
  • From Bayes' rule,
    P(f|g) = P(g|f) P(f) / P(g)
  • Taking logarithms,
    log P(f|g) = log P(g|f) + log P(f) - log P(g)
                 (log likelihood)   (log prior)
  • The log P(g) term does not depend on f, so it
    goes away when maximizing over f.
5
Maximum A Posteriori
  • Consider the prior P(f)
  • This indicates that some images f are more likely
    than others.
  • Example: I expect a hand X-ray to look like a
    hand and not a retina. Thus, images with hand
    bones are more likely in P(f).
  • Example: I expect the true image not to be noisy.
    Thus, smooth images are more likely than noisy
    ones in P(f).

6
Maximum A Posteriori
  • Assume that the prior P(f) is Gaussian-distributed
  • Now we have to specify a mean image m and the
    autocovariance about the mean image (a sketch of
    this form follows)

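As a sketch, writing the mean image as m and the
autocovariance as Kf (the symbol Kf is assumed
here, not taken from the slides), a Gaussian prior
has the form

    P(f) ∝ exp[ -(1/2) (f - m)^T Kf^{-1} (f - m) ]
    log P(f) = -(1/2) (f - m)^T Kf^{-1} (f - m) + const

so the log prior penalizes images that deviate
from the mean image m, weighted by the inverse
covariance.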
7
Maximum A Posteriori
  • The MAP objective function is the sum of the log
    likelihood and the log prior. (The log of P(g)
    does not depend on f and will go away.)
  • If the noise is zero-mean and Gaussian-distributed,
    the objective takes the form sketched below.

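A sketch of that objective, assuming a linear
model g = Hf + n with noise covariance Kn and the
Gaussian prior above (H and Kn are notation
assumed here):

    Φ(f) = log P(g|f) + log P(f)
         = -(1/2) (g - Hf)^T Kn^{-1} (g - Hf)
           - (1/2) (f - m)^T Kf^{-1} (f - m) + const

Maximizing Φ(f) trades off fidelity to the data g
against closeness to the prior mean m.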
8
Maximum A Posteriori
  • Follow the usual procedure: take the derivative
    of the objective function and set it to zero
  • This is the MAP solution for Gaussian-distributed
    noise and a Gaussian-distributed prior (a sketch
    follows below)
  • Note that it is biased.

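Setting the gradient of Φ(f) to zero gives, under
the same assumed notation, a closed form along
these lines (a sketch, not necessarily the exact
expression on the original slide):

    (H^T Kn^{-1} H + Kf^{-1}) f̂ = H^T Kn^{-1} g + Kf^{-1} m
    f̂ = (H^T Kn^{-1} H + Kf^{-1})^{-1} (H^T Kn^{-1} g + Kf^{-1} m)

The pull toward the prior mean m is what makes the
estimate biased.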
9
Maximum A Posteriori
  • If the prior also has zero mean, the solution
    simplifies (see below), which is another way to
    write the Wiener filter!
  • The MMSE estimate is the same as the MAP estimate
    under certain assumptions:
  • Gaussian-distributed, zero-mean noise
  • Gaussian-distributed, zero-mean prior
  • This is not always true: MMSE is the mean of the
    posterior, while MAP is the mode of the posterior.

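With m = 0 the closed form above reduces to a
sketch like

    f̂ = (H^T Kn^{-1} H + Kf^{-1})^{-1} H^T Kn^{-1} g

which is one standard way of writing the Wiener
filter (same caveat: H, Kn, and Kf are assumed
notation).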
10
Maximum A Posteriori
  • Back to the general form
  • Let the prior be stationary and uncorrelated, as
    sketched below (Is this a good assumption?)

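If the prior is stationary and uncorrelated, its
covariance is a scaled identity, say Kf = σf² I,
and with white noise of variance σn² (both symbols
are assumptions here) the objective becomes, up to
constants and scaling, something like

    Φ(f) ∝ -||g - Hf||² - b ||f - m||²,   with b = σn²/σf²

A single scalar b then controls how strongly the
solution is pulled toward the prior mean m.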
11
Maximum A Posteriori
  • The MAP solution is somewhere between the ML
    solution and the maximum of the prior.
  • b adjusts the solution between these two, as in
    the numerical sketch below
  • Any prior generally gives a MAP solution with an
    adjustable parameter like b

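A minimal numerical sketch of this trade-off,
assuming the simplified objective above (white
noise, identity-scaled prior covariance); the
function name and arguments are illustrative:

    import numpy as np

    def map_gaussian_prior(H, g, m, b):
        """MAP estimate for Gaussian noise and a Gaussian prior with mean m.

        Solves (H^T H + b I) f = H^T g + b m, the normal equations of
        ||g - H f||^2 + b ||f - m||^2.
        """
        n = H.shape[1]
        A = H.T @ H + b * np.eye(n)
        rhs = H.T @ g + b * m
        return np.linalg.solve(A, rhs)

Sweeping b moves the estimate continuously from
the ML/least-squares solution (b near 0) toward
the prior mean m (large b).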
12
Maximum A Posteriori
  • What is wrong with this approach to a MAP
    solution?

13
Maximum A Posteriori
  • What is wrong with this approach to a MAP
    solution?
  • If m is wrong, we are pushing our solution toward
    the wrong answer!

14
Maximum A Posteriori
  • Solution: Don't use a Gaussian-distributed prior
  • Consider a Gibbs prior, another name for a Markov
    Random Field (MRF)

    P(f) ∝ exp[ - Σ_ij w_ij V_ij(f) ]
    w_ij : weight of clique ij
    V_ij : energy of clique ij
15
Gibbs Prior
  • A clique is a group of pixels defined on a pixel
    lattice with an associated energy function, V
  • V is small when the pixels in the clique have the
    desired property
  • Example: Local smoothness. Use two-pixel cliques
    of neighboring pixels and let V(fi, fj) =
    (fi - fj)² (see the sketch below)

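A small sketch of this local-smoothness energy,
using horizontal and vertical two-pixel cliques on
an image array (names are illustrative):

    import numpy as np

    def smoothness_energy(f):
        """Sum of V(fi, fj) = (fi - fj)^2 over neighboring-pixel cliques."""
        f = np.asarray(f, dtype=float)
        dh = f[:, 1:] - f[:, :-1]   # horizontal neighbor differences
        dv = f[1:, :] - f[:-1, :]   # vertical neighbor differences
        return np.sum(dh ** 2) + np.sum(dv ** 2)

The corresponding prior would be something like
P(f) ∝ exp(-b * smoothness_energy(f)), so
maximizing the log prior means minimizing this
energy.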
16
Maximum A Posteriori
  • Using the local-smoothness Gibbs prior, the MAP
    objective takes the form sketched below.
  • The prior does not have an explicit mean. Its
    maximum is any image where all pixels have the
    same intensity.

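A sketch of that objective, again assuming the
linear model g = Hf + n with white Gaussian noise
of variance σn² (assumed notation):

    Φ(f) = -(1/(2σn²)) ||g - Hf||² - b Σ_(neighbors i,j) (fi - fj)²

There is no mean image to get wrong; the prior
only asks that neighboring pixels be similar, with
b controlling how strongly.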
17
Notes on Gibbs Priors
  • Any energy function can be used for any clique.
  • Different energy functions produce different
    properties of smoothing.
  • Any number of pixels may be in a clique, though
    usually two at a time are used.
  • A pixel may be a member of any number of cliques.

18
Notes on MAP Estimation
  • Most priors make the problem nonlinear and
    require iterative methods to solve (a minimal
    gradient-descent sketch follows this list).
  • Examples of priors:
  • Using image information from one medical imaging
    modality (CT) to specify object boundaries in
    another (MR, PET)
  • Priors specifying that neighboring pixels should
    not differ unless they constitute an edge
    (edge-preserving priors).

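As an illustration of such an iterative method,
here is a steepest-descent sketch for the
quadratic smoothness prior above (for which the
problem is actually still linear; an
edge-preserving prior would only change the
penalty gradient). H, the step size, and the
iteration count are placeholders:

    import numpy as np

    def map_gradient_descent(H, g, b, n_iters=100, step=1e-3):
        """Steepest descent on the negated MAP objective
        ||g - H f||^2 + b * sum over neighbor pairs of (fi - fj)^2.

        f is a 2-D image; H acts on its flattened version. All names
        and default values are illustrative, not from the slides.
        """
        side = int(np.sqrt(H.shape[1]))        # assume a square image
        f = np.zeros((side, side))
        for _ in range(n_iters):
            # Gradient of the data term, reshaped to image form.
            resid = g - H @ f.ravel()
            grad = (-2.0 * H.T @ resid).reshape(side, side)
            # Gradient of the smoothness penalty: each pixel is pushed
            # toward the average of its neighbors.
            pad = np.pad(f, 1, mode="edge")
            neighbor_sum = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                            pad[1:-1, :-2] + pad[1:-1, 2:])
            grad += 2.0 * b * (4.0 * f - neighbor_sum)
            f = f - step * grad                # descend
        return f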
19
ML Steepest Descent - Blur 2
SNR = 24.2 dB
(Figure: ML restored images at iterations 0, 1, 2, 5, 10, 20)
20
MAP Steepest Descent - Blur 2
Generalized Gibbs prior
(Figure: MAP restored images at iterations 0, 1, 2, 5, 10, 20)
21
NMSE vs. Iterations

(Figure: NMSE vs. iteration number; legend text: DSL)
22
Noise variance vs. Iterations

(Figure: noise variance vs. iteration number; legend text: DSL)