1
  • Scott Morris
  • Department of Computer Science

2
Basic Premise
  • Given a set of observed data, X, what is the
    underlying model that produced X?
  • Example distributions: Gaussian, Poisson,
    Uniform
  • Assume we know (or can intuit) what type of model
    produced the data
  • The model has m parameters (θ1, ..., θm)
  • The parameters are unknown; we would like to
    estimate them
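As a concrete setup, here is a minimal sketch, assuming a 1-D Gaussian model with illustrative "true" parameter values (not taken from the slides):

```python
import numpy as np

# Hidden "true" parameters (illustrative values): in practice these are
# exactly the unknowns we want to estimate from the data.
true_mu, true_sigma = 5.0, 2.0

rng = np.random.default_rng(0)
X = rng.normal(true_mu, true_sigma, size=1000)  # observed data X

# The estimation problem: given only X and the assumption that the model
# is Gaussian, recover the parameters theta = (mu, sigma).
```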

3
Maximum Likelihood Estimators (MLE)
  • P(θ|X): the probability that a given set of
    parameters is correct?
  • Instead, define the likelihood of the parameters
    given the data, L(θ|X) = p(X|θ)
  • What if the data are continuous? (Use densities
    in place of probabilities.)
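A minimal sketch of evaluating L(θ|X) for continuous data, reusing the Gaussian setup above (a small sample, since a raw product of densities underflows quickly):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) evaluated at x: for continuous data the
    # density replaces the probability mass.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=20)  # small observed sample

def likelihood(X, mu, sigma):
    # L(theta | X) for i.i.d. data: the product of per-point densities.
    return np.prod(gaussian_pdf(X, mu, sigma))

print(likelihood(X, 5.0, 2.0))  # near the generating parameters: larger
print(likelihood(X, 0.0, 1.0))  # poor parameters: (much) smaller
```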

4
MLE continued
  • We are solving an optimization problem
  • Often we maximize the log of the likelihood
    instead
  • Why is this the same? The log is monotonically
    increasing, so it preserves the maximizer (and
    turns products into sums)
  • Any method that maximizes the likelihood function
    is called a Maximum Likelihood Estimator
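A minimal sketch of maximizing the log-likelihood numerically, assuming the same Gaussian model; for the Gaussian the MLE also has a closed form, shown for comparison:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=1000)  # observed data

def neg_log_likelihood(theta, X):
    mu, log_sigma = theta            # optimize log(sigma) so sigma > 0
    sigma = np.exp(log_sigma)
    # Sum of log-densities: numerically stable where the raw product underflows.
    return -np.sum(-0.5 * ((X - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(X,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])

print(mu_hat, sigma_hat)    # numeric optimum
print(X.mean(), X.std())    # closed-form Gaussian MLE: sample mean / std
```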

5
Simple Example: Least Squares Fit
  • Input: N points in R²
  • Model: a single line, y = ax + b
  • Parameters: a, b
  • Origin? Least squares is the Maximum Likelihood
    Estimator under the assumption of Gaussian noise
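A minimal sketch on illustrative data; with Gaussian noise assumed, minimizing squared error and maximizing the likelihood pick out the same line:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)  # y = ax + b plus Gaussian noise

# Closed-form least squares fit: under Gaussian noise this is exactly the
# maximum likelihood estimate of (a, b).
a_hat, b_hat = np.polyfit(x, y, deg=1)
print(a_hat, b_hat)  # close to the generating a = 2.0, b = 1.0
```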

6
Expectation Maximization
  • An elaborate technique for maximizing the
    likelihood function
  • Often used when the observed data is incomplete
  • Due to problems in the observation process
  • Due to unknown or difficult distribution
    function(s)
  • An iterative process
  • Still a local technique: it converges to a local
    maximum of the likelihood

7
EM likelihood function
  • Observed data X; assume missing data Y
  • Let Z = (X, Y) be the complete data
  • Joint density function:
  • p(z|θ) = p(x, y|θ) = p(y|x, θ) p(x|θ)
  • Define a new likelihood function L(θ|Z) = p(X, Y|θ)
  • X and θ are constants, so L() is a random variable
    dependent on the random variable Y
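Expanded per sample (a standard restatement, assuming the data are i.i.d.), the complete-data log-likelihood splits into a sum:

```latex
\log L(\theta \mid Z) = \log p(X, Y \mid \theta)
  = \sum_{i=1}^{N} \log p(x_i, y_i \mid \theta)
  = \sum_{i=1}^{N} \left[ \log p(y_i \mid x_i, \theta) + \log p(x_i \mid \theta) \right]
```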

8
E Step of EM Algorithm
  • Since L(θ|Z) is itself a random variable, we can
    compute its expected value, given the observed
    data X and the current parameter estimate
  • This can be thought of as computing the expected
    value of Y given the current estimate of θ
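A minimal sketch of the E step for a two-component 1-D Gaussian mixture (an illustrative model choice, not taken from the slides): the expectation reduces to a "responsibility" for each point and component:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def e_step(X, weights, mus, sigmas):
    # r[i, k]: posterior probability that component k generated point x_i,
    # computed under the current parameter estimates (the expected value
    # of the indicator Y for each point).
    r = np.column_stack([
        w * gaussian_pdf(X, mu, sigma)
        for w, mu, sigma in zip(weights, mus, sigmas)
    ])
    return r / r.sum(axis=1, keepdims=True)
```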

9
M step of EM Algorithm
  • Once we have the expectation computed, optimize θ
    using the MLE
  • Convergence: various results proving convergence
    are cited
  • Generalized EM: instead of finding the optimal θ,
    choose any θ that increases the expected
    likelihood
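Continuing the illustrative mixture, a minimal sketch of the M step: holding the responsibilities fixed, the maximizing parameters have closed forms (weighted versions of the usual Gaussian MLEs):

```python
import numpy as np

def m_step(X, r):
    # N_k: effective number of points assigned to each component.
    N_k = r.sum(axis=0)
    weights = N_k / X.size                    # mixing proportions
    mus = (r * X[:, None]).sum(axis=0) / N_k  # responsibility-weighted means
    sigmas = np.sqrt(
        (r * (X[:, None] - mus) ** 2).sum(axis=0) / N_k
    )                                         # weighted standard deviations
    return weights, mus, sigmas
```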

10
Mixture Models
  • Assume a mixture of probability distributions
  • The log-likelihood function is difficult to
    optimize directly, so we use a trick (see the
    sketch after this list)
  • Assume unobserved data items Y whose values tell
    us which distribution generated each item in X
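Putting the pieces together, a minimal sketch of the full EM loop for the illustrative two-component Gaussian mixture, reusing e_step and m_step from the sketches above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data from a two-component Gaussian mixture.
X = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# Initial parameter guesses.
weights = np.array([0.5, 0.5])
mus = np.array([-1.0, 1.0])
sigmas = np.array([1.0, 1.0])

for _ in range(100):
    r = e_step(X, weights, mus, sigmas)  # E step: expected assignments Y
    weights, mus, sigmas = m_step(X, r)  # M step: re-estimate theta

print(weights, mus, sigmas)  # should approach the generating parameters
```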

11
Update Equations
  • After much derivation, we get estimates for the
    new parameters in terms of the old ones
  • θ = (µ, Σ)
  • where µ is the mean and Σ is the covariance matrix
    of a d-dimensional normal distribution
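For a Gaussian mixture, these updates take the standard form below, written for component k with responsibilities r_{ik} from the E step (a standard result, stated here as a sketch rather than copied from the slides):

```latex
\pi_k = \frac{1}{N} \sum_{i=1}^{N} r_{ik}, \qquad
\mu_k = \frac{\sum_{i=1}^{N} r_{ik}\, x_i}{\sum_{i=1}^{N} r_{ik}}, \qquad
\Sigma_k = \frac{\sum_{i=1}^{N} r_{ik}\, (x_i - \mu_k)(x_i - \mu_k)^{\mathsf{T}}}{\sum_{i=1}^{N} r_{ik}}
```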