Maximum Likelihood Estimation
Provided by: TatjanaD
1
Chapter 17
  • Maximum Likelihood Estimation

2
17.2 The Likelihood Function; Identification of the Parameters
  • Population model: Y ~ f(y; θ)
  • Random sampling: Y1, …, Yn i.i.d. ~ Y
  • Joint pdf of the n i.i.d. observations: f(y1, …, yn; θ) = ∏i f(yi; θ)
  • Likelihood function: L(θ | y) = ∏i=1..n f(yi; θ)

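As a concrete illustration (hypothetical, not from the slides): for a Poisson model, the likelihood of an i.i.d. sample is the product of the individual densities, and it is larger at parameter values that fit the data well.

```python
import math

def poisson_pmf(y, lam):
    # f(y; theta) for a single observation, here Poisson with theta = lam
    return math.exp(-lam) * lam ** y / math.factorial(y)

def likelihood(lam, ys):
    # L(theta | y) = product over the i.i.d. sample of f(y_i; theta)
    L = 1.0
    for y in ys:
        L *= poisson_pmf(y, lam)
    return L

ys = [2, 1, 3, 0, 2]  # hypothetical sample with mean 1.6
print(likelihood(1.6, ys) > likelihood(4.0, ys))  # True: the data are more likely near the sample mean
```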
3
  • Log-likelihood function (easier for calculations): ln L(θ | y) = Σi=1..n ln f(yi; θ)
  • Before we go on with the estimation of parameters, we have to check whether
    this is possible:
  • the parameters should be identified (estimable)
  • Definition 17.1:
  • The parameter vector θ is identified if for any other parameter
    vector θ* ≠ θ and for some data y, L(θ* | y) ≠ L(θ | y).

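Why the log is "easier for calculations": the product of densities becomes a sum of log-densities. A small numerical check, using a hypothetical Poisson sample (not from the slides):

```python
import math

ys = [2, 1, 3, 0, 2]  # hypothetical sample
lam = 1.6

def log_poisson_pmf(y, lam):
    # ln f(y; lam) for a Poisson density, computed directly
    return -lam + y * math.log(lam) - math.log(math.factorial(y))

loglik = sum(log_poisson_pmf(y, lam) for y in ys)               # ln L = sum of ln f(y_i)
lik = math.prod(math.exp(log_poisson_pmf(y, lam)) for y in ys)  # L itself, as a product
print(abs(loglik - math.log(lik)) < 1e-9)  # True: ln L(theta|y) = sum_i ln f(y_i; theta)
```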
4
17.3 Efficient Estimation: The Principle of
Maximum Likelihood
  • To find an MLE, maximize the likelihood or
    log-likelihood function with respect to θ:
  • solve ∂L(θ | y)/∂θ = 0 or ∂ ln L(θ | y)/∂θ = 0 for θ̂
  • Examples: Poisson (book, p. 471), Normal (book, p. 472),
    Geometric (not in book)

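As an illustration of the principle (not from the slides): for the Poisson example, the log-likelihood can be maximized numerically, and the maximizer matches the known closed form λ̂ = ȳ.

```python
import math

ys = [3, 1, 4, 2, 0, 2]  # hypothetical i.i.d. Poisson sample, mean 2.0

def loglik(lam):
    # ln L(lam | y) = sum_i [ -lam + y_i ln(lam) - ln(y_i!) ]
    return sum(-lam + y * math.log(lam) - math.log(math.factorial(y)) for y in ys)

# crude grid maximization; the analytic MLE for the Poisson is the sample mean
lam_hat = max((0.01 * k for k in range(1, 1001)), key=loglik)
print(abs(lam_hat - sum(ys) / len(ys)) < 0.01)  # True: lam_hat is (about) 2.0
```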
5
Normal: Y ~ N(µ, σ²), Y1, …, Yn i.i.d. ~ Y
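The Normal derivation (book, p. 472) leads to the closed forms µ̂ = ȳ and σ̂² = (1/n) Σ (yi − ȳ)². A minimal numerical sketch with made-up data:

```python
ys = [2.0, 4.0, 6.0]  # hypothetical sample
n = len(ys)
mu_hat = sum(ys) / n                                 # MLE of mu: the sample mean
sigma2_hat = sum((y - mu_hat) ** 2 for y in ys) / n  # MLE of sigma^2: divides by n, not n-1
print(mu_hat, sigma2_hat)  # 4.0 2.6666666666666665
```

Note the 1/n divisor: the MLE of σ² is biased in small samples, unlike the usual 1/(n−1) sample variance.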
7
Geometric: Y ~ Geo(p), Y1, …, Yn i.i.d. ~ Y,
P(Y = y | p) = p(1 − p)^(y−1)
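Setting the derivative of ln L(p | y) = Σ [ln p + (yi − 1) ln(1 − p)] to zero gives the closed form p̂ = 1/ȳ. A small check with hypothetical data:

```python
import math

ys = [1, 3, 2, 2]          # hypothetical numbers of trials until the first success
p_hat = len(ys) / sum(ys)  # closed-form MLE: p_hat = 1 / sample mean

def loglik(p):
    # ln L(p | y) = sum_i [ ln p + (y_i - 1) ln(1 - p) ]
    return sum(math.log(p) + (y - 1) * math.log(1 - p) for y in ys)

# p_hat beats neighbouring values, as the first-order condition predicts
print(p_hat, loglik(p_hat) > max(loglik(0.4), loglik(0.6)))  # 0.5 True
```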
8
17.4 Properties of MLEs
  • Characteristics of the log-likelihood
    (regularity conditions must hold, Def. 17.3)
  • - for one observation i:
  • gi = ∂ ln f(yi; θ)/∂θ  (score r.v.)
  • Hi = ∂² ln f(yi; θ)/∂θ ∂θ′  (Hessian r.v.)

9
- for the whole sample:
E[∂ ln L(θ0 | y)/∂θ0] = 0  (likelihood equation)
Var[∂ ln L(θ0 | y)/∂θ0] = −E[∂² ln L(θ0 | y)/∂θ0 ∂θ0′]  (information matrix equality)
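Both properties can be verified numerically for a single observation from a Poisson model (an illustration, not from the slides): the score has expectation zero, and its variance equals minus the expected Hessian.

```python
import math

lam0 = 2.0  # true parameter theta_0 for a Poisson model

def pmf(y):    # f(y; theta_0)
    return math.exp(-lam0) * lam0 ** y / math.factorial(y)

def score(y):  # g = d ln f / d lam, evaluated at theta_0
    return y / lam0 - 1.0

def hess(y):   # H = d^2 ln f / d lam^2, evaluated at theta_0
    return -y / lam0 ** 2

support = range(60)  # truncate the support; the omitted tail mass is negligible
E_g = sum(pmf(y) * score(y) for y in support)
E_H = sum(pmf(y) * hess(y) for y in support)
Var_g = sum(pmf(y) * score(y) ** 2 for y in support) - E_g ** 2
print(abs(E_g) < 1e-9, abs(Var_g + E_H) < 1e-9)  # True True
```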
10
  • consistency
  • - Assume Y ~ f(y; θ0), Y1, …, Yn i.i.d. ~ Y (random
    sampling)
  • - since θ̂ is an MLE, ln L(θ̂ | y) ≥ ln L(θ | y) for any θ
  • - L(θ | y)/L(θ0 | y) is a r.v.
  • - Jensen's inequality: E[g(x)] < g(E[x]) if g
    is strictly concave
  • - hence E0[ln(L(θ | y)/L(θ0 | y))] < ln E0[L(θ | y)/L(θ0 | y)] = ln 1 = 0, i.e.
  • E0[ln L(θ0 | y)] > E0[ln L(θ | y)] for θ ≠ θ0  (likelihood inequality)

11
- for any θ, including θ̂: (1/n) ln L(θ | y) = (1/n) Σi ln f(yi; θ) is the sample mean of n
i.i.d. r.v.s
- by Khinchine's theorem (weak law of large numbers) it converges in probability to its expectation, and by the
likelihood inequality that expectation is strictly largest at θ0
- but we know (1/n) ln L(θ̂ | y) ≥ (1/n) ln L(θ0 | y), so plim θ̂ = θ0
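The likelihood inequality can be made concrete with a Bernoulli model (a hypothetical illustration, not from the slides): the expected log-density under the true p0 is maximized exactly at p0.

```python
import math

p0 = 0.6  # true parameter of a Bernoulli model

def expected_loglik(p):
    # E0[ln f(Y; p)] for Bernoulli: p0 ln p + (1 - p0) ln(1 - p)
    return p0 * math.log(p) + (1 - p0) * math.log(1 - p)

best = max((k / 100 for k in range(1, 100)), key=expected_loglik)
print(abs(best - p0) < 1e-9)  # True: the expectation peaks at the true value
```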
12
  • asymptotic Normality
  • - by definition, g(θ̂) = ∂ ln L(θ̂ | y)/∂θ̂ = 0
  • - using a second-order Taylor expansion around
    θ0 and the mean value theorem: g(θ̂) = g(θ0) + H(θ̄)(θ̂ − θ0) = 0
  • - rearranging and multiplying by n^(1/2):
  • n^(1/2)(θ̂ − θ0) = [−H(θ̄)]^(−1) n^(1/2) g(θ0)
  • - dividing H and g by n: n^(1/2)(θ̂ − θ0) = [−H(θ̄)/n]^(−1) n^(1/2) [g(θ0)/n]

13
- using the CLT and the information matrix equality
Var(g) = −E(H): n^(1/2) [g(θ0)/n] →d N(0, −E[H(θ0)/n])
- we then also have n^(1/2)(θ̂ − θ0) →d N(0, {−E[H(θ0)/n]}^(−1))
14
  • asymptotic efficiency
  • - Thrm. 17.4 (Cramér–Rao lower bound)
  • The asymptotic variance of a consistent and
    asymptotically
  • normally distributed estimator of the
    parameter vector θ0
  • will always be at least as large as [I(θ0)]^(−1) = {−E[∂² ln L(θ0)/∂θ0 ∂θ0′]}^(−1)
  • - asy.Var(MLE) = CRLB, i.e. the MLE has the smallest
    covariance matrix
  • - the MLE is consistent
  • - the MLE is asymptotically Normal
  • ⇒ the MLE is asymptotically efficient

15
  • Invariance
  • - The MLE of γ0 = c(θ0) is c(θ̂) if c(θ0) is a continuous and
  • continuously differentiable function.
  • - A function of an MLE is itself the MLE of that function of the parameter.

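For the geometric example, invariance means the MLE of the mean 1/p is simply 1/p̂, which collapses to the sample mean. A sketch with hypothetical data:

```python
ys = [1, 3, 2, 2]          # hypothetical geometric sample
p_hat = len(ys) / sum(ys)  # MLE of p for the geometric model: 0.5

# c(p) = 1/p is the mean of a Geometric(p); by invariance the MLE of the
# mean is c(p_hat) = 1/p_hat, which is just the sample mean
mean_hat = 1.0 / p_hat
print(mean_hat == sum(ys) / len(ys))  # True
```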
16
Example continued: Geometric
18
  • estimating asy.Var(MLE)
  • - often the second derivatives of the
    log-likelihood will be
  • complicated nonlinear functions of the data
    whose
  • expected values are unknown
  • - 2 alternatives: evaluate the observed Hessian at θ̂, or use the
    BHHH (outer product of gradients) estimator built from the individual scores

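Assuming the two standard alternatives (the observed Hessian evaluated at θ̂, and the BHHH outer-product-of-gradients estimator), a sketch for the geometric example with hypothetical data:

```python
ys = [1, 3, 2, 2]      # hypothetical geometric sample
p = len(ys) / sum(ys)  # the MLE, 0.5

# per-observation score and Hessian of the geometric log-likelihood at p_hat
g = [1 / p - (y - 1) / (1 - p) for y in ys]
H = [-1 / p ** 2 - (y - 1) / (1 - p) ** 2 for y in ys]

var_hessian = 1.0 / (-sum(H))                 # inverse of the observed information
var_bhhh = 1.0 / sum(gi ** 2 for gi in g)     # BHHH / outer-product-of-gradients estimator
print(var_hessian, var_bhhh)  # 0.03125 0.125
```

The two estimates differ in finite samples but are asymptotically equivalent; note also that the scores sum to zero at the MLE, as the likelihood equation requires.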
19
17.5 Three Asymptotically Equivalent Test
Procedures
  • Likelihood ratio test: LR = −2[ln L(θ̂R | y) − ln L(θ̂U | y)], asymptotically
    χ²(J), where J is the number of restrictions

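A sketch of the LR statistic for the geometric example, testing a hypothetical restriction H0: p = 0.3 (data and restriction made up for illustration):

```python
import math

ys = [1, 3, 2, 2]  # hypothetical geometric sample

def loglik(p):
    # ln L(p | y) for the geometric model
    return sum(math.log(p) + (y - 1) * math.log(1 - p) for y in ys)

p_hat = len(ys) / sum(ys)  # unrestricted MLE
p_restricted = 0.3         # value imposed by the hypothetical H0

LR = -2.0 * (loglik(p_restricted) - loglik(p_hat))
print(LR >= 0)  # True: a restriction can never raise the maximized likelihood
# compare LR with the chi-squared(1) critical value 3.84 at the 5% level
```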
21
  • Wald test: W = [c(θ̂) − q]′ {asy.Var[c(θ̂) − q]}^(−1) [c(θ̂) − q] for H0: c(θ) = q, asymptotically χ²(J); uses only the unrestricted estimate

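A sketch of the Wald statistic for the geometric example, testing a hypothetical restriction H0: p = 0.3; only the unrestricted fit is needed, with the variance estimated from the observed Hessian:

```python
ys = [1, 3, 2, 2]          # hypothetical geometric sample
p_hat = len(ys) / sum(ys)  # unrestricted MLE, 0.5
p0 = 0.3                   # value imposed by the hypothetical H0

# estimated asymptotic variance from the observed Hessian at p_hat
H = sum(-1 / p_hat ** 2 - (y - 1) / (1 - p_hat) ** 2 for y in ys)
var_hat = 1.0 / (-H)

W = (p_hat - p0) ** 2 / var_hat  # Wald statistic for a single restriction
print(round(W, 2))  # 1.28
```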
23
  • Lagrange multiplier (score) test: LM = g(θ̂R)′ [I(θ̂R)]^(−1) g(θ̂R), asymptotically χ²(J); uses only the restricted estimate

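A sketch of the LM statistic for the geometric example under the hypothetical restriction H0: p = 0.3; everything is evaluated at the restricted value, so only the restricted model needs to be fitted:

```python
ys = [1, 3, 2, 2]  # hypothetical geometric sample
p0 = 0.3           # restricted estimate under the hypothetical H0: p = 0.3

# score and observed information, both evaluated at the restricted value
score = sum(1 / p0 - (y - 1) / (1 - p0) for y in ys)
info = sum(1 / p0 ** 2 + (y - 1) / (1 - p0) ** 2 for y in ys)

LM = score ** 2 / info  # Lagrange multiplier (score) statistic
print(LM < 3.84)        # True: here H0 would not be rejected at the 5% level
```

At the unrestricted MLE the score would be exactly zero, so LM measures how far the restricted estimate is from satisfying the likelihood equation.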
25
17.6 Applications of ML Estimation
30

THE END