Distribution of Estimates and Multivariate Regression

Transcript and Presenter's Notes



1
Distribution of Estimates and Multivariate
Regression
  • Lecture XXIX

2
Models and Distributional Assumptions
  • The conditional normal model assumes that the
    observed random variables are distributed
  • Thus, E[y_i | x_i] = α + βx_i and the variance of y_i
    equals σ². The conditional normal can be
    expressed as
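  • The density on this slide did not survive the transcript; a
    standard way to write it (notation assumed: α the intercept,
    β the slope, σ² the error variance) is
\[
y_i \mid x_i \sim N(\alpha + \beta x_i,\ \sigma^2),
\qquad
f(y_i \mid x_i) = \frac{1}{\sqrt{2\pi\sigma^2}}
\exp\!\left[-\frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2}\right]
\]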

3
  • Further, the ε_i are independently and identically
    distributed (consistent with our BLUE proof).
  • Given this formulation, the likelihood function
    for the simple linear model can be written
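  • The likelihood itself is not shown in the transcript; under the
    assumptions above it would take the usual product form
\[
L(\alpha,\beta,\sigma^2)
= \prod_{i=1}^{T} \frac{1}{\sqrt{2\pi\sigma^2}}
\exp\!\left[-\frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2}\right]
\]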

4
  • Taking the log of this likelihood function
    yields
  • As discussed in Lecture XVII, this likelihood
    function can be concentrated in such a way so that
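  • The missing expressions are presumably the log-likelihood and
    its concentrated form, which in standard notation are
\[
\ln L = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln\sigma^2
- \frac{1}{2\sigma^2}\sum_{i=1}^{T}(y_i - \alpha - \beta x_i)^2,
\qquad
\hat{\sigma}^2 = \frac{1}{T}\sum_{i=1}^{T}(y_i - \alpha - \beta x_i)^2
\]
    so that maximizing the concentrated log-likelihood is equivalent
    to minimizing the sum of squared residuals.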

5
  • Thus the least squares estimators are also
    maximum likelihood estimators if the error terms
    are normal.
  • The variance of b can be derived from
    the Gauss-Markov results. Note from last
    lecture
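  • The result referred to is presumably the linear form of the
    slope estimator and its BLUE weights (a reconstruction, since
    the earlier lecture is not reproduced here):
\[
b = \sum_{i=1}^{T} d_i y_i,
\qquad
d_i = \frac{x_i - \bar{x}}{\sum_{j=1}^{T}(x_j - \bar{x})^2}
\]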

6
  • Remember that the objective function of the
    minimization problem that we solved to get the
    results was the variance of estimate
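  • A plausible reconstruction of that objective function (the
    variance of the linear estimator, with the unbiasedness
    constraints of the BLUE problem assumed) is
\[
\min_{d_1,\dots,d_T}\ V(b) = \sigma^2 \sum_{i=1}^{T} d_i^2
\quad \text{subject to} \quad
\sum_{i=1}^{T} d_i = 0,
\qquad
\sum_{i=1}^{T} d_i x_i = 1
\]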

7
  • This assumes that the errors are independently
    distributed. Thus, substituting the final result
    for d_i into this expression yields
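  • Substituting the BLUE weights into the variance expression gives
    the familiar result (reconstruction):
\[
V(b) = \sigma^2 \sum_{i=1}^{T} d_i^2
= \sigma^2 \sum_{i=1}^{T}
\left[\frac{x_i - \bar{x}}{\sum_{j}(x_j - \bar{x})^2}\right]^2
= \frac{\sigma^2}{\sum_{i=1}^{T}(x_i - \bar{x})^2}
\]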

8
Multivariate Regression Models
  • In general, the multivariate relationship can be
    written in matrix form as
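  • The matrix form is not shown in the transcript; the standard
    statement (dimensions assumed) is
\[
y = X\beta + \varepsilon,
\qquad
y:\ T\times 1,\quad X:\ T\times K,\quad
\beta:\ K\times 1,\quad \varepsilon:\ T\times 1
\]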

9
  • If we expand the system to three observations,
    this system becomes
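  • With three observations, and assuming an intercept plus two
    regressors (so the system is square, matching the exactly
    identified case on the next slide), the system would be
\[
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
=
\begin{bmatrix}
1 & x_{12} & x_{13} \\
1 & x_{22} & x_{23} \\
1 & x_{32} & x_{33}
\end{bmatrix}
\begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}
+
\begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \end{bmatrix}
\]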

10
  • Expanding the exactly identified model, we get
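  • Written out equation by equation (a reconstruction based on the
    three-observation system above), this is
\[
\begin{aligned}
y_1 &= \beta_1 + \beta_2 x_{12} + \beta_3 x_{13} + \varepsilon_1 \\
y_2 &= \beta_1 + \beta_2 x_{22} + \beta_3 x_{23} + \varepsilon_2 \\
y_3 &= \beta_1 + \beta_2 x_{32} + \beta_3 x_{33} + \varepsilon_3
\end{aligned}
\]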

11
  • In matrix form this can be expressed as
  • The sum of squared errors can then be written as
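  • The missing expression is presumably the quadratic form of the
    sum of squared errors, which expands as
\[
e'e = (y - Xb)'(y - Xb)
    = y'y - b'X'y - y'Xb + b'X'Xb
\]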

12
  • A little matrix calculus is a dangerous thing
  • Note that each term on the left hand side is a
    scalar. Since the transpose of a scalar is
    itself, the left hand side can be rewritten as
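  • A reconstruction of the step being described: since y'Xb is a
    scalar, y'Xb = (y'Xb)' = b'X'y, so the sum of squared errors and
    its first-order condition become
\[
e'e = y'y - 2b'X'y + b'X'Xb,
\qquad
\frac{\partial (e'e)}{\partial b} = -2X'y + 2X'Xb = 0
\ \Rightarrow\ b = (X'X)^{-1}X'y
\]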

13
(No Transcript)
14
Variance of the estimated parameters
  • The variance of the parameter matrix can be
    written as
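  • The missing expressions are presumably the definition of the
    variance matrix and the sampling-error form of b used in the
    next step:
\[
V(b) = E\!\left[(b-\beta)(b-\beta)'\right],
\qquad
b - \beta = (X'X)^{-1}X'\varepsilon
\]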

15
(No Transcript)
16
  • Substituting this back into the variance
    relationship yields
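  • The resulting expression, reconstructed from the standard
    derivation, is
\[
V(b) = E\!\left[(X'X)^{-1}X'\varepsilon\varepsilon'X(X'X)^{-1}\right]
     = (X'X)^{-1}X'\,E[\varepsilon\varepsilon']\,X(X'X)^{-1}
\]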

17
  • Note that E[εε'] = σ²I, therefore
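  • This yields the familiar ordinary least squares variance
    (reconstruction):
\[
V(b) = (X'X)^{-1}X'(\sigma^2 I)X(X'X)^{-1} = \sigma^2 (X'X)^{-1}
\]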

18
  • Theorem 12.2.1 (Gauss-Markov) Let b* = C'y, where C
    is a T x K constant matrix such that C'X = I.
    Then the least squares estimator b is better than b* if b* ≠ b,
    where better means that V(b*) − V(b) is positive semidefinite.

19
  • This choice of C guarantees that the estimator b*
    is an unbiased estimator of β. The variance of
    b* can then be written as
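  • A reconstruction of the two missing expressions, using C'X = I
    and E[ε] = 0:
\[
E[b^*] = E[C'y] = C'X\beta + C'E[\varepsilon] = \beta,
\qquad
V(b^*) = E\!\left[C'\varepsilon\varepsilon'C\right] = \sigma^2 C'C
\]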

20
  • To complete the proof, we want to add a special
    form of zero. Specifically, we want to add
    σ²(X'X)^-1 − σ²(X'X)^-1 = 0.
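  • The expression being manipulated is then presumably
\[
V(b^*) = \sigma^2 C'C + \sigma^2 (X'X)^{-1} - \sigma^2 (X'X)^{-1}
\]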

21
  • Focusing on the last terms, we note that by the
    orthogonality conditions for the C matrix
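  • The missing algebra presumably uses C'X = I to evaluate the
    cross products:
\[
(X'X)^{-1}X'C = (X'X)^{-1}(C'X)' = (X'X)^{-1},
\qquad
C'X(X'X)^{-1} = (X'X)^{-1}
\]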

22
  • Focusing on the last terms, we note that by the
    orthogonality conditions for the C matrix
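  • These cross products allow the first two terms to be collected
    into a complete square (reconstruction):
\[
\sigma^2 C'C - \sigma^2 (X'X)^{-1}
= \sigma^2\left[C - X(X'X)^{-1}\right]'\left[C - X(X'X)^{-1}\right]
\]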

23
  • Substituting backwards
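  • Substituting this back into the variance of b* gives
    (reconstruction)
\[
V(b^*) = \sigma^2 (X'X)^{-1}
+ \sigma^2\left[C - X(X'X)^{-1}\right]'\left[C - X(X'X)^{-1}\right]
\]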

24
  • Thus,
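  • The missing conclusion is presumably
\[
V(b^*) - V(b)
= \sigma^2\left[C - X(X'X)^{-1}\right]'\left[C - X(X'X)^{-1}\right],
\]
    which is positive semidefinite and equals zero only if
    C = X(X'X)^-1.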

25
  • The minimum variance estimator is then given by C = X(X'X)^-1,
    which is the ordinary least squares estimator.