1
Topic 11: Matrix Approach to Linear Regression
2
Outline
  • Linear Regression in Matrix Form

3
The Model in Scalar Form
  • Yi = β0 + β1Xi + εi
  • The εi are independent, normally distributed
    random variables with mean 0 and variance σ²
  • Consider writing out the observations:
  • Y1 = β0 + β1X1 + ε1
  • Y2 = β0 + β1X2 + ε2
    ⋮
  • Yn = β0 + β1Xn + εn
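
As a concrete illustration, here is a minimal numpy sketch of this model; the
parameter values (β0 = 1, β1 = 2, σ = 0.5) and the sample size are made up
for the example, not taken from the slides.

```python
import numpy as np

# Simulate Y_i = beta0 + beta1 * X_i + eps_i for i = 1, ..., n.
# All numeric values are illustrative.
rng = np.random.default_rng(0)
n = 50
beta0, beta1, sigma = 1.0, 2.0, 0.5

X = rng.uniform(0, 10, size=n)        # arbitrary explanatory values
eps = rng.normal(0.0, sigma, size=n)  # independent N(0, sigma^2) errors
Y = beta0 + beta1 * X + eps           # one scalar equation per observation
```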

4
The Model in Matrix Form
The n scalar equations stack into a single matrix
equation: Y = Xβ + ε.

5
The Model in Matrix Form II
Written out, the response vector equals the design
matrix times the parameter vector, plus the error
vector:
(Y1, ..., Yn)′ = X(β0, β1)′ + (ε1, ..., εn)′

6
The Design Matrix
X is the n × 2 matrix whose first column is all
ones and whose second column holds X1, ..., Xn.

7
Vector of Parameters
β = (β0, β1)′, a 2 × 1 column vector.

8
Vector of Error Terms
ε = (ε1, ..., εn)′, an n × 1 column vector.

9
Vector of Responses
Y = (Y1, ..., Yn)′, an n × 1 column vector.

10
Simple Linear Regression in Matrix Form
Y = Xβ + ε, with ε ~ N(0, σ²I)
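
Continuing the sketch above (the names n, X, beta0, beta1, eps, and Y carry
over), the matrix form can be verified directly: stacking a column of ones
next to the Xi values gives the design matrix, and Xβ + ε reproduces all n
scalar equations at once.

```python
# Build the n x 2 design matrix: a column of ones, then the X values.
X_mat = np.column_stack([np.ones(n), X])
beta = np.array([beta0, beta1])          # parameter vector (beta0, beta1)'

# One matrix equation in place of n scalar equations.
assert np.allclose(X_mat @ beta + eps, Y)
```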
11
Variance-Covariance Matrix

Main diagonal values are the variances and
off-diagonal values are the covariances.
12
Covariance Matrix of ε
Independent errors mean that the covariance of any
two error terms is zero. Common variance implies
the main diagonal values are all equal to σ², so
σ²(ε) = σ²I.
13
Covariance Matrix of Y
Since Xβ is a vector of constants, Y = Xβ + ε has
the same covariance matrix as the errors:
σ²(Y) = σ²I.
14
Distributional Assumptions in Matrix Form
  • ε ~ N(0, σ²I)
  • I is an n × n identity matrix
  • Ones in the diagonal elements specify that the
    variance of each εi is 1 times σ²
  • Zeros in the off-diagonal elements specify that
    the covariance between different εi is zero
  • This implies that the correlations are zero
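
As a quick simulation check of this assumption (sigma and rng carry over
from the earlier sketch; the dimension here is kept small for speed), error
vectors drawn from N(0, σ²I) show an empirical covariance matrix close to
σ² on the diagonal and zero off the diagonal.

```python
# Draw many error vectors from N(0, sigma^2 * I) and compare the empirical
# covariance matrix with the assumed one.
m = 10                                      # illustrative dimension
cov = sigma**2 * np.eye(m)
draws = rng.multivariate_normal(np.zeros(m), cov, size=20000)
print(np.allclose(np.cov(draws, rowvar=False), cov, atol=0.02))  # True
```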

15
Least Squares
  • We want to minimize (Y − Xβ)′(Y − Xβ)
  • We take the derivative with respect to the
    (vector) β
  • This is like minimizing a quadratic function
  • Recall the function we minimized using the
    scalar form: Q = Σ(Yi − β0 − β1Xi)²

16
Least Squares
  • By the chain rule, the derivative is 2 times the
    derivative of (Y − Xβ)′ with respect to β, times
    (Y − Xβ)
  • In other words, −2X′(Y − Xβ)
  • We set this equal to 0 (a vector of zeros)
  • So, −2X′(Y − Xβ) = 0
  • Or, X′Y = X′Xβ (the normal equations)

17
Normal Equations
  • X′Y = (X′X)β
  • Solving for β gives the least squares solution
    b = (b0, b1)′
  • b = (X′X)⁻¹(X′Y)
  • See NKNW p. 200 for details
  • The same approach works for multiple regression!
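
In numpy, the normal equations can be solved directly; this sketch continues
the running example (X_mat and Y from above). Solving the linear system is
generally preferred to explicitly inverting X′X, for numerical stability.

```python
# Solve the normal equations (X'X) b = X'Y for the least squares estimate b.
b = np.linalg.solve(X_mat.T @ X_mat, X_mat.T @ Y)
print(b)   # should land close to the assumed (beta0, beta1) = (1.0, 2.0)
```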

18
Fitted Values
Ŷ = Xb = X(X′X)⁻¹X′Y
19
Hat Matrix
Ŷ = HY, where H = X(X′X)⁻¹X′ is the hat matrix.
We'll use this matrix when assessing diagnostics
in multiple regression.
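
A sketch of the hat matrix for the running example. H is symmetric and
idempotent (HH = H), which is what makes it a projection onto the column
space of X.

```python
# Form H = X (X'X)^{-1} X' and the fitted values Y_hat = H Y.
XtX_inv = np.linalg.inv(X_mat.T @ X_mat)
H = X_mat @ XtX_inv @ X_mat.T
Y_hat = H @ Y

assert np.allclose(Y_hat, X_mat @ b)   # identical to the fitted values X b
assert np.allclose(H @ H, H)           # idempotent: projecting twice is the same
```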
20
Estimated Covariance Matrix of b
  • The vector b is a linear combination of the
    elements of Y
  • These estimates are normal if Y is normal
  • These estimates will be approximately normal in
    general

21
A Useful Multivariate Theorem
  • U ~ N(μ, Σ), a multivariate normal vector
  • V = c + DU, a linear transformation of U
  • c is a vector and D is a matrix
  • Then V ~ N(c + Dμ, DΣD′)
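
The theorem can be sanity-checked by simulation; the μ, Σ, c, and D below
are made-up two-dimensional examples (numpy and rng carry over from the
earlier sketches).

```python
# Monte Carlo check that V = c + D U has mean c + D mu and covariance D Sigma D'.
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
c = np.array([0.0, 3.0])
D = np.array([[1.0, 2.0],
              [0.0, 1.0]])

U = rng.multivariate_normal(mu, Sigma, size=50000)
V = c + U @ D.T                          # V = c + D U, applied row by row

print(np.allclose(V.mean(axis=0), c + D @ mu, atol=0.05))              # mean
print(np.allclose(np.cov(V, rowvar=False), D @ Sigma @ D.T, atol=0.2)) # covariance
```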

22
Application to b
  • b = (X′X)⁻¹(X′Y) = ((X′X)⁻¹X′)Y
  • Since Y ~ N(Xβ, σ²I), the vector b is normally
    distributed with mean (X′X)⁻¹X′Xβ = β and
    covariance
  • σ²((X′X)⁻¹X′)I((X′X)⁻¹X′)′ = σ²(X′X)⁻¹
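
In practice σ² is unknown and is estimated by the MSE, giving the estimated
covariance matrix s²(b) = MSE · (X′X)⁻¹. A sketch continuing the running
example:

```python
# Estimate sigma^2 by MSE = SSE / (n - 2) and form s^2(b) = MSE * (X'X)^{-1}.
resid = Y - Y_hat                    # residuals from the fitted values
mse = resid @ resid / (n - 2)        # two estimated parameters: b0 and b1
cov_b = mse * XtX_inv
print(np.sqrt(np.diag(cov_b)))       # standard errors of b0 and b1
```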

23
Background Reading
  • We will use this framework to do multiple
    regression, where we have more than one
    explanatory variable
  • Adding another explanatory variable amounts to
    adding another column to the design matrix
  • See Chapter 6