Chapter 8: Heteroskedasticity
1
Chapter 8 Heteroskedasticity
2
  • 1. Introduction

3
Introduction
  • A CLRM assumption is that the disturbances ui are
    homoskedastic
  • They all have the same variance
  • So var(ui) = σ², a constant
  • Suppose instead that the variance varies across
    observations
  • Then we have heteroskedasticity
  • So var(ui) = σi², which varies with each
    observation

4
Example
  • Average wages rise with the size of the firm.
    Suppose wages look like this (the slide's chart
    is not reproduced in the transcript)

5
Example
  • Can we expect the variance of wages to be
    constant?
  • The variance increases as firm size increases.
  • So larger firms pay more on average, but there is
    more variability in their wages.

6
Savings Example
  • Savings increase with income, and so does the
    variability of savings or spending
  • As incomes grow, people have more discretionary
    income, so more scope for choice about how to
    dispose of it.

7
Overall
  • Heteroskedasticity is more likely in
    cross-sectional than in time-series data.

8
  • 2. Consequences of
  • Heteroskedasticity

9
Consequences
  • If we have heteroskedasticity, what happens to
    our estimator?
  • Still linear
  • Still unbiased
  • Not the most efficient - it does not have minimum
    variance.
  • So it is not BLUE.

10
Consequences
  • If we use the usual variance formulas, they will
    be biased
  • This is because the usual estimator is no longer
    an unbiased estimator of σ²
  • So F tests and t tests are unreliable

11
  • 3. Detecting Heteroskedasticity

12
Detecting Heteroskedasticity
  • Past research indicates it
  • We may know that scale effects will exist
  • E.g. spending patterns in relation to income
  • Firm profitability or investment spending in
    relation to the size of the firm

13
Detecting Heteroskedasticity
  • Examine the residuals
  • Assume no heteroskedasticity, run OLS, and then
    look at the estimated residuals
  • In the 2-variable model
  • Plot the squared residuals against the
    independent variable
  • The plain residuals have no correlation with X or
    predicted Y by construction, so we use the
    squares

14
Detecting Heteroskedasticity
  • In a multivariate model, plot against the
    different Xs, or against the predicted value of Y
  • Predicted Y is a linear combination of the Xs
  • The graph could show a linear or quadratic
    relationship
  • Plots provide clues as to the nature of the
    heteroskedasticity and how we might transform the
    variables
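A minimal sketch of this residual-based check in Python (the data are simulated; the linear model and the "s.d. grows with X" error pattern are assumptions of the example, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1.0, 10.0, n)
# Simulated heteroskedastic errors: the s.d. grows with X (assumed pattern)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.4 * x, n)

# OLS of y on a constant and x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Plain residuals are orthogonal to the regressors by construction...
r_plain = np.corrcoef(x, resid)[0, 1]
# ...but the squared residuals correlate with X under heteroskedasticity
r_squared_resid = np.corrcoef(x, resid ** 2)[0, 1]
```

Plotting `resid ** 2` against `x` would show the cone or trumpet shape the slides describe.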

15
  • 4. Park Test

16
Park Test
  • If we find some evidence of heteroskedasticity by
    looking at the residuals we can do an explicit
    test.
  • Regress the variance on the X variables
  • ln(σi²) = b1 + b2 ln Xi + vi

17
Park Test
  • We don't know the variance
  • Use the squared residuals as a proxy
  • Run ln(ei²) = b1 + b2 ln Xi + vi
  • For a multivariate model, run the squared
    residuals against each X variable, or against the
    predicted Y
  • If b2 is significantly different from 0, then we
    have heteroskedasticity
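A sketch of the Park test on simulated data (the data-generating process and the sample size are illustrative assumptions; the t statistic for b2 is computed by hand from the auxiliary regression):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(1.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x, n)  # heteroskedastic by construction

# Step 1: ordinary OLS; keep squared residuals as a proxy for the variance
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ b) ** 2

# Step 2: Park regression  ln(ei²) = b1 + b2 ln(Xi) + vi
Z = np.column_stack([np.ones(n), np.log(x)])
g, *_ = np.linalg.lstsq(Z, np.log(e2), rcond=None)

# t statistic for b2: if significant, conclude heteroskedasticity
v = np.log(e2) - Z @ g
s2 = (v @ v) / (n - 2)
se_b2 = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1])
t_b2 = g[1] / se_b2
```

Here the true error s.d. is proportional to X, so the Park slope should come out near 2 with a large t statistic.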

18
  • 5. Glejser Test

19
Glejser test
  • Similar test to the Park test.
  • Regress the absolute values of ei on X.
  • The form of the regression may vary
  • Can run on square root of X or 1/X etc.
  • If significant ts, then heteroskedasticity
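A matching sketch of the Glejser test (the same simulated setup as the Park example is assumed; here the absolute residuals are regressed on X itself, one of the several admissible forms):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.uniform(1.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x, n)

# OLS residuals in absolute value
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
abs_e = np.abs(y - X @ b)

# Glejser regression  |ei| = a1 + a2 Xi + vi
g, *_ = np.linalg.lstsq(X, abs_e, rcond=None)
v = abs_e - X @ g
se_a2 = np.sqrt((v @ v / (n - 2)) * np.linalg.inv(X.T @ X)[1, 1])
t_a2 = g[1] / se_a2
```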

20
  • 6. Goldfeld-Quandt Test

21
Goldfeld-Quandt test
  • Order the observations according to the magnitude
    of the X variable thought to be related to the
    error variance
  • Divide observations into two groups, one with low
    values of X and one with high, omitting some
    central observations.

22
Goldfeld-Quandt test
  • Run two separate regressions
  • Calculate an F test
  • F = (ESS for large X / df) / (ESS for small X /
    df)
  • Under homoskedasticity this ratio should be close
    to unity
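A sketch of the Goldfeld-Quandt calculation on simulated data (the sample size, the number of omitted central observations, and the error pattern are all assumptions of the example; ESS here means the residual sum of squares):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
x = np.sort(rng.uniform(1.0, 10.0, n))
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x, n)

def ess(xs, ys):
    """Residual sum of squares from a simple OLS fit of ys on [1, xs]."""
    X = np.column_stack([np.ones(len(xs)), xs])
    b, *_ = np.linalg.lstsq(X, ys, rcond=None)
    r = ys - X @ b
    return r @ r

c = 10                      # central observations to omit
m = (n - c) // 2            # 25 observations in each group
df = m - 2                  # 2 coefficients estimated per regression
F = (ess(x[n - m:], y[n - m:]) / df) / (ess(x[:m], y[:m]) / df)
```

Because the error variance grows with X in this simulation, F should come out well above 1; under homoskedasticity it would be close to unity.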

23
  • 7. Remedial Measures When σ is Known

24
Remedies
  • OLS estimators are not efficient under
    heteroskedasticity, though they are unbiased and
    consistent
  • We can transform the model to get rid of the
    heteroskedasticity
  • If we know σi² then we can use the method of
    weighted least squares

25
Weighted Least Squares
  • Suppose have heteroskedasticity as in the firm
    data example
  • Wages increase with size of firm, but also the
    variance increases.
  • To correct for heteroskedasticity: give less
    weight to data points from populations with
    greater variability and more weight to those with
    smaller variability

26
Weighted Least Squares
  • OLS gives equal weight to all observations.
  • Weighting observations is called weighted least
    squares
  • A subset of generalized least squares
  • Using this method leads to BLUE estimators.

27
Weighted Least Squares
  • Start with the basic model
  • Yi = b1 + b2Xi + ui
  • Y is wages and X is firm size
  • Assume the true error variance σi² is known for
    each observation
  • Divide through by the standard deviation σi
  • Yi/σi = b1(1/σi) + b2(Xi/σi) + ui/σi

28
Weighted Least Squares
  • Look at the error term
  • Let vi = ui/σi
  • If vi is homoskedastic then OLS on this model
    will give us BLUE estimators
  • Square vi
  • vi² = ui²/σi²
  • E(vi²) = E(ui²)/σi²

29
Weighted Least Squares
  • Since σi² is known, E(vi²) becomes (1/σi²)
    E(ui²)
  • But E(ui²) = σi²
  • So this becomes (1/σi²) σi² = 1
  • So the transformed error term is homoskedastic
  • Estimate this model by OLS to get BLUE
    estimators
  • So WLS is OLS on the transformed variables
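The transformation can be sketched directly (the true σi are assumed known here, as this section requires; the data are simulated):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
x = rng.uniform(1.0, 10.0, n)
sigma = 0.5 * x                      # assumed-known error s.d. per observation
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma, n)

# WLS = OLS on the transformed variables: divide everything by sigma_i
Xw = np.column_stack([1.0 / sigma, x / sigma])  # intercept column becomes 1/sigma_i
b_wls, *_ = np.linalg.lstsq(Xw, y / sigma, rcond=None)

# The transformed errors ui/sigma_i should now have variance 1 (homoskedastic)
v = y / sigma - Xw @ b_wls
s2_v = (v @ v) / (n - 2)
```

The estimated residual variance of the transformed model should sit near 1, matching the derivation above.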

30
Weighted Least Squares
  • In OLS, minimize
  • Σei² = Σ(Yi - b1 - b2Xi)²
  • In GLS, minimize
  • Σwi(Yi - b1 - b2Xi)²
  • where wi = 1/σi²

31
Weighted Least Squares
  • In GLS we minimize a weighted sum of squares
  • The weights we are using are inversely
    proportional to σi²
  • Observations from a population with a large
    variance get smaller weights and vice versa

32
Weighted Least Squares
  • OLS minimizes the unweighted sum of squared
    residuals
  • But these residuals tend to be very large for
    the observations with a large variance, so OLS
    implicitly gives those observations more weight
  • GLS corrects for this

33
  • 8. Remedial Measures When σ is not Known

34
Remedies
  • We have to make assumptions about σi² if we
    don't know it
  • Plot the residuals against X and find a cone
    shape
  • This indicates that the error variance is
    linearly related to X: σi² = σ²Xi
  • Now transform the model by dividing Y, the
    intercept, X and the error term by the square
    root of X

35
Remedies
It can be proved that the error variance in this
transformed model is homoskedastic, so we can
estimate it by OLS (actually it's a form of GLS).
36
Remedies
  • Plot the residuals against X and find a trumpet
    shape
  • This indicates that the error variance increases
    in proportion to the square of X: σi² = σ²Xi²
  • Now transform the model by dividing Y, the
    intercept, X and the error term by X
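Both transformations can be sketched together (simulated data; in practice the choice between them would be judged from the residual plots):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.uniform(1.0, 10.0, n)

# Cone shape: error variance proportional to X  ->  divide through by sqrt(X)
y1 = 2.0 + 3.0 * x + rng.normal(0.0, np.sqrt(0.25 * x), n)
w = np.sqrt(x)
bhat_sqrt, *_ = np.linalg.lstsq(np.column_stack([1.0 / w, x / w]), y1 / w, rcond=None)

# Trumpet shape: error variance proportional to X²  ->  divide through by X
y2 = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 * x, n)
# Yi/Xi = b1(1/Xi) + b2 + ui/Xi : the original slope now sits on the constant column
bhat_invx, *_ = np.linalg.lstsq(np.column_stack([1.0 / x, np.ones(n)]), y2 / x, rcond=None)
# bhat_invx[1] estimates the original slope b2, bhat_invx[0] the original intercept b1
```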

37
Remedies
In the transformed model the slope has become the
intercept and the intercept has become the slope,
but this reverses when we multiply back out by X.
38
Remedies
  • Log transformation of both sides
  • A log transformation compresses the scales in
    which the variables are measured
  • This reduces the differences in variability
  • Cannot do this if some Y or X values are zero or
    negative