Title: Impact Evaluation
1 Impact Evaluation
2 Methods
- Randomized Trials
- Regression Discontinuity
- Matching
- Difference in Differences
3 The Goal
- Causality
- We did program X, and because of it, Y happened.
4 The Goal
- Causal Inference
- Y happened because of X, not for some other
reason. Thus it makes sense to think that if we
did X again in a similar setting, Y would happen
again.
5 Getting to Causality
- In a more research-friendly universe, we'd be
able to observe a single person (call him Fred)
after we both gave and didn't give him the
treatment.
- Y_treated(Fred) - Y_untreated(Fred)
6 Getting to Causality
- In the reality-based community, finding this
Y_treated(Fred) - Y_untreated(Fred) counterfactual
is impossible.
- Is the solution to get more people?
7 Getting to Causality
- With more people, we can calculate
Average(treated) - Average(untreated).
- But what if there's an underlying difference
between the treated and untreated?
8 Getting to Causality
- Confounding Factors / Selection Bias / Omitted
Variable Bias
- Textbook Example (simulated in the sketch below)
- If textbooks were deliberately given to the most
needy schools, the simple difference is incorrect.
- If textbooks were already present in the schools
where parents cared a lot about education, the
simple difference is incorrect.
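The textbook example lends itself to a tiny simulation (not from the slides; all numbers are invented) showing why the simple treated-minus-untreated difference breaks down when the neediest schools are the ones that get the books:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Underlying neediness lowers test scores AND drives who gets textbooks.
neediness = rng.normal(size=n)
textbooks = (neediness > 0.5).astype(float)   # books deliberately go to the neediest schools
true_effect = 5.0
scores = 50 - 8 * neediness + true_effect * textbooks + rng.normal(0, 5, n)

naive_diff = scores[textbooks == 1].mean() - scores[textbooks == 0].mean()
print(f"true effect: {true_effect}, naive difference: {naive_diff:.2f}")
# The naive difference is badly biased (here it even comes out negative)
# because the treated schools were needier to begin with.
```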
9 Problem Solved
- If we randomize the treatment, on average,
treatment and control groups should be the same
in all respects, and there won't be selection
bias.
- Check that it's true for all observables.
- Hope that it's therefore true for all
unobservables.
10 Math You'd Rather Not See
- See Clair's slides from September 15 (omitted
variable bias).
- Very accessible reading from the same week by
Duflo, Glennerster & Kremer (selection bias).
11 Randomization
- Randomize who gets treated.
- Check if it came out OK.
- Basically, that's it (sketch below).
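For contrast, here is the same invented textbook setup with the treatment assigned by coin flip rather than by neediness; a balance check on the observable plus a simple difference in means is all the analysis needed (a sketch, not any study's actual numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

neediness = rng.normal(size=n)
textbooks = rng.integers(0, 2, n).astype(float)   # coin flip instead of targeting the needy
true_effect = 5.0
scores = 50 - 8 * neediness + true_effect * textbooks + rng.normal(0, 5, n)

# Check if it came out OK: treated and control should look alike on observables.
print("mean neediness, treated vs. control:",
      round(neediness[textbooks == 1].mean(), 3),
      round(neediness[textbooks == 0].mean(), 3))

# Basically, that's it: the simple difference in means now estimates the causal effect.
diff = scores[textbooks == 1].mean() - scores[textbooks == 0].mean()
print(f"estimated effect: {diff:.2f} (true effect: {true_effect})")
```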
12 Randomization
- Examples
- Progresa: cash if kids go to school
- Moving to Opportunity: voucher to move to a
better neighborhood
- Fertilizer / hybrid seed
- Loan maturity / interest rate
- Deworming
13 Regression Discontinuity
- Being involved in a program is clearly not
random.
- Smarter kids get scholarships.
- Kids in smaller classes learn better.
- Big firms are more likely to unionize.
14 Regression Discontinuity
- Being involved in a program is clearly not
random.
- Or is it?
- Scholarship cutoff +1 girl vs. scholarship
cutoff -1 girl
- Israeli 41-kid school vs. Israeli 40-kid school
- Union-yes 50+1 firm vs. union-yes 50-1 firm
16 So how do we actually do this?
1. Draw two pretty pictures (sketched below).
- Eligibility criterion (test score, income, or
whatever) vs. program enrollment
- Eligibility criterion vs. outcome
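A rough sketch of the two pictures, assuming a sharp cutoff at a score of 50 and invented data (the cutoff, sample size, and effect size are placeholders): enrollment should jump at the cutoff, and any jump in the outcome at the same point is attributed to the program.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 2000)           # eligibility criterion (running variable)
enrolled = (score >= 50).astype(float)      # assumed sharp cutoff at a score of 50
outcome = 20 + 0.3 * score + 6 * enrolled + rng.normal(0, 4, 2000)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(score, enrolled, s=2)
ax1.set(xlabel="eligibility criterion", ylabel="program enrollment")
ax2.scatter(score, outcome, s=2)
ax2.set(xlabel="eligibility criterion", ylabel="outcome")
plt.show()
```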
17 So how do we actually do this?
2. Run a simple regression (sketched below). (Yes,
this is basically all we ever do, and the stats
programs we use can run the calculation in almost
any situation, but before we do it, it's necessary
to make sure the situation is appropriate and to
draw the graphs so that we can have confidence
that our estimates are actually causal.) Outcome
as a function of test score (or whatever), with a
binary (1 if yes, 0 if no) variable for program
enrollment.
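A minimal version of that regression, reusing the same invented sharp-cutoff data as the plotting sketch above and statsmodels for the OLS fit; the coefficient on the enrollment dummy is the estimated jump at the cutoff:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 2000)
enrolled = (score >= 50).astype(float)      # assumed sharp cutoff at 50
outcome = 20 + 0.3 * score + 6 * enrolled + rng.normal(0, 4, 2000)

# Outcome as a function of the running variable plus a 0/1 enrollment dummy.
X = sm.add_constant(np.column_stack([score, enrolled]))
fit = sm.OLS(outcome, X).fit()
print(fit.params)   # [intercept, slope on score, jump at the cutoff (~6 here)]
```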
18 As Good As Random, Sort Of
- Randomize who gets treated (within a bandwidth).
- Check if it came out OK (within a bandwidth).
- Basically, that's it (within a bandwidth; sketch
below).
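"Within a bandwidth" just means repeating the estimate using only observations close to the cutoff, where assignment is closest to random. A sketch on the same invented data, with an arbitrary bandwidth of 5 points:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 2000)
enrolled = (score >= 50).astype(float)
outcome = 20 + 0.3 * score + 6 * enrolled + rng.normal(0, 4, 2000)

cutoff, h = 50, 5                           # bandwidth h is an arbitrary choice here
near = np.abs(score - cutoff) < h           # keep only observations close to the cutoff
X = sm.add_constant(np.column_stack([score[near], enrolled[near]]))
print(sm.OLS(outcome[near], X).fit().params)   # jump estimated within the bandwidth only
```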
19 Difference in Differences
- Change for the treated - change for the control
- (t1 - t0) - (c1 - c0)
- = t1 - t0 - c1 + c0
- = t1 - c1 - t0 + c0
- = (t1 - c1) - (t0 - c0)
- Which is the same as the treated-control gap after
the program minus the treated-control gap before it
(checked numerically below).
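A quick numeric check of that algebra, with made-up group means (t0, t1 for the treated before/after; c0, c1 for the control before/after):

```python
# Made-up group means: treated before/after (t0, t1) and control before/after (c0, c1).
t0, t1 = 40.0, 52.0
c0, c1 = 41.0, 46.0

did  = (t1 - t0) - (c1 - c0)    # change for the treated minus change for the control
same = (t1 - c1) - (t0 - c0)    # end-line gap minus baseline gap
print(did, same)                # 7.0 7.0 -- identical, as the algebra says
```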
21 Examples
- Malaria
- Bleakley, Hoyt. "Malaria Eradication in the
Americas: A Retrospective Analysis of Childhood
Exposure." Working paper.
- Land Reform
- Besley, Timothy and Robin Burgess. "Land Reform,
Poverty Reduction, and Growth: Evidence from
India." Quarterly Journal of Economics, May 2000,
389-430.
22 Matching
- Match each treated participant to one or more
untreated participants based on observable
characteristics.
- Assumes no selection on unobservables.
- Condense all observables into one propensity
score, and match on that score.
23 Matching
- After matching each treated unit to the most
similar untreated unit, subtract the means and
calculate the average difference (sketch below).
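A minimal propensity score matching sketch under the slide's "no selection on unobservables" assumption; the data, variable names, and effect size are all invented. The observables are condensed into one score with a logistic regression, each treated unit is matched to the nearest-score untreated unit, and the matched differences are averaged:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                      # observable characteristics
treated = X[:, 0] + rng.normal(size=n) > 0       # selection on observables only (assumed)
y = X @ np.array([2.0, 1.0, -1.0]) + 3.0 * treated + rng.normal(size=n)

# Condense all observables into one propensity score.
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Match each treated unit to the untreated unit with the closest score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(pscore[c_idx][None, :] - pscore[t_idx][:, None]).argmin(axis=1)]

# Subtract outcomes within matched pairs and average the differences.
att = np.mean(y[t_idx] - y[matches])
print(f"matching estimate: {att:.2f} (effect built into the fake data: 3.0)")
```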
24 Matching
- Examples
- Does piped water reduce diarrhea?
- Jalan, Jyotsna and Martin Ravallion. "Does Piped
Water Reduce Diarrhea for Children in Rural
India?" Journal of Econometrics, January 2003,
153-173.
- Anti-poverty program in Argentina
- Jalan, Jyotsna and Martin Ravallion. "Estimating
the Benefit Incidence of an Antipoverty Program
by Propensity Score Matching." Journal of Business
and Economic Statistics, January 2003, 19-30.
25 Matching
- The matching step can be performed in many ways.
- Guido Imbens' webpage:
- http://elsa.berkeley.edu/imbens/estimators.shtml
26 Summary
- The weakest (easiest) assumption is the best
assumption.
- Randomization wins.
- Real scientists use it too.
27 Proof by One Example
- LaLonde, Robert. "Evaluating the Econometric
Evaluations of Training Programs with
Experimental Data." American Economic Review,
September 1986.
- Run a randomization and analyze it well. Then
pretend you don't have all the data that you do,
construct fake comparison groups using the
census, and show that none of your crazy methods
get you the right answer.