Title: Evaluation Research
Chapter 12
Pure versus Applied research
- Pure research - research conducted solely for the purpose of advancing social scientific knowledge ("basic" research).
- Example: National Science Foundation research examples (SBE "Science Nuggets").
- Applied research - research undertaken for the purpose of influencing some phenomenon in the "real" world; research conducted for the purpose of putting the results into practice.
- Examples of Applied Research from the Institute for Research on Poverty (Click on "Research" for examples).
- Examples from the Institute for Women's Policy Research (Click on "Our Research" for examples).
Evaluation research
- Definition in Babbie - Research that attempts to determine whether a social intervention has produced its intended result.
- Social intervention - an action taken within a social context for the purpose of producing some intended result.
- This is really a definition of a specific kind of evaluation research - outcome evaluation research - which makes the text material somewhat limited.
Two basic types of evaluation research
- 1. Formative evaluation (a.k.a. Process Evaluation) - provides information for intervention improvement, modification, documentation, and management in the process of intervention implementation.
- Goal is to strengthen the intervention by providing feedback on its implementation and progress.
- 2. Outcome evaluation (a.k.a. Impact or Summative Evaluation) - measures the extent to which the intervention's stated goals and objectives were achieved, and determines any unintended consequences of the intervention and whether these were positive or negative.
- Important for making major decisions about intervention continuation, expansion, reduction, and funding.
Types of formative evaluation
- Formative evaluation projects differ according to their goals:
- a. Needs assessment
- b. Intervention monitoring
- c. Context evaluation
a. Needs assessment
- Determines whether there are demands for new services, or gaps in already established services, that need to be met.
- Importance of needs assessment for establishing goals, objectives, intervention structure and activities, and resource requirements.
b. Intervention monitoring
- Tracks the process of intervention delivery (e.g., what services are delivered, how many clients are served, what are the client characteristics?).
- Example from the VCU SERL: Ryan White Title II Data Reporting.
c. Context evaluation
- Provides information about the setting or environment in which the intervention is implemented. Assesses how certain settings contribute to or impede intervention success.
- Important considerations: the specific needs of individuals targeted by the intervention; social, political, economic, geographic, and/or cultural factors.
- VCU SERL example - TANF ("Temporary Assistance for Needy Families") evaluation of programs designed to help recipients get and keep jobs. Effects of economic and bureaucratic contexts.
- (Context evaluation is also important in outcome evaluations - see below.)
Back to two types of evaluation research
- 1. Formative (process) evaluation - provides information for intervention improvement, modification, documentation, and management in the process of intervention implementation.
- 2. Outcome (impact/summative) evaluation - measures the extent to which the intervention's stated goals and objectives were achieved, and determines any unintended consequences of the intervention and whether these were positive or negative.
Outcome evaluation
- Important for making major decisions about intervention continuation, expansion, reduction, and funding.
- Babbie discusses only outcome evaluation.
Examples of outcome evaluation research
- From an evaluation researcher at the State of Florida Juvenile Justice Department, a look at Juvenile Justice Evaluation Research.
- Example from Virginia government: the Joint Legislative Audit and Review Commission, which
- (1) monitors whether state agencies and programs are in compliance with legislative intent concerning appropriations and objectives, and
- (2) determines whether state agencies and programs meet criteria of economy, efficiency, and effectiveness.
Measurement issues in outcome evaluation
- 1. Specification of the outcome variable is critical in outcome evaluation.
- 2. The characteristics of the intervention itself must be measured.
- 3. The experimental context should be considered.
1. Specification of the outcome variable
- Examples of measurement issues in a program designed to reduce illegal drug use among teenagers:
- Exactly which drug(s) should be included?
- How should drug use be measured? Over what time period? What quantity? With what frequency of use?
- How much of a reduction would the program have to accomplish to be considered a success?
Measuring success
- Those responsible for the intervention may commit themselves in advance to a measurable outcome that will be regarded as an indication of success.
- The intervention may be amenable to cost/benefit analysis (how much does the program cost in relation to what it returns in benefits?). Example from the VCU SERL: ADAP (AIDS Drug Assistance Program). A small worked example follows this list.
- Evaluators can examine the outcome performance of competing programs and compare them.
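Where cost/benefit analysis applies, the core computation is simple arithmetic. Below is a minimal Python sketch using entirely hypothetical dollar figures (not taken from the ADAP example) that shows the two summary numbers evaluators typically report: net benefit and the benefit/cost ratio.

program_cost = 250_000        # annual cost of delivering the program (hypothetical figure)
monetized_benefits = 400_000  # estimated dollar value of the outcomes produced (hypothetical figure)

net_benefit = monetized_benefits - program_cost
benefit_cost_ratio = monetized_benefits / program_cost

print(f"Net benefit: ${net_benefit:,}")                 # $150,000
print(f"Benefit/cost ratio: {benefit_cost_ratio:.2f}")  # 1.60, i.e., $1.60 returned per $1 spent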
2. Measuring the characteristics of the intervention
- Examples:
- program delivery
- dates
- content
- personnel
- number of programs
- extent or quality of program participation
- comparison of program delivery/participation to program goals
3. Considering the experimental context
- What is happening outside the intervention that could affect its effectiveness?
- Examples:
- economy
- political situation
- cultural context
- bureaucratic impediment/facilitation
Classical experimental design
The gold standard in determining the effect of an experimental intervention/stimulus.
- Notation: R = random assignment of subjects from a pool; O = observation of the dependent variable; X = administration of the experimental stimulus.
- Can this design be applied to the real world of evaluation research?
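To make the notation concrete, here is a minimal Python sketch of the pretest-posttest control group version of the classical design (R O1 X O2 over R O1 O2). Every number in it, including the built-in treatment effect of 5 points, is invented purely for illustration.

import random
from statistics import mean

random.seed(42)
pool = list(range(200))                     # a pool of 200 subjects
random.shuffle(pool)                        # R: random assignment
treatment, control = pool[:100], pool[100:]

def observe(group, treated):
    # O: observe the dependent variable; X adds a hypothetical +5 treatment effect
    return [random.gauss(50, 10) + (5 if treated else 0) for _ in group]

pre_t, pre_c = observe(treatment, False), observe(control, False)    # O1 (pretest)
post_t, post_c = observe(treatment, True), observe(control, False)   # O2 (posttest; X given to treatment group only)

effect = (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))
print(f"Estimated intervention effect: {effect:.1f}")   # should land near the built-in +5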
Types of quasi-experimental designs
- 1. Time-series design - observation of events before, during, and after the intervention.
- 2. Nonequivalent control group design - instead of a randomized control group, a control group as comparable as possible to the experimental group is selected and observed.
- 3. Multiple time-series design - a combination of 1 and 2: a time-series design involving the observation of one or more comparable control groups.
1. Time-series design - observation of events before, during, and after the intervention
1. Time-series design considerations
- Preferable to collect as many data points as possible before and after the intervention.
- The data collection instrument should remain unchanged.
- Disadvantage - does not control for the possible effect of variables other than the intervention (extraneous variables - factors other than the experimental intervention that occurred at the same time and affected the outcome variable).
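A minimal Python sketch of the basic before/after comparison this design supports, assuming a single outcome measured at regular intervals; the series below is entirely invented.

series = [52, 55, 53, 56, 54, 57,   # observations before the intervention
          48, 46, 47, 45, 44, 43]   # observations after the intervention
cut = 6                             # index at which the intervention occurred

before, after = series[:cut], series[cut:]
mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

print(f"Mean before: {mean_before:.1f}  Mean after: {mean_after:.1f}")
print(f"Apparent change: {mean_after - mean_before:+.1f}")
# A real analysis would also model the pre-intervention trend; the design by
# itself cannot rule out extraneous variables that changed at the same time.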
2. Nonequivalent control group design - a control group as comparable as possible to the experimental group is selected and observed.
- Notation: C = attempt to establish comparability; O = observation of the dependent variable; X = administration of the experimental stimulus.
- Disadvantage - unless the subjects are randomized, we can't be confident that the intervention is the only difference between the two groups. The strength of the conclusions rests on the level of comparability of the groups.
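A minimal Python sketch of the comparison this design supports, assuming one pretest and one posttest score per group; all scores are invented for illustration.

experimental = {"pre": 40.0, "post": 52.0}   # group that receives the intervention (X)
comparison   = {"pre": 42.0, "post": 45.0}   # comparable, but not randomized, control group

change_exp = experimental["post"] - experimental["pre"]
change_ctl = comparison["post"] - comparison["pre"]

print(f"Change in experimental group: {change_exp:+.1f}")
print(f"Change in comparison group:   {change_ctl:+.1f}")
print(f"Difference in changes:        {change_exp - change_ctl:+.1f}")
# Without randomization, this difference is only as credible as the claim that
# the two groups were truly comparable to begin with.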
3. Multiple time-series design - a combination of 1 and 2: a time-series design involving the observation of one or more comparable control groups.
Preferable to 1 and 2, because it involves the examination of two or more groups over time. However, the weaknesses of each may affect this design also.
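A minimal Python sketch of the multiple time-series comparison: invented series of the same kind as above, tracked in parallel for a program site and a comparable non-program site.

program    = [50, 51, 52, 50, 60, 61, 62, 63]   # intervention after the 4th observation
comparison = [49, 50, 51, 50, 51, 52, 51, 52]   # comparable site with no intervention
cut = 4

def change(series):
    before, after = series[:cut], series[cut:]
    return sum(after) / len(after) - sum(before) / len(before)

print(f"Change at program site:    {change(program):+.1f}")
print(f"Change at comparison site: {change(comparison):+.1f}")
print(f"Difference in changes:     {change(program) - change(comparison):+.1f}")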
Qualitative methods in evaluation research
- SERL example - "Georgia HIV/AIDS Comprehensive Needs Assessment" (1998).
- Survey of services provided by HIV/AIDS organizations and perceived needs for services (quantitative).
- Analysis of HIV/AIDS surveillance data and other relevant existing databases (quantitative).
- Focus groups - consumers of services and high-risk groups (qualitative).
- Key informant interviews - views on service provision and needs among key stakeholders in the care system (qualitative).
Why are outcome evaluation research results often ignored?
- 1. Results may not be presented in a way that non-researchers (e.g., the program administrators) can understand.
- Example of an SERL report specifically designed to address this problem.
- 2. Results may contradict deeply held beliefs (e.g., Nixon's pornography commission).
- 3. Persons with vested interests may act to prevent implementation of the results.
Examples of ethical issues in evaluation research
- Who should receive the intervention?
- Is it ethical to deprive a control group of the intervention?
- To what extent should the program administrators influence the research design?
- Should an intervention be evaluated by persons connected to the organization?
- Should the evaluator agree to a contract in which the results are not disseminated beyond the organization being evaluated?
- Is it ethical to evaluate programs that have been developed without the participation of those who are affected by them?
- Should you, as a professional evaluation researcher, participate in a project that conflicts with your personal ethics and that you feel does not contribute to the greater good of society?
- Web link for evaluation research ethics: the American Evaluation Association's "Guiding Principles for Evaluators."
Important direction in evaluation research
- Participatory evaluation - evaluation that involves all stakeholders (persons with vested interests) in the design and implementation of the project and in the process of putting results to use.
- The American Evaluation Association has a Collaborative, Participatory, and Empowerment Evaluation topical interest group.
- Developing collaborative roles between researchers and participants, from the Annie E. Casey Foundation.
- Participatory research, from the World Bank.
- Internet resources for participatory action research.