Title: Group Experimental Research Designs I
"To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of."
- Ronald Fisher, evolutionary biologist, geneticist, and statistician
The Need for Experiments
- Purpose: To establish a strong argument for a cause-and-effect relationship between two variables; more specifically, that a change in one variable directly causes a change in another.
- Characteristics:
- Direct manipulation of the independent variable.
- Control of extraneous variables.
The First Clinical Trial (1747)
- Sailors deprived of fresh foods get scurvy: weak, depressed, brown spots, bleeding gums.
- James Lind's theory: putrefaction is preventable by acids such as vinegar.
- He tested six treatments; oranges and lemons worked (fresh, not boiled/bottled).
- We now know scurvy is actually a vitamin C deficiency; vitamin C would not be discovered for another 150 years.
Forms of Validity
- Validity: How meaningful, useful, and appropriate our conclusions are. It is not a characteristic of a test per se, but rather of our use of the results of the test.
- Internal Validity: The extent to which the independent variable, and not other extraneous variables, produces the observed change in the dependent variable.
- External Validity: The extent to which the results of a study can be generalized to other subjects, settings, and times.
Experimental Design Notation
- R: Random selection or assignment
- O: Observation (often a test)
- X: Experimental treatment
- (no X): Control treatment
- A, B: Treatment groups
Weak Experimental Designs
- Single group, posttest only:
  A  X  O
- Single group, pretest/posttest:
  A  O  X  O
- Non-equivalent groups, posttest only:
  A  X  O
  B     O
Strong Experimental Design
- Randomized Pretest-Posttest Control Group Design (subjects are randomly assigned to groups):
  R  O  X  O   (Experimental group: pretest, treatment, posttest)
  R  O     O   (Control group: pretest, posttest)
Strong Experimental Design
- Why do we use a control group?
- To help reduce threats to internal validity.
This is not required of experiments, but is very
important.
Strong Experimental Design
- Why do we randomly assign subjects?
- To help ensure equivalence between the two groups, on the dependent measures as well as all others.
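As a concrete illustration of random assignment, here is a minimal Python sketch; the subject labels and group sizes are invented for the example, not taken from any particular study.

```python
import random

# Hypothetical pool of subjects who agreed to participate.
subjects = [f"subject_{i:02d}" for i in range(1, 21)]

random.seed(42)           # fixed seed only so the example is reproducible
random.shuffle(subjects)  # random order removes any systematic ordering

half = len(subjects) // 2
experimental_group = subjects[:half]  # will receive the treatment (X)
control_group = subjects[half:]       # will receive an activity unrelated to the DV

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```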
Strong Experimental Design
- Why do we use a pretest?
- To test for equivalence of the groups at the start.
- For baseline data to calculate the pretest/posttest delta.
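One common way to test for equivalence at the start is to compare the groups' pretest means. The sketch below uses an independent-samples t-test from SciPy on made-up pretest scores; it is only an illustration, not a prescribed analysis.

```python
from scipy import stats

# Hypothetical pretest scores collected before the treatment begins.
pretest_experimental = [72, 68, 75, 70, 66, 74, 71, 69]
pretest_control      = [70, 73, 67, 71, 69, 72, 68, 70]

t, p = stats.ttest_ind(pretest_experimental, pretest_control)
print(f"t = {t:.2f}, p = {p:.3f}")
# A large p-value gives no evidence of a pretest difference, which is
# consistent with (but does not prove) equivalence of the groups.
```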
Strong Experimental Design
- What treatments do the subjects get?
- The experimental group gets the treatment, of course.
- The control group gets something unrelated to the DV.
Strong Experimental Design
- Why do we use a posttest?
- To measure the delta between the pretest and posttest.
- To measure the delta between groups on the posttest.
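Both deltas can be computed directly from the group means. A minimal sketch, using invented scores:

```python
from statistics import mean

# Hypothetical pretest/posttest scores for each group.
exp_pre,  exp_post  = [70, 68, 75, 71], [82, 79, 88, 80]
ctrl_pre, ctrl_post = [71, 69, 74, 70], [73, 70, 76, 71]

# Delta 1: pretest-to-posttest gain within each group.
exp_gain  = mean(exp_post)  - mean(exp_pre)
ctrl_gain = mean(ctrl_post) - mean(ctrl_pre)

# Delta 2: difference between the groups on the posttest.
posttest_diff = mean(exp_post) - mean(ctrl_post)

print(f"Experimental gain: {exp_gain:.1f}")
print(f"Control gain:      {ctrl_gain:.1f}")
print(f"Posttest difference (exp - ctrl): {posttest_diff:.1f}")
```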
Strong Experimental Design
- Bonus: Include a delayed retention test.
- To determine whether the effects are lasting or whether they fade quickly.
  R  O  X  O  O   (Experimental group: pretest, treatment, posttest, delayed posttest)
  R  O     O  O   (Control group: pretest, posttest, delayed posttest)
  (Subjects are randomly assigned to the groups.)
Experiment-Specific Information
- Who are the subjects? (Selection)
- Representative of the population of interest
- This relates to Threats to External Validity
- What is the dependent variable?
- How is it operationalized/measured?
- Be specific. Can it be put on a number line?
- What are the treatments?
- What does the experimental group get?
- What does the control group get?
Threats to Internal Validity
Objective Evidence of Cause and Effect
- You claim that the difference between the Control Group and Experimental Group posttest scores is the result of your treatment; others will argue that it was actually due to some other cause. ("Because of this!" "Because of that!")
The Classic Counter-Argument
- Isn't it possible that the difference in outcomes you saw between the control group and the experimental group was not a result of the treatment, but rather was the result of ____?
Threats to Internal Validity
- 1. History (Coincidental Events)
- 2. Experimental Mortality (Attrition)
- 3. Statistical Regression to the Mean
- 4. Maturation
- 5. Instrumentation
- 6. Testing
- 7. Selection (really Assignment)
- 8. Diffusion
- 9. Compensatory Rivalry
- 10. Compensatory Equalization
- 11. Demoralization
- Mnemonic: HERMITS DRED
History (Coincidental Events)
- Events outside the experimental treatments that occur at the time of the study and that impact the groups differently.
- Example: CA/NY test anxiety study
- Strategies:
- Use a control group.
- Limit the duration of the study.
- Use groups that are close in time, space, etc.
- Plan carefully. (What else is going on?)
Experimental Mortality (Attrition)
- When subjects drop out during the course of the study, and those that drop out are different in some important way from those that remain.
- Example: Van Schaack dissertation at FRA
- Strategies:
- Use a control group.
- Set clear expectations and get commitment.
- Keep the study short and relatively painless.
- Explain how those who dropped out are not
different.
Statistical Regression (to the Mean)
- When subjects are chosen to participate in an experiment because of their extreme scores on a test (high or low), they are likely to score closer to the mean on a retest.
- Example: "Rewards fail, punishment works"
- Strategies:
- Use a control group.
- Consider the first test a Selection Test and then give the selected group a new pretest.
- Use the most reliable test possible.
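Regression to the mean is easy to demonstrate by simulation: give every subject a stable true ability plus random measurement error on each test, select the highest scorers on the first test, and look at their retest mean. The sketch below uses arbitrary numbers purely for illustration.

```python
import random
from statistics import mean

random.seed(1)
n = 1000
true_ability = [random.gauss(100, 10) for _ in range(n)]

def noisy_test(ability):
    # Each test administration adds independent measurement error.
    return [a + random.gauss(0, 10) for a in ability]

test1 = noisy_test(true_ability)
test2 = noisy_test(true_ability)

# Select the 50 subjects with the most extreme (highest) test 1 scores.
top = sorted(range(n), key=lambda i: test1[i], reverse=True)[:50]

print(f"Selected group, test 1 mean: {mean(test1[i] for i in top):.1f}")
print(f"Selected group, test 2 mean: {mean(test2[i] for i in top):.1f}")
# The retest mean falls back toward 100 even though nothing changed,
# because part of each extreme test 1 score was just lucky noise.
```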
Maturation
- Subjects naturally mature (physically, cognitively, or emotionally) during the course of an experiment, especially long experiments.
- Example: "Run Fast!" training program
- Strategies:
- Use a control group.
- Keep the study as short as possible.
- Investigate beforehand the anticipated effects
of maturation. (What natural changes can you
expect?)
Instrumentation
- Differences between the pretest and posttest may be the result of a lack of reliability in measurement.
- Example: Fatigue and practice effects
- Strategies:
- Use a control group.
- Increase the reliability of your observations.
(See the next slide for specific strategies.)
Increase Reliability of Observations by:
- Targeting specific behaviors
- Using low inference measures
- Using multiple observers
- Training the observers
- Keeping the observers blind to conditions
- Striving for inter-rater reliability
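As a rough illustration of checking inter-rater reliability, the sketch below computes percent agreement and Cohen's kappa for two observers who coded the same intervals; the codes and category names are made up.

```python
from collections import Counter

# Hypothetical categorical codes assigned by two trained observers
# to the same 12 observation intervals.
rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task",
           "on-task", "off-task", "on-task", "on-task", "on-task", "off-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task",
           "on-task", "on-task", "on-task", "on-task", "on-task", "off-task"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: probability both raters pick the same code at random,
# based on each rater's own code frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n)
               for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```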
Testing
- A subject's test score may change not as a result of the treatment, but rather as a result of becoming test-wise.
- Example: SAT test prep courses
- Strategies:
- Use a control group.
- Use a non-reactive test (one that is difficult to get good at through simple practice).
- Conduct only a few tests spaced far apart in time (pre, post, delayed post).
Selection (should be Assignment)
- When subjects are not randomly assigned to conditions, the differences in outcomes may be the result of differences that existed at the beginning of the study.
- Example: Algebra software experiment
- Strategies:
- Avoid intact groups; randomly assign subjects.
- Conduct a pretest to ensure equivalence of groups.
- If there are differences, assign the group that did better to the control condition, so that any built-in advantage favors the control group rather than the treatment.
Diffusion
- Members of the control group may receive some of the treatment by accident.
- Example: Red Bull and motivation
- Strategies:
- Keep the two groups separate.
- Ask participants to keep quiet about the experiment.
- Make it difficult for participants to intentionally or accidentally share the treatment.
Compensatory Rivalry (John Henry)
- The group that does not receive the treatment may feel disadvantaged and work extra hard to show that they can perform as well as the treatment group.
- Example: The Bad News Bears
- Strategies:
- Keep the two groups separate.
- Ask participants to keep quiet about the experiment.
- Give the control group a meaningful experience unrelated to the dependent variable.
Compensatory Equalization
- Someone close to the experiment may feel that the control group is being cheated and should receive something to make up for the lack of treatment.
- Example: Empathetic teacher/physician
- Strategies:
- Educate the team about the importance of the study.
- Monitor treatment fidelity.
- Give the control group a meaningful experience unrelated to the dependent variable.
Demoralization
- The opposite of Compensatory Rivalry: the control group is demoralized because they were not chosen to receive the treatment and, as a result, give up.
- Example: _________
- Strategies:
- Keep the two groups separate.
- Ask participants to keep quiet about the experiment.
- Give the control group a meaningful experience unrelated to the dependent variable.
Bonus: Sampling Fluctuation
- The difference in outcomes observed was not a result of the treatment, but rather was the result of sampling fluctuation, the normal variability seen when sampling.
- Example: Every experiment with two groups
- Strategies:
- Conduct a test of statistical significance to determine the likelihood that the differences observed were due to sampling fluctuation alone.
- Make sure to use large sample sizes.
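As an illustration of such a significance test, the sketch below runs an independent-samples t-test (via SciPy) on hypothetical posttest scores; the appropriate test in practice depends on the actual design and data.

```python
from scipy import stats

# Hypothetical posttest scores for the two groups.
experimental = [82, 79, 88, 80, 85, 77, 84, 81]
control      = [73, 70, 76, 71, 78, 69, 74, 72]

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value says the observed difference would be unlikely if
# sampling fluctuation alone were at work; it does not by itself rule
# out the other threats to internal validity listed above.
```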