Title: Single Variable, Independent-Groups Designs
1. Single Variable, Independent-Groups Designs
2. Experimental Designs
- Test one or more hypotheses about causal effects of the independent variable(s) (IV)
- Include at least two levels of the IV
- Randomly assign participants to conditions
- Include specific procedures for testing hypotheses
- Include controls for the major threats to internal validity
3. Single-variable, independent-groups designs
- Randomized, posttest-only, control-group design
- Randomized, pretest-posttest, control-group design
- Multilevel, completely randomized, between-subjects designs
- Solomon's four-group design
4. Randomized, Posttest-Only, Control-Group Design
- Design:
    R   Group A   Treatment      Posttest
    R   Group B   No treatment   Posttest
- Key element is random assignment to groups! (see the sketch below)
- Random assignment controls for selection
- Other confounding variables are controlled by comparing the treatment and no-treatment groups
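As an illustration of the random-assignment step, here is a minimal Python sketch that shuffles a hypothetical participant list and splits it into treatment and control groups; the IDs and group size are assumptions, not from the source.

```python
import random

# Hypothetical pool of 20 participants
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)       # randomize the order
group_a = participants[:10]        # Group A: treatment
group_b = participants[10:]        # Group B: no treatment (control)

print("Group A (treatment):", group_a)
print("Group B (control):  ", group_b)
```

Because group membership depends only on the shuffle, no participant characteristic can systematically favor either condition, which is what controls for selection.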
5. Randomized, Pretest-Posttest, Control-Group Design
- Design:
    R   Group A   Pretest   Treatment      Posttest
    R   Group B   Pretest   No treatment   Posttest
- Adding a pretest allows us to quantify the amount of change following treatment (see the sketch below)
- Also allows us to verify that the groups were equal initially
- A strong basic research design, with excellent control over confounding
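A minimal sketch, with simulated (hypothetical) scores, of how the pretest is used: one t-test checks that the groups were equal at pretest, and a second compares the amount of pretest-to-posttest change between groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pretest and posttest scores for the two randomized groups
pre_a = rng.normal(50, 10, 30)            # Group A (treatment) pretest
post_a = pre_a + rng.normal(8, 5, 30)     # assumed ~8-point treatment gain
pre_b = rng.normal(50, 10, 30)            # Group B (control) pretest
post_b = pre_b + rng.normal(0, 5, 30)     # no treatment effect

# Were the groups equal initially?
print("Pretest check:", stats.ttest_ind(pre_a, pre_b))

# How much change followed treatment, relative to the control group?
print("Change scores:", stats.ttest_ind(post_a - pre_a, post_b - pre_b))
```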
6. Multilevel, Randomized, Between-Subjects Design
- Design:
    R   Group 1   Pretest   Treatment 1   Posttest
    R   Group 2   Pretest   Treatment 2   Posttest
    ...
    R   Group N   Pretest   Treatment N   Posttest
- May or may not include a pretest
- Multi-group extension of the basic experimental design
7. Solomon's Four-Group Design
- Design:
    R   Group A   Pretest   Treatment      Posttest
    R   Group B   Pretest   No treatment   Posttest
    R   Group C             Treatment      Posttest
    R   Group D             No treatment   Posttest
- Combines two basic experimental designs
- Allows us to assess whether there is an interaction between the treatment and the pretest
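One common way to test that interaction is a 2 x 2 analysis of the posttest scores, with treatment (yes/no) and pretest-administered (yes/no) as factors. The sketch below uses simulated data and the statsmodels OLS/ANOVA interface; the group means, sample size, and any treatment-by-pretest effect are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 25  # assumed participants per group

# Simulated posttest scores for the four Solomon groups (A, B, C, D)
df = pd.DataFrame({
    "treated":   [1] * n + [0] * n + [1] * n + [0] * n,
    "pretested": [1] * n + [1] * n + [0] * n + [0] * n,
    "posttest":  np.concatenate([
        rng.normal(60, 8, n),   # A: pretest + treatment
        rng.normal(52, 8, n),   # B: pretest, no treatment
        rng.normal(58, 8, n),   # C: treatment only
        rng.normal(50, 8, n),   # D: neither
    ]),
})

# The C(treated):C(pretested) row tests the treatment-by-pretest interaction
model = smf.ols("posttest ~ C(treated) * C(pretested)", data=df).fit()
print(anova_lm(model, typ=2))
```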
8. Statistical Analysis Issues
- If the data are nominal, use chi-square
- If the data are ordinal, use the Mann-Whitney U-test
- If the data are interval or ratio:
  - If there are only two groups, a t-test of the posttest measures will test the hypothesis
  - More complex designs will require an ANOVA
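A short sketch of this decision rule for the two-group case, using SciPy. The helper name `compare_two_groups` and the example data are hypothetical; the point is only which test goes with which level of measurement.

```python
from scipy import stats

def compare_two_groups(a, b, level):
    """Pick a two-group test from the DV's level of measurement (sketch).

    'nominal' expects a and b to be rows of a contingency table (counts);
    'ordinal' and 'interval' expect raw scores for each group.
    """
    if level == "nominal":
        chi2, p, dof, _ = stats.chi2_contingency([a, b])
        return chi2, p
    if level == "ordinal":
        return stats.mannwhitneyu(a, b)
    # interval or ratio: independent-samples t-test on the posttest scores
    return stats.ttest_ind(a, b)

# Hypothetical interval/ratio posttest scores from two groups
print(compare_two_groups([3, 4, 2, 5, 3], [7, 8, 6, 9, 7], "interval"))
```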
9. Data analysis for a two-group experiment
- One population, two samples
- Random error
  - Affects individual scores
  - May cause group means to differ
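A quick simulation of the "one population, two samples" point: both samples below are drawn from the same assumed population (mean 100, SD 15), so any difference between the sample means is due to random error alone.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two samples from the SAME population
sample_1 = rng.normal(100, 15, 20)
sample_2 = rng.normal(100, 15, 20)

# The means differ slightly even though no treatment was applied
print(sample_1.mean(), sample_2.mean())
```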
10. Data analysis for a two-group experiment
- Within-group variability: due to random error
- Between-group variability: due to random error + treatment effect
11. Data analysis for a two-group experiment
- t-test: is the difference between the two group means larger than would be expected by random error alone?
- t = (Group 1 mean - Group 2 mean) / (standard error of the difference)
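A hand computation of that ratio, checked against scipy.stats.ttest_ind; the posttest scores are made up for illustration.

```python
import numpy as np
from scipy import stats

group_1 = np.array([3, 5, 4, 6, 5, 4])   # hypothetical posttest scores
group_2 = np.array([7, 9, 8, 6, 9, 8])

# Pooled-variance standard error of the difference between means
n1, n2 = len(group_1), len(group_2)
pooled_var = ((n1 - 1) * group_1.var(ddof=1) +
              (n2 - 1) * group_2.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(pooled_var * (1 / n1 + 1 / n2))

# t = (mean difference) / (standard error of the difference)
t = (group_1.mean() - group_2.mean()) / se_diff
print("by hand:", t)
print("scipy:  ", stats.ttest_ind(group_1, group_2))  # should match
```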
12. Data analysis for a multi-group experiment
- Within-groups (error) variance
- Between-groups (treatment) variance
13. Data analysis for a multi-group experiment
- F = MS_Between / MS_Within
  - MS_Between = MS Treatment (MS_T)
  - MS_Within = MS Error (MS_E)
- F = (random error + possible treatment effect) / (random error)
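The same ratio computed by hand for three hypothetical groups and checked against scipy.stats.f_oneway.

```python
import numpy as np
from scipy import stats

groups = [np.array([3, 4, 2, 5, 3]),     # hypothetical scores, group 1
          np.array([6, 7, 5, 8, 6]),     # group 2
          np.array([9, 10, 8, 11, 9])]   # group 3

grand_mean = np.concatenate(groups).mean()
k = len(groups)
n_total = sum(len(g) for g in groups)

# Between-groups (treatment) mean square
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-groups (error) mean square
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n_total - k)

print("F by hand:", ms_between / ms_within)
print("scipy:    ", stats.f_oneway(*groups))  # should agree
```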
14. Example
- Effect of alcohol on hand-eye coordination
- Three levels of the IV: 0 shots, 3 shots, 6 shots
- DV: number of errors on a simulated driving task
15. Possible outcomes
- A: 0 shots, 3 mistakes; 3 shots, 10 mistakes; 6 shots, 10 mistakes
- B: 0 shots, 3 mistakes; 3 shots, 3 mistakes; 6 shots, 10 mistakes
- C: 0 shots, 3 mistakes; 3 shots, 6 mistakes; 6 shots, 9 mistakes
- Each of these outcomes could produce a significant overall F-test
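A simulation of the three patterns (group means taken from outcomes A, B, and C above; the per-group n and within-group SD are assumptions), showing that each pattern can produce a significant overall F.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 20    # assumed drivers per condition
sd = 3    # assumed within-group SD of driving errors

patterns = {           # mean mistakes at 0, 3, and 6 shots
    "A": (3, 10, 10),
    "B": (3, 3, 10),
    "C": (3, 6, 9),
}

for label, means in patterns.items():
    groups = [rng.normal(m, sd, n) for m in means]
    f, p = stats.f_oneway(*groups)
    print(f"Outcome {label}: F = {f:.1f}, p = {p:.4f}")
```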
16. Specific Means Comparisons
- A significant F-test means that at least one group is significantly different from at least one other group
- If you have more than two groups, you have to do follow-up tests (contrasts) to see which groups differ
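One simple form of follow-up contrast is all pairwise t-tests with a Bonferroni-adjusted criterion, sketched below with hypothetical error counts; Bonferroni is only one of several acceptable correction methods (Tukey's HSD is another common choice).

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical error counts for the three alcohol conditions
groups = {"0 shots": np.array([2, 3, 4, 3, 3]),
          "3 shots": np.array([6, 5, 7, 6, 6]),
          "6 shots": np.array([9, 10, 8, 9, 9])}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)   # Bonferroni adjustment for 3 contrasts

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, "
          f"significant at alpha = {alpha:.3f}: {p < alpha}")
```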