Title: Random Effects
1. Random Effects and Repeated Measures
- Alternatives to Fixed Effects Analyses
2. Questions
- What is the difference between fixed- and random-effects in terms of treatments?
- How are F tests with random effects different than with fixed effects?
- Describe a concrete example of a randomized block design. You should have 1 factor as the blocking factor and one other factor as the factor of main interest.
3. Questions (2)
- How is a repeated measures design different from a totally between-subjects design in the collection of the data?
- How does the significance testing change from a totally between-subjects design to one in which one or more factors are repeated measures (just the general idea, you don't need to show actual F ratios or computations)?
- Describe one argument for using repeated measures designs and one argument against using such designs (or describe when you would and would not want to use repeated measures).
4. Fixed Effects Designs
- All treatment conditions of interest are included in the study
- Everyone in a cell gets an identical stimulus (treatment, IV combination)
- Interest is in specific means
- Expected mean squares are (relatively) simple; F tests are all based on a common error term.
5. Random Effects Designs
- Treatment conditions are sampled; not all conditions of interest are included.
- Replications of the experiment would get different treatments
- Interest is in the variance produced by an IV rather than in means
- Expected mean squares are relatively complex; the denominator for F changes depending on the effect being tested (a model sketch follows below).
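
As a minimal sketch of the contrast in model form (standard one-way notation; the symbols below are this sketch's, not the slides'):

    Fixed:   Y_{ij} = \mu + \alpha_j + e_{ij}, \qquad \sum_j \alpha_j = 0
             (the J conditions exhaust the conditions of interest)

    Random:  Y_{ij} = \mu + a_j + e_{ij}, \qquad a_j \sim N(0,\sigma^2_A), \; e_{ij} \sim N(0,\sigma^2_e)
             (the J conditions are a sample; interest is in \sigma^2_A)

In the fixed case the question is about the particular means; in the random case it is about the size of the variance component.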
6. Fixed vs. Random

Random Examples                 Random Conditions        Fixed Conditions      Fixed Examples
Persuasiveness of commercials   Treatments sampled       All of interest       Sex of participant
Experimenter effect             Replications different   Replications same     Drug dosage
Impact of team members          Variance due to IV       Means due to IV       Training program effectiveness
7. Single Factor Random
- The expected mean squares and F test for the single random factor are the same as those for the single-factor fixed-effects design.
8. Experimenter Effects (Hays Table 13.4.1)

Experimenter    1      2      3      4      5
              5.8    6.0    6.3    6.4    5.7
              5.1    6.1    5.5    6.4    5.9
              5.7    6.6    5.7    6.5    6.5
              5.9    6.5    6.0    6.1    6.3
              5.6    5.9    6.1    6.6    6.2
              5.4    5.9    6.2    5.9    6.4
              5.3    6.4    5.8    6.7    6.0
              5.2    6.3    5.6    6.0    6.3
Mean          5.50   6.21   5.90   6.33   6.16
9. SAS Output

Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              4       3.48150000    0.87037500     10.72   <.0001
Error             35       2.84250000    0.08121429
Corrected Total   39       6.32400000
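
A minimal SAS sketch that would produce output in this form, assuming the ratings are read into a long data set with variables exptr and rating (the names are illustrative):

    data hays;
      input exptr rating @@;
      datalines;
    1 5.8  1 5.1  1 5.7  1 5.9  1 5.6  1 5.4  1 5.3  1 5.2
    2 6.0  2 6.1  2 6.6  2 6.5  2 5.9  2 5.9  2 6.4  2 6.3
    3 6.3  3 5.5  3 5.7  3 6.0  3 6.1  3 6.2  3 5.8  3 5.6
    4 6.4  4 6.4  4 6.5  4 6.1  4 6.6  4 5.9  4 6.7  4 6.0
    5 5.7  5 5.9  5 6.5  5 6.3  5 6.2  5 6.4  5 6.0  5 6.3
    ;
    run;

    proc glm data=hays;      /* single random factor: same F test as the fixed case */
      class exptr;
      model rating = exptr;
    run;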
10. Random Effects Significance Tests (A and B random, n per cell)

Source   E(MS)                      F             df
A        σ²e + nσ²AB + nKσ²A        MSA / MSAB    J-1, (J-1)(K-1)
B        σ²e + nσ²AB + nJσ²B        MSB / MSAB    K-1, (J-1)(K-1)
AxB      σ²e + nσ²AB                MSAB / MSe    (J-1)(K-1), JK(n-1)
Error    σ²e
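
For reference, a sketch of the model behind this table in standard notation (the symbols are this sketch's, not the slide's). Because the interaction variance component appears in the expected mean squares of both main effects, MSAB is the denominator for the A and B tests:

    Y_{ijk} = \mu + a_j + b_k + (ab)_{jk} + e_{ijk}, \qquad
    a_j \sim N(0,\sigma^2_A), \; b_k \sim N(0,\sigma^2_B), \;
    (ab)_{jk} \sim N(0,\sigma^2_{AB}), \; e_{ijk} \sim N(0,\sigma^2_e)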
11. Why the Funky MS?
- Treatment effects for A, B, and AxB are the same for fixed and random in the population of treatments.
- In fixed designs we have the population; in random designs we just have a sample.
- Therefore, in a given (random) study, the interaction effects need not sum to zero.
- The AxB effects appear in the expected mean squares for the main effects.
12. Applications of Random Effects
- Reliability and Generalizability
- How many judges do I need to get a reliability of .8? (See the sketch after this list.)
- How well does this score generalize to a particular universe of scores?
- Intraclass correlations (ICCs)
- Estimated variance components
- Meta-analysis
- Control (Randomized Blocks and Repeated Measures)
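
A sketch of the "how many judges" question, assuming a one-way random-effects model for judges and using this sketch's notation (ρ1 is the single-judge intraclass correlation, ρk the reliability of the mean of k judges):

    \rho_1 = \frac{\sigma^2_{targets}}{\sigma^2_{targets} + \sigma^2_{error}}, \qquad
    \rho_k = \frac{k\,\rho_1}{1 + (k-1)\,\rho_1}, \qquad
    k = \frac{\rho_k\,(1-\rho_1)}{\rho_1\,(1-\rho_k)}

For example, if a single judge has ρ1 = .40, reaching ρk = .80 requires k = (.80)(.60) / (.40)(.20) = 6 judges.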
13. Review
- What is the difference between fixed- and random-effects in terms of treatments?
- How are F tests with random effects different than with fixed effects?
14. Randomized Blocks Designs
- A block is a matched group of participants who are similar or identical on a nuisance variable.
- Suppose we want to study the effect of a workbook on scores on a test in research methods. A major source of nuisance variance is cognitive ability.
- We can block students on cognitive ability.
15. Randomized Blocks (2)
- Say 3 blocks (slow, average, fast learners).
- Within each block, randomly assign to workbook or control.
- The resulting design looks like an ordinary factorial (3x2), but people are not randomly assigned to blocks.
- The block factor is sampled, i.e., random. The F test for the workbook is more powerful because we subtract out the nuisance variance.
- Unless blocks are truly categorical, a better design is analysis of covariance, described after we introduce regression.
16. Randomized Blocks (3)

Source               E(MS)                          F                       df
A workbook (fixed)   σ²e + nσ²AB + nK(Σα²)/(J-1)    MSA / MSAB              J-1, (J-1)(K-1)
B learner (random)   σ²e + nJσ²B                    If desired, MSB / MSe
AxB                  σ²e + nσ²AB                    MSAB / MSe              (J-1)(K-1), JK(n-1)
Error                σ²e

Look up designs like this to get the right error terms.
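
A minimal SAS sketch of this analysis, assuming a long data set rb with variables score, workbook, and block (all names illustrative); the RANDOM statement with the TEST option asks GLM to print the expected mean squares and to test the workbook effect against the workbook-by-block interaction:

    proc glm data=rb;
      class workbook block;
      model score = workbook block workbook*block;
      random block workbook*block / test;   /* E(MS) table plus tests with the proper error terms */
    run;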
17. Review
- Describe a concrete example of a randomized block design. You should have 1 factor as the blocking factor and one other factor as the factor of main interest.
- Describe a study in which Depression is a blocking factor.
18. Repeated Measures Designs
- In a repeated measures design, participants appear in more than one cell.
- Painfree study
- Sports instruction
- Commonly used in psychology
19. Pros & Cons of RM

Pro                                                    Con
Individuals serve as their own control (more power)    Carry-over effects
May be cheaper to run                                  Participant sees the design (demand characteristics)
Useful when participants are scarce
20. RM: Participant Factor

Source                   df            E(MS)                        F
Between Subjects         K-1           σ²e + Jσ²S                   No test
Within Subjects
Treatments               J-1           σ²e + σ²ST + K(Σα²)/(J-1)    MS_Treatments / MS_SxT
Subjects x Treatments    (J-1)(K-1)    σ²e + σ²ST                   No test
Total                    JK-1
21. Drugs on Reaction Time

The order of drugs is random. All subjects get all drugs. Interest is in the drug effect.
Person Drug 1 Drug 2 Drug 3 Drug 4 Mean
1 30 28 16 34 27
2 14 18 10 22 16
3 24 20 18 30 23
4 38 34 20 44 34
5 26 28 14 30 24.5
Mean 26.4 25.6 15.6 32 24.9
Drug is fixed; person is random. This is a 1-factor repeated measures design. Notice there is 1 person per cell. We can get 3 sums of squares: rows (people), columns (drugs), and the residual (interaction plus error).
22. Total SS
Person Drug 1 Drug 2 Drug 3 Drug 4 Mean
1 30 28 16 34 27
2 14 18 10 22 16
3 24 20 18 30 23
4 38 34 20 44 34
5 26 28 14 30 24.5
Mean 26.4 25.6 15.6 32 24.9
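
In symbols, the total sum of squares is the sum of squared deviations of all 20 scores from the grand mean (24.9), which gives the 1491.8 used in the summary below:

    SS_{Total} = \sum_i \sum_j (Y_{ij} - \bar{Y}_{..})^2 = \sum_i \sum_j (Y_{ij} - 24.9)^2 = 1491.8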
23. Drug SS (M = drug mean, DD = squared deviation of M from the grand mean, 24.9)

Person Drug M    DD      Person Drug M    DD
1 1 26.4 2.25 1 3 15.6 86.49
2 1 26.4 2.25 2 3 15.6 86.49
3 1 26.4 2.25 3 3 15.6 86.49
4 1 26.4 2.25 4 3 15.6 86.49
5 1 26.4 2.25 5 3 15.6 86.49
1 2 25.6 0.49 1 4 32 50.41
2 2 25.6 0.49 2 4 32 50.41
3 2 25.6 0.49 3 4 32 50.41
4 2 25.6 0.49 4 4 32 50.41
5 2 25.6 0.49 5 4 32 50.41
Total 698.20
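
Summing the DD column over all 20 rows (each drug's squared deviation appears once per person) gives the drug sum of squares:

    SS_{Drugs} = \sum_i \sum_j (\bar{Y}_{.j} - \bar{Y}_{..})^2 = 5\,(2.25 + 0.49 + 86.49 + 50.41) = 698.2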
24. Person SS (M = person mean, DD = squared deviation of M from the grand mean, 24.9)

Person Drug M    DD      Person Drug M    DD
1 1 27 4.41 1 3 27 4.41
2 1 16 79.21 2 3 16 79.21
3 1 23 3.61 3 3 23 3.61
4 1 34 82.81 4 3 34 82.81
5 1 24.5 0.16 5 3 24.5 0.16
1 2 27 4.41 1 4 27 4.41
2 2 16 79.21 2 4 16 79.21
3 2 23 3.61 3 4 23 3.61
4 2 34 82.81 4 4 34 82.81
5 2 24.5 0.16 5 4 24.5 0.16
Total 680.8
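
Likewise, summing this DD column over all 20 rows (each person's squared deviation appears once per drug) gives the person sum of squares:

    SS_{People} = \sum_i \sum_j (\bar{Y}_{i.} - \bar{Y}_{..})^2 = 4\,(4.41 + 79.21 + 3.61 + 82.81 + 0.16) = 680.8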
25. Summary

Total = 1491.8; Drugs = 698.2; People = 680.8.
Residual = Total - (Drugs + People) = 1491.8 - (698.2 + 680.8) = 112.8

Source                       SS        df   MS       F
Between People                680.8     4   (nuisance variance)
Within People (by summing)    811.0    15
Drugs                         698.2     3   232.73   24.76
Residual                      112.8    12     9.40
Total                        1491.8    19

F crit (.05) = 3.95
26. SAS
Run the same problem using SAS.
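
A minimal sketch of that SAS run, assuming the data are entered wide (one row per person) and reshaped to long form; the data set and variable names (drugs, drugslong, person, drug, time) are illustrative:

    data drugs;
      input person drug1-drug4;
      datalines;
    1 30 28 16 34
    2 14 18 10 22
    3 24 20 18 30
    4 38 34 20 44
    5 26 28 14 30
    ;
    run;

    data drugslong;                /* one row per person x drug */
      set drugs;
      array d{4} drug1-drug4;
      do drug = 1 to 4;
        time = d{drug};
        output;
      end;
      keep person drug time;
    run;

    proc glm data=drugslong;
      class person drug;
      model time = drug person;    /* the residual is person x drug, the RM error term */
    run;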
27. 2 Factors, 1 Repeated

      Subject   B1     B2     B3     B4     M
A1    1          0      0      5      3     2
      2          3      1      5      4     3.25
      3          4      3      6      2     3.75
A2    4          4      2      7      8     5.25
      5          5      4      6      6     5.25
      6          7      5      8      9     7.25
M               3.83   2.5    6.17   5.33   4.46

DV = errors in setting control dials. IV (A) is dial calibration, a between-subjects factor; IV (B) is dial shape, a within-subjects factor. The order of observation is randomized over dial shape.
28. SAS Code

data d1;
  input i1-i4;
  cards;
0 0 5 3
3 1 5 4
4 3 6 2
4 2 7 8
5 4 6 6
7 5 8 9
;
data d2;
  set d1;
  array z i1-i4;
  do over z;
    if _N_ le 3 then a = 1;
    if _N_ gt 3 then a = 2;
    sub = _N_;
    b = _I_;
    y = z;
    output;
  end;
proc print;
proc glm;
  class a b sub;
  model y = a b a*b sub(a) sub*b;
  test h=a e=sub(a) / htype=1 etype=1;
  test h=b a*b e=sub*b / htype=1 etype=1;
run;
29. Summary

Note that different factors are tested with different error terms.

Source                        SS      df   MS      F
Between people                68.21    5
  A (calibration)             51.04    1   51.04   11.9
  Subjects within groups      17.17    4    4.29
Within people                 69.75   18
  B (dial shape)              47.46    3   15.82   12.76
  AB                           7.46    3    2.49    2.01
  B x Subjects within groups  14.83   12    1.24
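
Explicitly, the error terms line up as follows (sketched from the table above):

    F_A  = \frac{MS_A}{MS_{Subj\,within\,groups}} = \frac{51.04}{4.29} = 11.9, \qquad
    F_B  = \frac{MS_B}{MS_{B \times Subj\,within\,groups}} = \frac{15.82}{1.24} = 12.76, \qquad
    F_{AB} = \frac{MS_{AB}}{MS_{B \times Subj\,within\,groups}} = \frac{2.49}{1.24} = 2.01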
30. SAS Post Hoc Tests

Run the same problem using SAS. The SAS default is to use whatever is residual as the denominator of the F test. You can use this to your advantage, or else override it to produce the specific F tests you want. If you use the default error term, be sure you know what it is.

Post hoc tests with repeated measures are tricky. You have to use the proper error term for each test, and the error term changes depending on what you are testing. Be sure to look up the right error term (see the sketch below).
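
A sketch of how the error term can be specified explicitly in PROC GLM's MEANS statement for the design above, using the d2 data set from the earlier program (the choice of Tukey tests here is illustrative):

    proc glm data=d2;
      class a b sub;
      model y = a b a*b sub(a) sub*b;
      means a / tukey e=sub(a);   /* compare A levels using subjects-within-groups as error */
      means b / tukey e=sub*b;    /* compare B levels using B x subjects-within-groups as error */
    run;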
31. Assumptions of RM

Orthogonal ANOVA assumes homogeneity of error variance within cells; the IVs are independent. With repeated measures, we introduce covariance (correlation) across cells. For example, the correlation of scores across subjects 1-3 for the first two calibrations is .89.

Repeated measures designs make assumptions about the homogeneity of covariance matrices across conditions for the F test to work properly. If the assumptions are not met, you have problems and may need to make adjustments (e.g., corrections to the degrees of freedom such as Greenhouse-Geisser or Huynh-Feldt). You can avoid these assumptions by using multivariate techniques (MANOVA) to analyze your data, and I suggest you do so. If you use ANOVA, you need to look up your design to get the right F tests and check on the assumptions.
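
One way to get those multivariate tests (along with the adjusted univariate tests) in SAS is sketched below, assuming the wide data set d1 from the earlier program; the data set d3 and the construction of the group variable are illustrative:

    data d3;                   /* add the between-subjects factor to the wide data */
      set d1;
      if _N_ le 3 then a = 1;
      else a = 2;
    run;

    proc glm data=d3;
      class a;
      model i1-i4 = a / nouni;
      repeated b 4 / printe;   /* multivariate tests plus adjusted univariate tests */
    run;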
32. Review
- How is a repeated measures design different from a totally between-subjects design in the collection of the data?
- How does the significance testing change from a totally between-subjects design to one in which one or more factors are repeated measures (just the general idea, you don't need to show actual F ratios or computations)?
- Describe one argument for using repeated measures designs and one argument against using such designs (or describe when you would and would not want to use repeated measures).