Reliability, Power and Difference Scores

Transcript and Presenter's Notes
1
Reliability, Power and Difference Scores
  • Kopriva & Shaw, 1991
  • Peter, Churchill, & Brown, 1993

2
  • All other things being constant, as the
    reliability of a measure decreases, so too does
    the statistical power.
  • Recall that power is the probability of finding
    a significant effect if an effect exists.

3
Test Theory
  • Every observed score is composed of two elements
  • Observed score = True ability + Random error
  • or
  • x = T + e

4
Test Theory (cont.)
  • The variability in observed scores represents the
    additive combination of variability in ability
    (true score) and error.
  • Var(x) = Var(T) + Var(e)
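A quick way to convince yourself of this decomposition is to simulate it. Below is a minimal Python sketch (the variances, 9 and 4, are arbitrary choices of mine):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    T = rng.normal(50, 3, size=n)  # true ability, Var(T) = 9
    e = rng.normal(0, 2, size=n)   # random error, independent of T, Var(e) = 4
    x = T + e                      # observed score: x = T + e

    print(round(x.var(), 2))            # ~13.0
    print(round(T.var() + e.var(), 2))  # ~13.0, i.e. Var(x) = Var(T) + Var(e)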

5
Reliability
  • This representation of observed scores allows us
    to determine the reliability of a measure.
  • Reliability reflects the consistency of a measure
    on two or more occasions.
  • X1, X2 (the same measure at times 1 and 2)

6
Reliability (cont.)
  • Measures of the same construct at time 1 and time
    2 should yield similar results. The true scores
    should be related, but the errors should not be.
  • X1 = T + e1
  • X2 = T + e2

7
Reliability (cont.)
  • Reliability can be determined by calculating the
    ratio of true-score variance to observed-score
    variance
  • r_xx = Var(T) / Var(x)

8
Reliability (cont.)
  • Unfortunately, we cannot compute the variance of
    the true scores directly.
  • We can estimate the ratio in a variety of ways,
    sketched below
  • Test-retest
  • Internal consistency (Cronbach's α)
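A minimal sketch of both estimators in Python (function names are mine; items is assumed to be a respondents-by-items score matrix):

    import numpy as np

    def test_retest(x1, x2):
        # Reliability estimate: Pearson correlation of the same
        # measure administered at time 1 and time 2.
        return np.corrcoef(x1, x2)[0, 1]

    def cronbach_alpha(items):
        # items: array of shape (n_respondents, k_items).
        # Standard formula: k/(k-1) * (1 - sum of item variances
        # over the variance of the total score).
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)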

9
Kopriva & Shaw
  • Kopriva and Shaw approach the true-variance
    problem using ANOVA.
  • They estimate power as a function of the
    reliability of the measure, alpha, effect size,
    and sample size.
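Their estimates come from an ANOVA framework; the sketch below is not their method, only an illustration of the mechanism using a two-group normal approximation. Unreliability inflates the observed standard deviation, attenuating the standardized effect by a factor of sqrt(reliability):

    from scipy.stats import norm

    def approx_power(true_d, reliability, n_per_group, alpha=0.05):
        # Attenuation: observed d = true d * sqrt(r_xx).
        d_obs = true_d * reliability ** 0.5
        z_crit = norm.ppf(1 - alpha / 2)
        # Normal approximation to two-sample test power.
        return 1 - norm.cdf(z_crit - d_obs * (n_per_group / 2) ** 0.5)

    # A true medium effect (d = .5) with 50 per group:
    for r_xx in (0.2, 0.5, 0.7, 1.0):
        print(r_xx, round(approx_power(0.5, r_xx, 50), 2))
    # roughly .20, .42, .55, .71 as reliability rises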

10
Table 1. Power estimates for 2 groups
11
Table 2. Power estimates involving means µ1, µ2, µ3 [table not transcribed]
12
(No Transcript)
13
Conclusions
  • Reliability has its greatest effect on power when
  • Reliabilities vary from .2 to .7
  • Sample sizes are under 100
  • What are the implications for human factors?

14
Threats to Reliability
  • Difference scores threaten the reliability of
    dependent measures.
  • Difference scores take a variety of forms
  • Post score − Pre score
  • Individual 1 − Individual 2
  • Construct 1 − Construct 2

15
Reliability of the Difference
  • The reliability of the difference D = X1 − X2
    depends on the reliabilities of the two measures,
    their intercorrelation, and their variances
  • r_D = (σ1²·r11 + σ2²·r22 − 2·r12·σ1·σ2) /
    (σ1² + σ2² − 2·r12·σ1·σ2)
  • where r11, r22 = reliabilities of measures 1 and 2;
    r12 = correlation of measures 1 and 2; σ1², σ2² =
    variances of measures 1 and 2; σ1, σ2 = their
    standard deviations
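The same formula as a Python sketch (variable names are mine):

    def diff_score_reliability(r11, r22, r12, sd1, sd2):
        # Reliability of the difference D = X1 - X2,
        # per the formula above.
        num = r11 * sd1**2 + r22 * sd2**2 - 2 * r12 * sd1 * sd2
        den = sd1**2 + sd2**2 - 2 * r12 * sd1 * sd2
        return num / den

    # The next slide's examples, assuming equal standard deviations:
    print(diff_score_reliability(0.80, 0.80, 0.70, 1, 1))  # Ex 1: ~.33
    print(diff_score_reliability(0.85, 0.85, 0.50, 1, 1))  # Ex 2: .70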
16
Ex 1. Pre/post experiment: mean reliability r = .80, r12 = .70
Ex 2. Gap analysis: mean reliability r = .85, r12 = .50
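When the two standard deviations are equal, the formula on the previous slide reduces to r_D = (mean r − r12) / (1 − r12). Under that equal-variance assumption (the slide does not give the variances), Ex 1 yields (.80 − .70) / (1 − .70) ≈ .33 and Ex 2 yields (.85 − .50) / (1 − .50) = .70: the high intercorrelation of the components in Ex 1 is what destroys the reliability of the difference.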
17
Other Related Problems
  • Illusions of discriminant validity (dv)
  • dv is inferred when two measures of different
    constructs do not correlate highly.
  • Convergent validity is inferred when two
    different measures of the same construct
    correlate highly.

18
Discriminant Validity
  • A measure cannot correlate with another measure
    more highly than its own reliability coefficient.
  • If the reliability of a difference score is .2,
    the difference score cannot correlate with any
    other measure above .2.

19
Discriminant Validity (cont.)
  • A low correlation of .2 might lead a researcher
    to infer that discriminant validity exists when
    in fact the low correlation is caused by low
    reliability alone.

20
Other Problems (cont.)
  • Spurious correlations
  • Because a difference score is determined by its
    components, any correlation it shows with other
    measures can be an artifact of those components
    (see the simulation below).
  • The difference score provides no more information
    about a relationship than its original components
    do.
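A small simulation (my own illustration, not from the slides) shows the mechanism: a difference score built from two independent components still correlates about ±.71 with each of them, purely by construction.

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.normal(size=10_000)  # component 1
    b = rng.normal(size=10_000)  # component 2, independent of a
    d = a - b                    # difference score

    print(np.corrcoef(d, a)[0, 1])  # ~ +0.71, by construction
    print(np.corrcoef(d, b)[0, 1])  # ~ -0.71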

21
Other Problems (cont.)
  • Variance Restriction
  • Occurs when ratings on one measure are
    consistently higher than on the other.
  • When components are close to one another, there
    will be little variability in difference scores.
  • When components differ, there will be greater
    variability in difference scores.

22
Solutions
  • Avoid difference scores by
  • Using direct comparisons.
  • Reframing research questions.