Measurement in Exercise and Sport Psychology Research
1
Measurement in Exercise and Sport Psychology
Research
  • EPHE 348

2
Measurement
  • We measure performance on a variable: an attribute that different people possess in different amounts
  • Measurement itself is not specifically defined, but it has levels and properties

3
Levels of Measurement (from least rigorous to most rigorous)
  • Nominal: numbers are used to classify (gender, eye color)
  • Ordinal: numbers have an order (ranking in finishing a race)
  • Interval: equal differences in numbers imply equal differences in the attribute (calendar time)
  • Ratio: interval with a zero point reflecting the absence of the attribute (height)
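
A minimal sketch, with made-up data, of how the four levels might be represented in pandas; the column names and values are hypothetical examples, not from the slides.

import pandas as pd

# Hypothetical data illustrating the four levels of measurement
df = pd.DataFrame({
    "eye_color": pd.Categorical(["blue", "brown", "green"]),   # nominal: labels only
    "race_rank": pd.Categorical([1, 2, 3], ordered=True),      # ordinal: ordered, gaps not necessarily equal
    "year": [1998, 2005, 2012],                                # interval: equal gaps, no true zero
    "height_cm": [175.0, 162.5, 180.2],                        # ratio: true zero, ratios are meaningful
})
print(df.dtypes)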

4
Basic Measurement Theory
  • An observed score on a measure is made up of two components:
  • 1) true score
  • 2) error
  • Error may have a systematic component and a random component
  • Examples of systematic error: social desirability, gender bias, cultural bias, etc.
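
A minimal simulation of this observed = true score + error decomposition; the bias and noise values below are assumptions chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
true_score = 50.0
systematic_error = 3.0                       # e.g., a constant social-desirability bias
random_error = rng.normal(0.0, 5.0, 1000)    # random component with mean 0
observed = true_score + systematic_error + random_error
print(round(observed.mean(), 1))   # about 53: random error averages out, systematic error does not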

5
Thinking more about Error...
  • In measurement theory we assume:
  • 1) Error is random across items
  • 2) Error is independent across items
  • 3) Error has a normal distribution with a mean of
    0 (cancels itself out)

6
Key Point in Scale Construction
  • Multiple items are generally needed in a measure to approximate the true score and reduce random error toward 0 (see the sketch below)
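
A sketch of that key point with simulated data (all numbers assumed): as the number of items k grows, the random error in each person's item average shrinks roughly as sigma / sqrt(k).

import numpy as np

rng = np.random.default_rng(1)
true_score, sigma, n_people = 50.0, 5.0, 10_000
for k in (1, 5, 20):                                   # number of items on the scale
    errors = rng.normal(0.0, sigma, (n_people, k))     # independent random error per item
    scale_mean = (true_score + errors).mean(axis=1)    # each person's item average
    print(k, round(scale_mean.std(), 2))               # spread around the true score shrinks as k grows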

7
Most Common Types of Reliability Assessment
  • 1) Internal consistency of a scale
  • Purpose: to identify the common variability across the scale items and interpret it as true-score variance
  • Examples:
  • split-half reliability
  • Cronbach's α (alpha)
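
A hand-rolled Cronbach's alpha as one internal-consistency sketch (rows are respondents, columns are items); the simulated data and noise level below are assumptions for illustration.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
true = rng.normal(0.0, 1.0, (200, 1))
items = true + rng.normal(0.0, 0.8, (200, 4))   # four items sharing one true score plus noise
print(round(cronbach_alpha(items), 2))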

8
Most Common Types of Reliability Assessment
  • 2) Comparison across time and across testers
  • Purpose: to identify consistency across time and across testers
  • Examples:
  • test-retest reliability
  • inter-rater reliability
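
One possible sketch of both ideas, using invented scores and assuming scikit-learn is available: test-retest reliability estimated as the correlation between two administrations, and inter-rater agreement via Cohen's kappa (one of several possible statistics).

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
time1 = rng.normal(50.0, 10.0, 100)
time2 = time1 + rng.normal(0.0, 4.0, 100)         # second administration with some noise
print(round(np.corrcoef(time1, time2)[0, 1], 2))  # test-retest reliability estimate

rater_a = [1, 0, 1, 1, 0, 1, 0, 0]                # two raters scoring the same behaviours
rater_b = [1, 0, 1, 0, 0, 1, 0, 1]
print(round(cohen_kappa_score(rater_a, rater_b), 2))  # chance-corrected inter-rater agreement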

9
Factors that Influence Reliability
  • 1) Heterogeneity of the construct
  • 2) Method of estimation
  • 3) Number of items
  • 4) Variability in the participants who answer the measure
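
As one illustration of factor 3 (number of items), the Spearman-Brown prophecy formula predicts how reliability changes when a scale is lengthened or shortened by a factor k; the starting reliability of 0.70 below is an assumed value.

def spearman_brown(reliability: float, k: float) -> float:
    """Predicted reliability after changing the number of items by a factor of k."""
    return (k * reliability) / (1 + (k - 1) * reliability)

print(round(spearman_brown(0.70, 2), 2))    # doubling the items: about 0.82
print(round(spearman_brown(0.70, 0.5), 2))  # halving the items: about 0.54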

10
Validity
  • Are we even measuring what we want to measure?
  • "If you do not know what your measurement means, you do not know anything" (Gulliksen, 1950)
  • Validity is a matter of degree, and we must gather multiple lines of evidence

11
The Holy Trinity of Validity
  • 1) Content validity: based on professional judgements about the relevance of the item content to the content of a particular domain of interest, and about the representativeness with which the items cover that domain
  • 2) Criterion validity: based on the degree of empirical relationships, usually correlations, between the scale and criteria

12
The Holy Trinity of Validity
  • 3) Construct validity: evaluated by investigating what qualities a scale measures
  • All evidence for and against a measure is actually construct validity evidence (Rogers, 1994)

13
Threats to Construct Validity
  • 1) Construct-irrelevant variance: including something that should not have been included
  • 2) Construct underrepresentation: leaving out something that should have been included according to the theory surrounding the construct of interest

14
Constructing A Measure
  • Phase 1: Item Construction
  • strive for representativeness and relevance to the domain of interest
  • items should be clear, short, and simple
  • items should not have double meanings (avoid conjunctions that create double-barrelled items)
  • avoid items that are endorsed by no one or by everyone

15
Constructing a Measure
  • items should be balanced between positive and negative wording
  • arrange items in random order
  • General problems:
  • Social desirability: the tendency to always answer favorably
  • Acquiescence: the tendency to agree

16
Constructing a Measure
  • Phase 2: Judges' Analysis
  • Items are sent to expert judges in the domain and evaluated for relevance and representativeness
  • The more judges, the better
  • Evaluation is rated objectively, with room for comments

17
Constructing a Measure
  • Phase 3: Protocol Analysis
  • Items are examined by a sample of participants in a think-aloud procedure and focus group
  • Helps identify any differences in meaning between the experts and the population sample

18
Constructing a Measure
  • Phase 4: Structure Analysis
  • The measure is administered to a sample together with conceptually very similar and very different measures
  • Can then examine:
  • 1) the structure of the measure
  • 2) divergence from the very different measures
  • 3) convergence with the very similar measures
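
A simulated sketch of points 2 and 3 (all data invented): the new scale should correlate strongly with the conceptually similar measure and near zero with the very different one.

import numpy as np

rng = np.random.default_rng(4)
new_scale = rng.normal(0.0, 1.0, 300)
similar = 0.8 * new_scale + rng.normal(0.0, 0.6, 300)   # conceptually similar measure
different = rng.normal(0.0, 1.0, 300)                   # conceptually very different measure
corr = np.corrcoef([new_scale, similar, different])
print(corr.round(2))   # expect a high r with 'similar' and a near-zero r with 'different'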

19
Exercise
  • In a group of five, develop a five-item measure
  • Consider:
  • Representativeness and relevance
  • Phrase simplicity and variance
  • Type of scaling