1
What Is Measurement?

3
  • Reliability

Internal Consistency Reliability
Test-Retest Reliability
4
  • Reliability: Internal Consistency

Internal consistency procedures assess whether a set of items fits together, or belongs together. Measures typically are assessed for internal consistency reliability, and items are deleted or modified on that basis.
5
  • Reliability: Internal Consistency
  • Intercorrelation
  • Intercorrelations among items and correlations between each item and the total score (Exhibit 1.2)
  • Mean
  • Item-to-total correlation
  • Item-to-total statistics report the correlation between an item and the total score
  • Cronbach's α
  • A goal in internal consistency procedures is to maximize coefficient alpha, or the proportion of variance attributable to common sources (see the sketch after this list)
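
As a minimal sketch of these computations (the function names and the small data matrix are illustrative, not taken from the exhibit), coefficient alpha and corrected item-to-total correlations can be computed as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items."""
    totals = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Illustrative data: 6 respondents x 4 items on a 5-point scale
scores = np.array([[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5],
                   [3, 3, 3, 4], [1, 2, 1, 2], [4, 4, 5, 4]])
print(cronbach_alpha(scores))           # alpha for the 4-item scale
print(item_total_correlations(scores))  # one correlation per item
```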

6
(No Transcript)
7
(No Transcript)
8
(No Transcript)
9
  • Reliability: Internal Consistency
  • How does coefficient alpha measure reliability (where reliability is the minimization of random error)?
  • Coefficient alpha is the proportion of variance attributable to common sources
  • These common sources are presumably the construct in question
  • Anything not attributable to common sources is assumed to be random error
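
In formula form (standard psychometric notation, assumed here rather than taken from the slides): for k items with item variances σ_i² and total-score variance σ_T²,

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_T^2}\right)
\]

Since σ_T² equals the sum of the item variances plus all the item covariances, α is large exactly when the covariances (the common sources) dominate; whatever is not shared across items is treated as random error.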

10
  • Reliability: Internal Consistency

As the number of items in a scale increases, coefficient alpha increases (Exhibit 1.2 reports alphas for 5-, 10-, and 20-item versions); the formula below shows why. At the same time, items with only moderate correlations with the total may be retained if they capture some unique aspect of the domain not captured by other items. This issue highlights the importance of examining items both conceptually and empirically, in conjunction.
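The effect of scale length is visible in the standardized form of alpha (a standard result, stated here as background): with k items and average inter-item correlation \bar{r},

\[
\alpha_{\text{std}} = \frac{k\,\bar{r}}{1 + (k-1)\,\bar{r}}
\]

Holding \bar{r} fixed, this expression increases toward 1 as k grows, which is why the 20-item version shows a higher alpha than the 5-item version even when the items intercorrelate to the same degree.
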
11
  • Reliability: Test-Retest Reliability

The same scale is administered twice, with an interval of a few weeks between administrations. The logic of test-retest reliability is simply that individuals who score higher (or lower) in one administration should also score relatively higher (or lower) in the second.
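
A minimal sketch (the two-wave data are illustrative, not from the text): test-retest reliability is simply the correlation between the two administrations.

```python
import numpy as np

# Illustrative total scores for the same six respondents,
# collected a few weeks apart
time1 = np.array([21, 11, 24, 16, 7, 20])
time2 = np.array([20, 12, 23, 17, 9, 21])

# Test-retest reliability: Pearson correlation across administrations
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(round(r_test_retest, 3))
```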
13
  • Dimensionality

Exploratory Factor Analysis (EFA)
Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM)
14
  • Dimensionality

Reliability through internal consistency assesses whether a set of items taps into a common core, as measured by the degree to which the items covary with one another. But whether that common core consists of a specific set of dimensions is the realm of dimensionality, assessed through factor analysis.
15
  • Factor analysis is an approach in which variables are reduced to combinations of variables, or factors
  • A correlation matrix for all items
  • If there are two distinct factors (Fig. 1.12)
  • Variance for each item can be divided into error variance, specific variance, and common variance (see the decomposition after this list)
  • PCA uses all the variance available
  • FA uses common variance, and the communality of each item is estimated
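
In standard factor-analytic notation (assumed here), the variance of item i decomposes as

\[
\sigma_{x_i}^2 = h_i^2 + s_i^2 + e_i^2
\]

where h_i² (the communality) is common variance shared with the factors, s_i² is specific variance, and e_i² is error variance. PCA analyzes the full σ_{x_i}², whereas FA models only the common portion, which is why communalities must be estimated.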

16
  • Factor analysis
  • The results of factor analysis include
  • the number of factors extracted
  • the variance explained by each factor
  • correlations, or loadings, between individual items and factors
  • Unrotated factor matrices are rotated to improve interpretation of the loadings of items on factors
  • orthogonal or oblique (Fig 1.10)
  • Varimax rotation (see the EFA sketch after this list)
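
A minimal EFA sketch using scikit-learn (one convenient tool among several; the synthetic data and variable names are illustrative). It extracts two factors and applies a varimax rotation:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic data: 200 respondents x 6 items, constructed so that
# items 0-2 load on one factor and items 3-5 on the other
rng = np.random.default_rng(0)
factors = rng.normal(size=(200, 2))
loadings = np.array([[.8, 0], [.7, 0], [.6, 0],
                     [0, .8], [0, .7], [0, .6]])
items = factors @ loadings.T + rng.normal(scale=0.5, size=(200, 6))

# Extract two factors; varimax is an orthogonal rotation that
# simplifies the loading pattern for easier interpretation
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))  # rotated loadings: items x factors
```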

20
  • Confirmatory Factor Analysis

Exploratory factor analysis provides initial evidence of dimensionality, but confirmatory factor analysis is required for conclusive evidence.
23
  • Confirmatory Factor Analysis
  • External consistency is assessed by comparing the observed correlation between two items of different dimensions or constructs with the predicted correlation that arises out of the hypothesized relationships between items and measures (see the formula after this list)
  • External consistency is assessed through
  • overall fit indexes
  • residuals between items
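
Under a standard CFA parameterization (notation assumed here, not from the slides), the predicted correlation between item i loading on factor 1 and item j loading on factor 2 is

\[
\hat{\rho}_{ij} = \lambda_{i1}\,\phi_{12}\,\lambda_{j2}
\]

where the λ's are standardized loadings and φ₁₂ is the factor correlation. The residual for the pair is the observed correlation minus this predicted value; overall fit indexes aggregate such discrepancies across the full correlation matrix.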

24
  • Confirmatory Factor Analysis
  • In a multidimensional measure
  • Each item
  • Two items
  • items that measure more than one substantive
    factor

25
  • Structural Equation Modeling (SEM)
  • combines econometric and psychometric approaches by simultaneously assessing structural and measurement models (see the sketch after this list)
  • structural model
  • measurement model (Fig 1-15)
  • accounting for measurement error
  • At the measurement level
  • At the theoretical level (Fig 1-16)
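
A minimal sketch of a combined measurement and structural model using the semopy package (one of several SEM tools; the lavaan-style model description and the file name survey_items.csv are illustrative assumptions):

```python
import pandas as pd
from semopy import Model

# Measurement model: two latent constructs with three indicators each;
# structural model: eta2 regressed on eta1
DESC = """
eta1 =~ x1 + x2 + x3
eta2 =~ y1 + y2 + y3
eta2 ~ eta1
"""

data = pd.read_csv("survey_items.csv")  # hypothetical item-score file
model = Model(DESC)
model.fit(data)
print(model.inspect())  # loadings, structural path, error variances
```

Because the indicators' error variances are estimated explicitly, the eta2-on-eta1 path is corrected for measurement error, which is the point of assessing both models simultaneously.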

28
  • Structural Equation Modeling (SEM)
  • Issues in SEM
  • identification
  • procedures used to estimate the model
  • statistical tests and overall fit indexes
  • sample size
  • In comparing SEM versus the traditional approach
    to measurement
  • Model could be modified
  • plausible reasoning should precede modeling
  • hierarchical factor analyses can also be
    performed using SEM (Fig 1-17)

30
  • Validity
  • Whether a measure captures the intended
    construct, or whether the core tapped by a
    measure is the intended core, is the purview of
    validity.
  • The distinction between reliability and validity can be illustrated with two analogies:
  • an exercise of throwing stones: throws that cluster tightly are reliable; throws that land on the intended target are valid
  • a reliable friend who is late, but consistently late by 15 minutes: perfectly consistent (reliable), yet never on time (not valid)

31
  • Validity
  • Several types of validity
  • Content validity
  • Face validity
  • Known-groups validity
  • Predictive validity
  • Convergent validity
  • Discriminant validity
  • Nomological validity
  • Construct validity

34
  • First Phase: Literature Review and Exploratory Interviews

The aim of this phase is to develop comprehensive grounding before conceptualizing the construct and developing the measure.
35
  • Second Phase: Conceptualization and Domain Delineation
  • This step involves explicating
  • what the construct is and what it is not
  • where the proposed dimensionality of the
    construct is described explicitly
  • A matrix of basic reading, writing, and
    mathematical skills versus consumer tasks should
    be created to provide a complete delineation.

36
  • Third Phase: Measure Construction and Content Validity Assessment
  • The measure should be constructed through item generation.
  • a large pool of items
  • redundancy
  • Several such procedures should be employed, such
    as asking experts to
  • content validity of a measure

37
  • Fourth Phase: Reliability and Validity Assessment
  • Empirical studies should be conducted to assess the reliability, dimensionality, and validity of the consumer literacy measure
  • convenience sampling
  • pilot testing
  • reliability
  • internal consistency reliability
  • test-retest reliability
  • dimensionality
  • EFA
  • CFA
  • Item-response theory can be used to assess measurement accuracy at different levels of ability (see the model after this list)
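
A common item-response-theory specification is the two-parameter logistic model (stated here as background, not taken from the slides): the probability that a person with ability θ answers item i correctly is

\[
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
\]

where b_i is the item's difficulty and a_i its discrimination; measurement accuracy at different ability levels can then be read off the item and test information functions.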

38
  • Fourth Phase: Reliability and Validity Assessment
  • Validity
  • take measure development into the realm of cross-construct relationships
  • content validity
  • convergent validity
  • known-groups validity
  • predictive validity
  • nomological validity
  • discriminant validity
  • construct validity

39
  • The multitrait-multimethod approach
    systematically examines the effect of using
    identical versus different methods

40
  • General Issues in Measurement

The strength and relevance of constructs may vary across people. Some measures do not purport to assess an underlying abstract construct. A few general points are noteworthy with regard to psychometrics.
41
  • Summary
  • Accurate measurement is central to scientific
    research.
  • There are two basic types of measurement error
    in all of scientific research
  • random error
  • systematic error
  • The measure development process
  • Items are typically modified after empirical testing
  • Measure development should ideally combine
    empirical assessment with conceptual examination.
  • Low reliability and low validity have
    consequences.
  • A metaphor that can be used to understand
    measurement is to consider the universe and the
    location of specific planets or stars
  • Paths: measures
  • Planets: constructs

42
END