1
A quick introduction to the analysis of
questionnaire data John Richardson
2
Measurement scales
nominal: categorisation
ordinal: rank ordering (>, =, <)
interval: equal intervals (>, <, +, −)
ratio: absolute zero (>, <, +, −, ×, ÷)
3
Frequency distributions A frequency distribution
shows all the possible scores in a distribution
and how often each score was obtained. A bar
graph shows the frequency distribution of a set
of scores where the scores are arranged on the x
axis and their frequencies are shown on the y
axis. A histogram shows the frequency of a set
of scores measured on an interval or ratio scale.
The bars correspond to successive intervals on
the scale. (A histogram is a bar graph in which
the bars are touching each other.)
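As a minimal sketch (with a hypothetical set of Likert-style responses), a frequency distribution can be tabulated in a few lines of Python:

```python
from collections import Counter

# Hypothetical questionnaire scores (1-5 Likert-style responses).
scores = [3, 4, 2, 3, 5, 3, 4, 1, 3, 4]

# The frequency distribution: every score that occurs, and how often.
freq = Counter(scores)
for score in sorted(freq):
    # One row of '#' marks per score mimics the bars of a bar graph.
    print(score, "#" * freq[score])
```

The sorted print order plays the role of the x axis; the run of '#' marks plays the role of the y axis.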
4
Measures of central tendency The (arithmetic)
mean: the sum of the scores in a distribution
divided by the number of scores (ΣX / N). The
median: the point on the scale below which 50% of
the scores in a distribution fall. If the number
of scores is odd, the median is the middle score
when the scores are ranked. If the number of
scores is even, the median is the average of the
two middle scores when the scores are
ranked. The mode: the most frequent score in a
distribution.
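The three definitions translate directly into Python; the scores below are made up, with an even number of scores so that the even-N rule for the median applies:

```python
import statistics

# Hypothetical scores; N is even, so the median is the average
# of the two middle scores when the scores are ranked.
scores = [2, 3, 3, 4, 5, 5, 5, 6]

mean = sum(scores) / len(scores)      # sum of the scores divided by N
median = statistics.median(scores)    # point below which 50% of the scores fall
mode = statistics.mode(scores)        # the most frequent score
print(mean, median, mode)             # 4.125 4.5 5
```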
5
Measures of central tendency, ctd. The mean
assumes an interval or ratio scale. The median
assumes an ordinal, interval or ratio scale. The
mode assumes a nominal, ordinal, interval or
ratio scale.
6
Measures of variability The range is the
difference between the highest and lowest scores
in a distribution. A deviation score is the
difference between the original score and the
mean of the entire distribution. The variance of
a set of scores is the average squared deviation
score. The standard deviation of a set of
scores is the square root of the variance.
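Each of these measures follows mechanically from the definitions; a sketch with hypothetical scores chosen so the numbers come out evenly:

```python
# Hypothetical distribution of eight scores.
scores = [2, 4, 4, 4, 5, 5, 7, 9]

rng = max(scores) - min(scores)          # range: highest minus lowest score
mean = sum(scores) / len(scores)
deviations = [x - mean for x in scores]  # deviation scores
variance = sum(d * d for d in deviations) / len(scores)  # average squared deviation
sd = variance ** 0.5                     # square root of the variance
print(rng, variance, sd)                 # 7 4.0 2.0
```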
7
Correlation A linear relationship between two
variables is one that can be most accurately
represented by a straight line. A perfect
relationship is one in which all of the points
fall on the line. An imperfect relationship is
one where a relationship exists but all of the
points do not fall on the line. A positive
relationship exists when there is a direct
relationship between the two variables. A
negative relationship exists when there is an
inverse relationship between the two variables.
8
Correlation, ctd. A correlation coefficient
expresses quantitatively the magnitude and
direction of a relationship:
+1: a perfect positive relationship
between 0 and +1: an imperfect positive relationship
0: no relationship
between −1 and 0: an imperfect negative relationship
−1: a perfect negative relationship
9
Correlation, ctd. The linear correlation
coefficient Pearson r is a measure of the extent
to which pairs of scores occupy the same (or
opposite) positions within their respective
distributions. The square of Pearson r
quantifies the proportion of the total
variability in one of the variables that is
accounted for by the other variable. If r =
0.80, then r² = 0.64, so y explains 64% of the
variability in x.
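The definition can be transcribed from scratch in Python (the paired scores below are hypothetical, constructed to be perfectly linearly related):

```python
def pearson_r(x, y):
    # Pearson r: the sum of products of paired deviation scores,
    # divided by the product of the roots of the sums of squares.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical pairs with a perfect positive linear relationship.
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]
r = pearson_r(x, y)
print(round(r, 6), round(r ** 2, 6))   # 1.0 1.0
```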
10
Correlation, ctd. Pearson r assumes that the
data are measured on an interval or ratio scale.
For ordinal scales, use the Spearman rank-order
correlation coefficient rho (rₛ). For nominal
scales, use the phi (φ) coefficient. Finally,
note that correlation does not imply causation.
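For illustration, Spearman's rho can be computed by ranking each variable and applying the classic formula; this sketch assumes no tied scores, and the data are made up:

```python
def spearman_rho(x, y):
    # Rank each variable (1 = smallest), then apply
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    # Simplified sketch: assumes no tied scores.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical ordinal data with identical rank orderings.
print(spearman_rho([1, 5, 2, 4, 3], [10, 50, 20, 40, 30]))   # 1.0
```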
11
Reliability A research instrument is reliable if
it yields consistent results when used repeatedly
under the same conditions with the same
participants (that is, it is relatively
unaffected by errors of measurement). It can be
measured by various coefficients of reliability,
all of which vary between zero (reflecting total
unreliability) and one (reflecting perfect
reliability). (In practice, instruments of poor
reliability may actually yield estimates that are
less than zero.)
12
Reliability, ctd. Test-retest reliability is
obtained by calculating the correlation
coefficient between the scores obtained by the
same individuals on successive administrations of
the same instrument. If the interval is too
short, the participants will become familiar with
the instrument and may even recall the responses
that they gave at the first administration. If
the interval is too long, there may be genuine
changes in the personal qualities being measured.
In any case, longitudinal studies are hard to
carry out because of drop-out between the two
administrations.
13
Reliability, ctd. An alternative approach is to
estimate an instrument's reliability by examining
the consistency among the scores obtained on its
constituent parts at a single administration.
One such measure is split-half reliability: the
items are divided into two subsets, and a
correlation coefficient is calculated between the
scores obtained on the two halves.
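A sketch of split-half reliability with made-up item responses and an odd/even split (the choice of split is arbitrary):

```python
def pearson_r(x, y):
    # Pearson correlation, computed from scratch.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses: rows = participants, columns = six items.
data = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
]

# Divide the items into two subsets (here: odd- vs even-numbered items)
# and correlate the participants' scores on the two halves.
odd_totals = [sum(row[0::2]) for row in data]
even_totals = [sum(row[1::2]) for row in data]
split_half = pearson_r(odd_totals, even_totals)
print(round(split_half, 3))   # 1.0 with these made-up, highly consistent data
```

In practice the split-half coefficient is usually adjusted upwards (the Spearman-Brown correction) because each half contains only half the items.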
14
Reliability, ctd. The most common measure of
reliability is Cronbach's coefficient alpha. This
estimates the internal consistency of an
instrument by comparing the variance of the total
scores with the variances of the scores on the
individual items. (It is formally equivalent to
the average value of split-half reliability
across all the possible ways of dividing the
items into two distinct subsets.)
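The comparison of variances that alpha rests on can be sketched directly (the responses are made up; a real analysis would use a statistics package):

```python
def variance(v):
    # Average squared deviation from the mean.
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Hypothetical responses: rows = participants, columns = six items.
data = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
]

k = len(data[0])                                  # number of items
item_vars = [variance([row[i] for row in data]) for i in range(k)]
total_var = variance([sum(row) for row in data])  # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))   # 0.957 with these made-up data
```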
15
Factor analysis Factor analysis is a technique
for identifying a small number of underlying
dimensions from a large number of variables
measured on the same participants. Principal
component analysis assigns the variance
associated with the original variables to the
same number of independent dimensions or
components. It is based on the original
correlation matrix among the variables. However,
whereas the diagonal elements of this matrix have
a value of 1.00 by definition, the off-diagonal
elements are reduced by test-retest
unreliability.
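The point about the correlation matrix can be checked numerically (hypothetical random data; numpy is assumed to be available):

```python
import numpy as np

# Hypothetical data: 100 participants measured on 4 variables.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))

R = np.corrcoef(x, rowvar=False)      # correlation matrix among the variables
assert np.allclose(np.diag(R), 1.0)   # diagonal elements are 1.00 by definition

# Principal component analysis assigns the total variance (the trace of R,
# i.e. the number of variables) to the same number of components:
eigenvalues = np.linalg.eigvalsh(R)
assert np.isclose(eigenvalues.sum(), R.shape[0])
```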
16
Factor analysis, ctd. The various forms of
common factor analysis are only concerned with
the variance that is common to two or more of the
variables. They use an amended correlation
matrix in which the diagonal elements are
replaced by estimates of the communality of the
corresponding variables, and so they acknowledge
that the other elements in the matrix are reduced
by test-retest unreliability. The most commonly
used form of common factor analysis is called
principal axis factoring in SPSS.
17
Factor analysis, ctd. The next problem is to
determine the number of factors or components to
be extracted. The eigenvalues express the
proportion of variance accounted for by each
factor. One commonly used rule of thumb is to
extract those factors whose eigenvalues are
greater than one in a principal component
analysis. This is often inaccurate when tested
on artificially generated data. With large
numbers of variables, the eigenvalues-one rule
tends to overestimate the true number of
factors.
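The eigenvalues-one rule is easy to demonstrate on a small hypothetical correlation matrix (numpy assumed available):

```python
import numpy as np

# Hypothetical correlation matrix: two correlated pairs of variables.
R = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
])

# Eigenvalues in descending order; each expresses the variance
# accounted for by one principal component.
eigenvalues = np.linalg.eigvalsh(R)[::-1]   # approx. 2.0, 1.6, 0.2, 0.2

# Eigenvalues-one rule: extract the components whose eigenvalues exceed one.
n_factors = int((eigenvalues > 1.0).sum())
print(n_factors)   # 2
```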
18
Factor analysis, ctd. An alternative procedure
works by extracting factors up to the point where
the difference between the successive eigenvalues
reflects a relatively constant increment
attributable to random error. This rule is
known as the scree test, and it is more accurate
than the eigenvalues-one rule when used with
artificially generated sample data. In general,
at least two different criteria should be used to
justify extracting a particular number of
factors.
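A crude scree-style reading of a hypothetical eigenvalue sequence (an illustrative sketch, not the formal test):

```python
# Hypothetical eigenvalues in descending order: after the second one,
# the successive drops settle to a roughly constant "error" increment.
eigenvalues = [3.1, 1.4, 0.45, 0.40, 0.35, 0.30]

drops = [eigenvalues[i] - eigenvalues[i + 1] for i in range(len(eigenvalues) - 1)]

# Keep factors whose drop to the next eigenvalue is clearly larger
# than the tail-end increment attributed to random error.
tail = drops[-1]
n_factors = 0
for d in drops:
    if d > 2 * tail:
        n_factors += 1
    else:
        break
print(n_factors)   # 2
```

Following the slide's advice, such a reading would normally be checked against a second criterion (for example the eigenvalues-one rule) before settling on the number of factors.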
19
Factor analysis, ctd. The extracted factors are
then usually rotated to yield a more
interpretable solution. Rotation tries to
maximise the number of variables that show high
or low correlations (or loadings) with each
factor and to minimise the number of variables
with moderate loadings. Orthogonal rotation
results in factors that are independent of one
another. This may make them easier to interpret.
20
Factor analysis, ctd. Oblique rotation results
in factors that may be correlated with one
another. This may be more plausible if the
various dimensions result from overlapping sets
of mental processes. If a factor analysis
results in a number of oblique factors, then one
can calculate the participants' scores on those
factors and subject them to a further
(second-order) factor analysis.
21
A quick introduction to the analysis of
questionnaire data John Richardson