1
DESIGNING VALID COMMUNICATION RESEARCH
  • Issues of Validity & Reliability

2
Agenda
  • reliability vs. validity
  • measurement reliability
  • measurement validity
  • internal validity
  • external validity

3
Reliability vs. Validity
  • reliability = consistency & stability
  • validity = accuracy

4
Measurement reliability
  • whether the scale used is measuring something in
    a consistent and stable manner
  • e.g., a scale (weighing machine)

5
Measurement validity
  • whether we are measuring what we intend to measure

6
Word of caution!!!
  • a scale can be reliable, but not valid
  • if we have a valid scale, we can say that we also
    have a reliable scale
  • reliability is a prerequisite of validity

7
Techniques to determine measurement reliability
  • Multiple-Administration Techniques
  • - Test-retest
  • - Alternative procedure method
  • Single-Administration Techniques
  • - Split-half
  • - Cronbach's alpha (α)

8
1) Test-retest
  • administering the same survey to the same
    people/respondents at two different times
  • a measure of how reproducible a set of results is
  • problem with practice effect
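Test-retest reliability is typically quantified as the correlation between the two administrations. A minimal sketch in Python, using hypothetical scores from the same five respondents measured twice:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: same 5 respondents at time 1 and time 2.
time1 = [10, 12, 9, 15, 11]
time2 = [11, 13, 9, 14, 12]
print(round(pearson_r(time1, time2), 2))  # → 0.93
```

A high correlation (here ≈ 0.93) suggests the results are reproducible; a practice effect would show up as systematically shifted time-2 scores.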

9
2) Alternate form
  • using differently worded items to measure the
    same attributes
  • two strategies: manipulate the response set, or
    the question
  • useful when doing test-retest

10
When manipulating the response set !!!
  • change the order of the response options, or
    their wording

11
When manipulating the question !!!
  • equivalent item rewording
  • nonequivalent item rewording should never be
    attempted

12
Single-Administration Techniques
  • Internal Consistency
  • - Split-half
  • - Cronbach's alpha (α)

13
3) Split-half
  • separating people's answers on an instrument
    into two parts
  • e.g., a 10-item scale (items 1-5 vs. 6-10, or
    even vs. odd items)
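The split-half estimate correlates the two half-scores, then adjusts upward for the full test length with the standard Spearman-Brown correction. A sketch with hypothetical 10-item responses, split by odd vs. even item positions:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical 10-item responses (rows = respondents, columns = items).
answers = [
    [4, 5, 3, 4, 5, 4, 4, 3, 5, 4],
    [2, 1, 2, 3, 2, 2, 1, 2, 2, 3],
    [5, 4, 5, 5, 4, 5, 5, 4, 4, 5],
    [3, 3, 2, 3, 3, 2, 3, 3, 3, 2],
]

# Split into odd vs. even item positions; sum each half per respondent.
odd_half  = [sum(row[0::2]) for row in answers]
even_half = [sum(row[1::2]) for row in answers]

r_half = pearson_r(odd_half, even_half)
# Spearman-Brown correction estimates full-length-test reliability.
reliability = 2 * r_half / (1 + r_half)
print(round(reliability, 2))  # → 0.99
```

The correction is needed because each half has only 5 items, and shorter scales are less reliable than the full 10-item scale.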

14
  • 4) Cronbach's alpha (α)
  • α 0.80 or higher (good)
  • α 0.50-0.79 (still okay)
  • α below 0.50 (big NO, NO)
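Cronbach's alpha compares the sum of the individual item variances to the variance of the total scores: α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ). A minimal sketch over a hypothetical 4-item scale:

```python
from statistics import pvariance

def cronbach_alpha(answers):
    """answers: rows = respondents, columns = item scores."""
    k = len(answers[0])                          # number of items
    item_vars = sum(pvariance(col) for col in zip(*answers))
    total_var = pvariance([sum(row) for row in answers])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-item Likert scale, 5 respondents.
answers = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(answers), 2))  # → 0.95
```

Items that vary together (internal consistency) drive the total-score variance up relative to the item variances, pushing α toward 1.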

15
Boosting Reliability
  • Drop problematic items
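One way to find a problematic item is to recompute α with the item removed; if α jumps, the item was hurting consistency. A sketch with hypothetical data in which item 3 behaves inconsistently with the rest:

```python
from statistics import pvariance

def cronbach_alpha(answers):
    """Cronbach's alpha; rows = respondents, columns = item scores."""
    k = len(answers[0])
    item_vars = sum(pvariance(col) for col in zip(*answers))
    total_var = pvariance([sum(row) for row in answers])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-item data where item 3 runs against the other items.
answers = [
    [4, 5, 1, 4],
    [2, 2, 5, 2],
    [5, 4, 2, 5],
    [3, 3, 4, 2],
    [4, 4, 1, 4],
]

full = cronbach_alpha(answers)
# Drop the third column and recompute.
dropped = cronbach_alpha([row[:2] + row[3:] for row in answers])
# A very low (even negative) alpha flags the inconsistent item;
# alpha recovers once it is removed.
print(round(full, 2), round(dropped, 2))
```

In practice this "alpha if item deleted" check is run for every item, and the worst offenders are dropped or reworded.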

16
5) Interrater/intercoder/interobserver
Reliability
  • applies only when you rate or code something, or
    when you observe behaviors
  • provides a measure of how well two or more
    evaluators agree in their assessments of a
    variable
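One common agreement statistic (not named on the slide, but standard for intercoder work) is Cohen's kappa, which discounts the agreement two coders would reach by chance. A sketch with two hypothetical coders categorizing ten messages:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two coders, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each coder's category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical coders classifying 10 messages by tone.
coder1 = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos", "neg", "pos"]
coder2 = ["pos", "neg", "pos", "neu", "neg", "neg", "neu", "pos", "neg", "pos"]
print(round(cohens_kappa(coder1, coder2), 2))  # → 0.84
```

Raw percent agreement here is 90%, but kappa is lower (≈ 0.84) because some of that agreement would occur by chance alone.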

17
Techniques to determine measurement validity
  • 1) content validity
  • face validity and panel approach
  • 2) criterion validity (2 types)
  • concurrent validity
  • predictive validity
  • 3) construct validity

18
1) Face validity
  • cursory review of items by untrained judges
  • e.g., your siblings, your boyfriend/girlfriend,
    or tennis partner, etc.

19
2) Panel Approach
  • subjective measure of how appropriate the items
    are to a set of reviewers who have some knowledge
    of the subject matter

20
3) Criterion validity
  • established when a scale is shown to relate to
    another scale already known to be valid
  • measures how well one's instrument stacks up
    against another instrument, or predictor

21
a) Concurrent validity
  • established when a scale in question is judged
    against some other method that is acknowledged as
    a gold standard for assessing the same variable

22
b) Predictive validity
  • the ability of a scale/instrument to forecast
    future events, behaviors, attitudes, or outcomes

23
4) Construct validity
  • most valuable, but difficult to obtain
  • determined only after years of experience with
    the scale

24
Internal Validity
  • concerned with the accuracy of the conclusions
    drawn from a particular research study

25
  • whether your research study is designed and
    conducted such that it leads to accurate findings
    about the phenomena under investigation for the
    particular group of people or texts studied
  • i.e., whether the study adequately tests your
    hypothesis

26
  • threats come primarily from the following
    factors
  • 1) problems with the study
  • 2) problems with the research participants
  • 3) problems with the researcher

27
1) Problems with study
  • a) measurement reliability & measurement validity
  • b) poor manipulation of treatment (in experiments
    only)
  • c) environmental factors -- i) history, ii)
    testing sensitization, iii) confounding
    variables

28
i) History
  • occurs when changes in the environment external
    to a study influence people's behavior within the
    study
  • i.e., events that occur while the study is in
    progress influence people's behavior within it

29
ii) Testing sensitization
  • occurs when an initial measurement in a research
    study influences the subsequent measurement
  • pretest affects posttest

30
iii) Confounding variables
  • accidental manipulation of other variables
  • the influence of variables not controlled by the
    researcher

31
2) Problems with participants
  • a) Hawthorne effect
  • b) self selection
  • c) maturation
  • d) attrition/mortality
  • e) interparticipant bias

32
a) Hawthorne effect
  • occurs when the research participants know what
    you are studying, so they give you answers that
    will support your hypothesis

33
b) self selection bias
  • occurs when comparisons are made between groups
    of people that have been formed on the basis of
    self-selection

34
c) maturation
  • internal changes that occur within people over
    the course of a study that explain their
    behaviors
  • include any physiological, psychological,
    emotional changes

35
d) attrition/mortality
  • people dropping out from the study

36
e) interparticipant bias
  • participants influence one another

37
3) Problems with the researcher
  • a) personal attribute effect
  • b) unintentional expectancy effect
  • c) observational bias -- observer drift &
    observer bias

38
a) personal attribute effect
  • occurs when the characteristics of the researcher
    influence the behaviors of the participants

39
b) unintentional expectancy effect
  • occurs when the researcher accidentally informs
    the participants of what he or she expects

40
c) observational bias
  • occurs when the observers commit inaccuracies in
    their observations
  • drift -- being inconsistent
  • bias -- knowledge of study influences their
    observations
  • halo effect -- faulty judgments about the
    research participant

41
External Validity
  • concerned with the generalizability of the
    findings from a research study
  • threats come primarily from the sample used in
    the study
  • key word: sample

42
Random sampling methods
  • 1) simple random sample
  • 2) systematic sample
  • 3) stratified sample
  • 4) cluster sample
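The four random sampling methods can be sketched with Python's `random` module, using a hypothetical sampling frame of 100 people (the strata and cluster groupings below are invented for illustration):

```python
import random

random.seed(42)  # reproducible illustration
population = list(range(1, 101))  # hypothetical frame of 100 people

# 1) Simple random sample: every member has an equal chance.
simple = random.sample(population, 10)

# 2) Systematic sample: random start, then every k-th member.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# 3) Stratified sample: draw proportionally within each stratum
#    (hypothetical 40/60 split into two strata).
strata = {"group_a": population[:40], "group_b": population[40:]}
stratified = [p for group in strata.values()
              for p in random.sample(group, len(group) // 10)]

# 4) Cluster sample: randomly pick whole clusters (e.g., classrooms)
#    and take everyone in them.
clusters = [population[i:i + 20] for i in range(0, 100, 20)]
cluster = [p for c in random.sample(clusters, 2) for p in c]

print(len(simple), len(systematic), len(stratified), len(cluster))
# → 10 10 10 40
```

Note the trade-off visible in the output: cluster sampling is cheap to field but yields everyone in the chosen clusters (40 people here) rather than a spread across the whole frame.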

43
Non-random sampling methods
  • 5) convenience
  • 6) volunteer
  • 7) purposive
  • 8) quota
  • 9) network

44
For more information
  • http://trochim.human.cornell.edu/tutorial/TUTORIAL.HTM
  • http://trochim.human.cornell.edu/kb/measval.htm

45
Have a good day!