Title: Basic experimental design, control, and context


1
Psychology 242, Dr. McKirnan
1/17/09
  • Basic experimental design, control, and context
  • Overall research strategy
  • Validity
  • Internal Validity
  • External Validity
  • Ecological Validity
  • Researcher biases
  • Participant biases
  • Social / cultural context bias

2
Research flow: What is the core research question?
  • What is being studied? Why?
  • What is the contrast space?
  • What is compared to what?
  • What needs explaining / what is given
  • What is known about the core hypothetical
    constructs?
  • How do you propose they relate to each other?
  • How will the study expand or clarify theory?
  • Use existing theory to explain a new phenomenon
    (Divergent use of theory)?
  • Test contrasting theories of one phenomenon
    (Convergent use of theory)?
  • New / expanded theory?

3
Research flow
  • What key variables best represent the constructs?
  • What is your prediction about how they are
    related?
  • Measurement design?
  • Quasi-experiment?
  • True experiment?
  • How have the variables been operationally
    defined?
  • Alternative operationalizations?
  • Implications of this operationalization?
  • Is the predictor best measured or manipulated?
  • Virtues / limitations of each approach?
  • Sampling?

4
Basic Designs: Methods, cont.
  • Who are your participants?
  • What is your sampling method: probability or
    non-probability sample?
  • Where do you recruit participants? -- what is
    your sampling frame?
  • Is your study externally valid -- does your sample
    represent the population?
  • How will your control group be formed?
  • Can you practically / ethically have one?
  • Are you using existing groups?
  • Can participants self-select into a group?
  • Is random assignment or matching feasible?
  • How is the independent variable presented?
  • Simple presence v. absence?
  • Different doses?

5
Overview: Basic Designs
  • Pre-experimental designs (no control group)
  • Post-Test Only Design
  • Pre-/Post-Test Design
  • True (or quasi-) experimental designs (with a
    control group)
  • After-only control group design
  • Pre-/Post- Group Comparisons
  • Multiple group comparison
6
Pre-experimental designs
Post-Test Only Design: Group → Treatment → Measure
Only one group, typically an existing group; no
selection or assignment occurs.
The experimental intervention (Treatment) may or
may not be controlled by the researcher. Used
for naturally occurring or system-wide events
(e.g., group trauma, government policy change,
etc.).
Measurement may or may not be controlled by the
researcher.
Pre-/Post-Test Design: Group → Measure 1 → Treatment → Measure 2
  • Only one group
  • only group available?
  • naturally occurring intervention?

Measurements are given to all participants at
baseline and at follow-up.
All participants get the same treatment, which
may or may not be controlled by the researcher.
7
Pre-experimental Designs (2)
Advantages of Post-only and Pre-/Post- designs
  • Study a naturally occurring intervention,
  • e.g., test scores before and after some school
    change,
  • crime rates after a policy change, etc.
  • Having both Pre- and Post- measures allows us to
    examine change.
  • Disadvantage: no control group, so many threats to
    internal validity:
  • Maturation: Participants may be older / wiser by
    the post-test
  • History: Cultural or historical events may occur
    between pre- and post-test that change the
    participants
  • Mortality: Participants may non-randomly drop out
    of the study
  • Regression to baseline: Participants who are more
    extreme at baseline look less extreme over time,
    as a statistical confound (simulated in the
    sketch below)
  • Reactive measurement: Participants may change
    their scores due to being measured twice, not the
    experimental manipulation.
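
A minimal simulation sketch of regression to baseline (illustrative Python, not from the slides; the scores and cut-off are invented): with an unreliable measure and no treatment at all, the most extreme scorers at baseline still drift back toward the overall mean at follow-up.

```python
import random

random.seed(1)

# Each person has a stable "true score" plus independent measurement noise
# at baseline and at follow-up; there is no treatment effect at all.
true_scores = [random.gauss(50, 10) for _ in range(1000)]
baseline  = [t + random.gauss(0, 10) for t in true_scores]
follow_up = [t + random.gauss(0, 10) for t in true_scores]

# Select the most extreme 10% of participants at baseline.
cutoff = sorted(baseline)[int(0.9 * len(baseline))]
extreme = [i for i, b in enumerate(baseline) if b >= cutoff]

def mean(xs):
    return sum(xs) / len(xs)

print("Extreme group at baseline:", round(mean([baseline[i] for i in extreme]), 1))
print("Same group at follow-up:  ", round(mean([follow_up[i] for i in extreme]), 1))
print("Whole-sample mean:        ", round(mean(baseline), 1))
# The extreme group's follow-up mean moves back toward the overall mean even
# though nothing happened between the two measurements.
```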

8
Experiments
  • After-only control group design

Group 1: Treatment → Measure
Group 2 (Control): Measure
  • Adds a control group. Either:
  • Observed groups:
  • Naturally occurring (e.g., Class 1 v. Class 2), or
  • self-selected (sought therapy v. did not).
  • Assigned groups: Randomly assign participants to
    experimental v. control group, or match
    participants to create equivalent groups.

Measure the Dependent Variable(s) only at
follow-up. Use experimental or standard measures
(e.g., grades, census data, crime reports).
  • Advantage: A control group lessens confounds /
    threats to internal validity.
  • Random assignment decreases threats to internal
    validity.
  • Disadvantage: Existing or self-selected groups
    may have confounds.
  • No baseline or pre- measure is available:
  • how to assess change?
  • ceiling (or floor) effects?
  • cannot assess equivalence of groups at baseline.

9
Basic Designs: True experiments (2)
  • Pre-/Post- Group Comparisons (most common study
    design)

Group 1: Measure 1 → Treatment → Measure 2
Group 2 (Control): Measure 1 → Measure 2
Two groups: Observed (quasi-experiment)
or Assigned (true experiment).
Only one group receives the experimental intervention.
  • Post-test follow-up of dependent variable(s)
  • Simple outcome
  • Change from baseline.

Baseline (pre-test) measure of study variables
and possible confounds.
Advantages: The pre-measure assesses the baseline level
of the Dependent Variable -- allows the researcher to
assess change -- can detect ceiling (or floor)
effects -- can be used to assign participants to
groups via matching -- can assess baseline
equivalence of groups (sketched below).
Disadvantage: Highly susceptible to confounds if
using observed or self-selected groups.
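
A concrete sketch of why the baseline measure matters (illustrative Python; the scores and group labels are invented): the treatment and control groups are compared on change from baseline rather than on the post-test alone.

```python
# Hypothetical pre-/post- scores for a treatment and a control group.
treatment = {"pre": [12, 15, 11, 14, 13, 16], "post": [18, 20, 17, 19, 18, 22]}
control   = {"pre": [13, 14, 12, 15, 13, 14], "post": [14, 15, 13, 16, 13, 15]}

def mean(xs):
    return sum(xs) / len(xs)

def mean_change(group):
    # Average change from baseline across participants.
    return mean([post - pre for pre, post in zip(group["pre"], group["post"])])

print("Baseline means:", round(mean(treatment["pre"]), 1), "v.", round(mean(control["pre"]), 1))
print("Change, treatment:", round(mean_change(treatment), 2))
print("Change, control:  ", round(mean_change(control), 2))
# Using change scores lets the researcher check baseline equivalence and
# separate real change from pre-existing differences between the groups.
```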
10
More Complex Experimental Designs
  • Multiple group comparison

Treatment 1 → Measure 2
Treatment 2 → Measure 2
Control → Measure 2
  • 3 (or more) groups
  • typically formed by Random assignment.
  • 2 experimental groups, e.g.
  • low v. high dose,
  • exp. situation 1 v. 2, etc.,
  • plus the control group.
  • Compare:
  • Level 1 of the independent variable with Level 2
  • Either / both experimental groups with the control
    group.
  • Advantage: Test dose or context effects (see the
    sketch below).
  • Drug doses, amounts of psychotherapy, levels of
    anxiety, etc. An increasing dose effect can be
    tested against no dose.
  • Diverse conditions to test secondary hypotheses or
    confounds, e.g., therapy delivered by a same-sex v.
    opposite-sex therapist.
  • Disadvantages:
  • More costly and complex.
  • Potential ethical problem with a no-dose (or
    very high-dose) condition.
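
A minimal sketch of how a multiple-group (dose-response) design is summarized: each dose level is compared against the no-dose control. The groups, scores, and simple mean differences below are invented for illustration; a real analysis would add a significance test.

```python
# Hypothetical outcome scores for a randomized three-group dose study.
groups = {
    "control":   [10, 12, 11, 13, 12, 11],
    "low dose":  [13, 15, 14, 16, 14, 15],
    "high dose": [17, 18, 16, 19, 18, 20],
}

def mean(xs):
    return sum(xs) / len(xs)

control_mean = mean(groups["control"])
for name, scores in groups.items():
    diff = mean(scores) - control_mean
    print(f"{name:9s}  mean = {mean(scores):5.2f}   difference from control = {diff:+.2f}")
# Two dose levels plus a no-dose control allow testing both whether the
# treatment works at all and whether its effect increases with dose.
```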

11
Overview of true experimental designs
  • Representative of the larger population?
    (Selection; size of sample) → External validity:
    random selection of the sample
  • Groups equal at baseline? (Existing groups or
    self-selection v. random assignment) → Internal
    validity: random assignment
  • Equality of procedures? (Information; expectancies;
    quality of blinding) → Internal validity: lack of
    confounds
  • Faithfulness of treatment? (Operational definition;
    correct dose? manipulation check) → External
    validity: correct independent variable?
  • Groups really different at outcome? (Statistical
    significance) → Internal validity: likelihood of
    chance results
12
Basics of Design: Internal Validity
Internal Validity: Can we validly determine
what is causing the results of the experiment?
  • General Research Hypothesis: the experimental
    outcome (values of the Dependent Variable) is
    caused only by the experiment itself (the
    Independent Variable).
  • Confound: a 3rd variable (an unmeasured variable
    other than the Independent Variable) actually led
    to the results.
  • Core Design Issue: Eliminate confounds in:
  • Assigning participants to experimental v. control
    groups
  • Procedures in each group.

13
Key threats to internal validity
  • Lack of control group
  • Non-equivalent groups

Maturation: Participants may be older / wiser by
the post-test.
History: Cultural or historical events may occur
between pre- and post-test that change the
participants.
Mortality: Participants may non-randomly drop out
of the study.
Regression to baseline: Participants who are more
extreme at baseline look less extreme over time,
as a statistical confound.
Reactive measurement: Participants may change their
scores due to being measured twice, not the
experimental manipulation.
Group differences in any of these represent a
core confound.
14
Internal validity, 2
Ensuring Internal Validity: 1. Group Assignment
  • Self-selection: people may join or drop out of
    groups for reasons other than the independent
    variable.
  • Self-selection in: rare, but a substantial confound
    if present.
  • Self-selection out: a common confound in behavioral
    studies, e.g., Project EXPLORE and differential
    drop-out of risky MSM from the experimental group.
  • Existing groups may differ on variables besides the
    independent variable.
  • Naturally occurring convenience samples (e.g.,
    9am class v. 11am class, NYC v. Chicago) may
    differ in subtle psychological variables that are
    difficult to assess.
  • Naturally occurring groups that express the
    phenomenon (those who seek therapy v. not, more /
    less extreme scores at baseline) may differ in
    crucial variables, some of which may be
    measurable.
  • Cures:
  • Random assignment to experimental v. control
    groups.
  • Matching participants on key confounding measures
    (e.g., education, age) and systematically
    assigning them to groups (see the sketch below).
  • Assessment of potential confounds (demographics,
    psychological variables).
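
The matching cure can be made concrete with a short sketch (illustrative Python; participant IDs and ages are invented): participants are sorted on a key confound, and each adjacent pair is then split at random between the experimental and control groups.

```python
import random

random.seed(42)

# Hypothetical participants with one potential confound (age).
participants = [("P%02d" % i, random.randint(18, 65)) for i in range(20)]

# Sort on the matching variable, then split each adjacent pair at random.
participants.sort(key=lambda p: p[1])
experimental, control = [], []
for pair in zip(participants[0::2], participants[1::2]):
    first, second = random.sample(pair, 2)
    experimental.append(first)
    control.append(second)

def mean_age(group):
    return sum(age for _, age in group) / len(group)

print("Mean age, experimental:", round(mean_age(experimental), 1))
print("Mean age, control:     ", round(mean_age(control), 1))
# Matching plus random assignment within pairs keeps the groups roughly
# equivalent on the matched variable while avoiding systematic assignment bias.
```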

15
Internal validity procedures
Ensuring Internal Validity: 2. Procedures
  • Equality of procedures across experimental and
    control groups: all conditions must be held
    constant except the IV.
  • Participants blind
  • Equalize (control) expectations and motivations
    across groups
  • Control drop-out / loss to follow-up
  • Experimenter blind
  • Control explicit bias
  • Control self-fulfilling expectations
  • Standardization / automation of the experimental
    process (see the sketch below)
  • All procedures must be independent of the
    participant's group assignment
  • Equality of procedures across pre-test and
    post-test
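
One common way to keep the hands-on experimenter blind and the procedure standardized, sketched below with invented labels (not a description of any specific study): the randomization list maps each participant to an opaque condition code, and the key linking codes to conditions is held by someone else until data collection is finished.

```python
import random

random.seed(7)

participants = [f"P{i:02d}" for i in range(12)]

# Opaque codes hide the condition from whoever runs the sessions.
code_key = {"A": "treatment", "B": "control"}   # held by a third party

# Block randomization: equal numbers of A and B, in random order.
codes = ["A", "B"] * (len(participants) // 2)
random.shuffle(codes)

randomization_list = dict(zip(participants, codes))
for pid, code in randomization_list.items():
    print(pid, "->", code)   # the experimenter sees only A / B
# Only after data collection is code_key used to unblind the groups, so
# experimenter expectations cannot differ across conditions during sessions.
```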

16
Summary: Internal validity
  • Internal Validity overview
  • Are results due to something other than the
    Independent Variable?
  • Confounds within the experiment:
  • Procedural differences across groups
  • Biased assignment to groups.
  • Confounds from outside the experiment:
  • History, maturation, cultural change, etc.
  • within a single-group study
  • differences across groups in a multi-group study

17
Generalizability: general research results
External Validity: Can we generalize from this
study to the larger world? How well can we
generalize to the larger population and to
other settings?
18
External validity: The larger population
  • How well does your research sample represent the
    larger population you want to generalize to?
  • Volunteerism bias: people who volunteer for
    research may be unlike the general population.
  • attitudes, motivations
  • responses to financial incentives
  • Convenience sampling of existing groups
  • College class, specific shopping mall, bar or
    other venue
  • Bias by self-selection.
  • Cure: Random selection maximizes external validity
    by best representing the population.

We will spend several lectures on sampling later
on.
19
Random selection v. assignment
  • Key distinction (contrasted in the sketch below):
  • Random selection: from a larger population to the
    research sample.
  • Random assignment: from the sample to
    experimental v. control groups.
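
A small sketch of the distinction (illustrative Python; the population and sample sizes are invented): selection draws the research sample from the population, and assignment then splits that sample into conditions.

```python
import random

random.seed(0)

# Random SELECTION: draw the research sample from the larger population.
population = [f"person_{i}" for i in range(10_000)]
sample = random.sample(population, 100)           # supports external validity

# Random ASSIGNMENT: split the sample into experimental v. control groups.
random.shuffle(sample)
experimental, control = sample[:50], sample[50:]  # supports internal validity

print(len(sample), "selected;", len(experimental), "experimental,", len(control), "control")
```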

20
External validity: social / cultural context
How representative (or realistic) is the social /
cultural setting of the research?
  • Context: is the research setting similar to
    real-life settings, or are the results specific to
    this laboratory, this questionnaire, etc.?
  • Procedures: are results an artifact of a
    particular procedure, experimenter, or place or
    setting?

Cures:
  • Replication of the study by different researchers,
    in different setting(s), with different samples.
  • Converging studies that test the same hypotheses
    with substantially different methods:
  • Field v. lab studies
  • Experimental v. non- (or quasi-) experimental
    methods.
  • Qualitative v. quantitative approaches
21
External validity: the conditions or model
How representative is the Independent Variable
(experimental manipulation)?
  • Modeling the phenomenon:
  • does the experimental condition or manipulation
    create the state you want it to?
  • e.g., a stress, mood, information, or motivation
    manipulation
  • Dose of the IV, e.g.:
  • drug dose
  • psychotherapy intensity

Cures:
  • Manipulation check (see the sketch below)
  • Dose-response studies
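
A manipulation check simply asks whether the condition produced the intended state before the main outcome is examined. The sketch below uses invented anxiety ratings and hypothetical condition labels.

```python
# Hypothetical self-reported anxiety ratings (1-10) collected right after
# the manipulation, before the main dependent variable is measured.
anxiety = {
    "high-anxiety instructions": [7, 8, 6, 9, 7, 8],
    "neutral instructions":      [3, 4, 2, 4, 3, 3],
}

def mean(xs):
    return sum(xs) / len(xs)

for condition, ratings in anxiety.items():
    print(f"{condition:27s} mean anxiety = {mean(ratings):.1f}")
# If the condition means do not differ, the manipulation did not create the
# intended state, and the main results cannot be attributed to anxiety.
```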

22
External validity: the outcome
Representativeness of the Dependent Variable
  • Operationalization:
  • does the assessment of the DV reflect how the
    process works outside of the lab?
  • Construct validity:
  • Are you modeling the hypothetical construct you
    intended?
  • How well have you captured a specific
    psychological process?

Cures:
  • Standardized measures or assessments, e.g., of
    depression or stress
  • Psychometric studies (see the sketch below)
  • Reliability: does the measure consistently yield
    similar scores?
  • Validity: does an instrument measure what it is
    intended to?
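
Reliability can be checked with something as simple as a test-retest correlation, sketched below on invented scores for a hypothetical depression measure given twice.

```python
import math

# Hypothetical scores from the same measure given two weeks apart.
time1 = [10, 14, 9, 20, 16, 12, 18, 11]
time2 = [11, 13, 10, 19, 17, 12, 17, 12]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print("Test-retest reliability r =", round(pearson_r(time1, time2), 2))
# A high test-retest correlation shows the measure yields consistent scores;
# validity (measuring the intended construct) still has to be established
# separately, e.g., against standardized criterion measures.
```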

23
External validity: summary
  • The research sample: Is the sample typical of the
    larger population?
  • The research setting: Is this typical of real-world
    settings where the phenomenon occurs?
  • The Dependent Variable: Is the outcome measure
    representative, valid, and reliable?
  • The study structure / context: Does the experimental
    manipulation (or measured predictor) actually create
    (validly assess) the phenomenon you are interested
    in?
24
Generalizability: example
  • Core design elements (external validity areas)
  • Sample: UIC students
  • Setting: Classroom situation

25
Generalizability: population and context
How well do the results from UIC students tested in
class generalize to:
  • Larger population(s): other young people, other
    Americans, people in general?
  • Across contexts: other University settings, other
    structured settings, other social situations?
26
Generalizability: Independent and dependent
variables
External validity: Do the results from I.Q.
instructions and an abstract memory task generalize:
  • Across forms of anxiety (the IV): natural anxiety,
    other forms of stress, other instructions?
  • Across outcomes (the DV): other cognitive tasks,
    less structured learning tasks, job or other
    performance?
27
Generalizability of student experiment
  • How well do these data generalize to:
  • Sample (UIC students) → the larger population?
  • Setting (classroom) → other social or learning
    settings?
  • Dependent variable (abstract memory task) → other
    cognitive skills or tasks?
  • The study structure / context → other anxiety
    conditions?
28
Generalizability: general research results
Each element of external validity helps determine
how meaningful research results are.
29
Ecological Validity
Ecological Validity specifically addresses the
context of the research: the key elements of the
research process:
  • The researcher
  • The research participant
  • The physical, social, and cultural setting the
    research takes place within.

30
Research ecology: Researcher and Participant
  • The researcher: expectations and biases,
    motivations, personal characteristics, cultural
    background
  • The participant: expectations and biases,
    motivations, personal characteristics, cultural
    background
  • Their similarity or conflict: personal / cultural
    expectations, social roles, motivations

31
Context effects
  • Time and place (field v. lab, medical v. academic)
  • Familiarity or comfort
  • Expectations (e.g., medical setting, formal v.
    informal)
  • Information available
  • Reactive measurement

32
Context effects
  • Complex interaction of Researcher by Participant
    by Context
  • May create very specific conditions under which
    data are collected
  • Can limit External Validity

33
Ecological Validity: Researcher Effects
Personal attributes of the researcher
  • Biosocial: age, race, gender, status...
  • inherent social conflicts?
  • Representative context for all participants?
  • Psychosocial: attitudes, warmth, skills...
  • degree of cooperation
  • participants' understanding of tasks
  • Situational: e.g., physician or teacher as
    researcher
  • prior relationship or 'dual role' situation

34

Ecological Validity, 2: Researcher Effects
  • Researcher's biases or expectations
  • Knowledge of the hypothesis or experimental condition
  • Response to participants' attributes
  • Self-fulfilling expectations (verbal or
    non-verbal).
  • Rosenthal experiment: 'smart' v. 'dumb' rats and
    maze learning.
  • Education research: powerful effects of teacher
    expectations on student performance.
  • Biased procedures or handling of participants.
  • Clinical research: differential handling of
    cases.
  • Mental health research: more extreme diagnosis and
    treatment recommendations for minority / lower-SES
    patients.
  • Biased data recording, quantitative and
    qualitative:
  • Non-random errors in data coding or entry
  • Confirmatory biases in recall.

35
Researcher effects
Cures:
  • Randomize experimenters across conditions
  • match or stratify experimenters by participants
  • 'unknown' experimenters
  • Blinding of experimenter(s)
  • double blind
  • Not informing the 'hands on' researcher (when
    blinding is impossible)
  • Aggressive standardization and automation

36
Ecological validity: participant effects
  • Participant expectations
  • Motivation to be a 'good' (or bad) subject
  • Social desirability responding
  • Primarily for personal information
  • Cultural and personal differences in what is
    considered personal
  • Face-to-face v. computer assessment
  • Changes in response over time (Doll et al., risk
    disclosure in brief v. full interviews).
  • Infer hypothesis or enrollment criteria
    (correctly or incorrectly)
  • HIV vaccine research:
  • Risky men lied to get into low risk vaccine
    cohorts, then showed HIV infections. Did the
    vaccine itself cause infections?
  • Reactive risk behavior concentrated in men who
    believed they received the vaccine.

37
Participant effects, 2
  • Participants' personal characteristics
  • Biosocial: similar to the experimenter issues
  • age, race, gender, status
  • Possible conflicts
  • Values and life experience: College students in the
    laboratory
  • knowledge and sophistication
  • Participants' ability to understand the research
    protocol (also an ethical issue).
  • Variables such as psychological mindedness in
    behavioral intervention research.

38
Participant effects, 3
Cures:
  • Blinding participants
  • Constancy of procedures
  • automation or structured protocol
  • training researchers
  • Deception or concealment of hypothesis
  • Diverse sampling of participants
  • Computer assessment

39
Ecological Validity: Context and people
Demand characteristics of the research setting
  • Social context powerfully affects individual
    behavior
  • Zimbardo prison experiment, Rosenthal
    psychiatric settings
  • Medical context and health measures, e.g., "white
    coat" effect
  • Self-awareness → norm following
  • Context and informational availability
  • Minimalist social psychology experiments and
    social judgments.
  • Survey / interview measures and uni-dimensional
    responding
  • Political / economic demands and simple bias or
    fraud
  • Drug Co. research and cherry-picking positive
    results
  • Political pressure for No Child Left Behind and
    the 'Houston Miracle'

40
Ecological validity: setting effects, 2
Reactive measurement
  • Simple learning of research measures
  • Simple sophistication in self-reporting,
    test-taking skills.
  • Awareness of the research context and response biases
  • Social desirability response set.
  • Book example: responses to an erotic stimulus.
  • due to the stimulus itself?
  • self-generated imagery via experimental demand?
  • simulated response?
  • Attributing the origins of responses
  • Research measures or procedures can create
    attitude change
  • E.g., survey questions normalizing cheating:
When do you feel it is O.K. to cheat on an exam?
  ...when I really do not know the material
  ...when others are doing it
  ...when I think the exam is unfair
41
Setting effects: Cures
  • Cures
  • Clear description of research context to aid
    interpretation
  • Replication of research in other settings / labs
    / researchers
  • Converging studies that test the same hypothesis
    with different methods / contexts, sources of
    participants, measures.

42
Design validity overview
  • Overall research questions
  • Internal validity (confounds)
  • Group assignments
  • Procedures
  • External validity
  • Sample → population
  • Context: Research lab → real settings and
    contexts
  • Conditions: Independent variable → real
    conditions
  • Outcomes: Dependent variable → an adequate model
    of the phenomenon?
  • Ecological validity (context and conditions)
  • Researcher effects
  • Participant effects
  • Setting effects
  • Operational Definitions, hypothetical constructs,
    confounds, etc.