Transcript and Presenter's Notes

Title: Experimental Studies


1
  • Chapter 6
  • Experimental Studies

2
Chapter 6 Outline
6.1 Introduction
6.2 Historical perspective
6.3 General concepts
6.4 Data analysis
3
Epi Experiments (Trials)
  • Trials - from the French trier (to try)
  • Clinical trial: tests therapeutic interventions
    applied to individuals (e.g., chemotherapy trial)
  • Field trial: tests preventive interventions
    applied to individuals (e.g., vaccine trial)
  • Community trial: tests interventions applied at
    the aggregate level (e.g., fluoridation of public
    water trial)

4
Illustrative Example 6.1: WHI Clinical Trial
  • 40 US clinical centers
  • Recruitment: 1993-1998
  • Exposure: randomized, double-blinded estrogen +
    progestin vs. identical-looking placebo
  • Average follow-up: 5.2 years
  • Primary outcome: Coronary Heart Disease

5
Survival curves: WHI estrogen trial (figure)
6
Illustrative Example 6.2: Vitamin A Community
Trial
  • 450 Sumatran villages with high childhood
    mortality rates
  • Exposure: Vitamin A supplementation program vs.
    no intervention
  • Random allocation of intervention: 229 treatment
    villages, 221 control villages

7
Historical perspective
  • Read in text
  • Biblical reference
  • Van Helmont's proposal (1662)
  • James Lind's scurvy experiment (1753)
  • Modern trials
  • Polio trial (1954)
  • MRFIT (1982)
  • WHI (2002)

8
Natural Experiments
  • Natural conditions that mimic an experiment
  • Example: French surgeon Paré (1510-1590) ran out
    of boiling oil to treat wounds → forced to use an
    innocuous lotion for treatment → noticed vastly
    improved results

Not a true experiment because the intervention
was not allocated by study protocol
9
Selected Concepts: Experimental Design
  1. The control group (and the placebo effect)
  2. Randomization & comparability
  3. Follow-up and outcome ascertainment
  4. Intention-to-treat vs. per-protocol analysis

10
Treatment Group & Control Group
  • The effects of an exposure can only be judged in
    comparison to what would happen in its absence

Treatment Group: Exposed to the intervention
Control Group: Not exposed to the intervention
11
Illustration: MRFIT
  • Multiple Risk Factor Intervention Trial (1982)
  • 12,855 high-risk men, 35 to 57 years old
  • Randomly assigned to a multi-factor intervention
    ("special intervention") group or a "usual care"
    group
  • Study endpoints: Coronary Heart Disease (CHD)
    mortality and overall mortality
  • Results described here:
    http://www.ncbi.nlm.nih.gov/pubmed/7050440
  • No significant difference in endpoint rates
  • Also, lower than expected rates in both groups
  • Had no control group been used, the
    intervention might unjustifiably have been
    declared a success

12
Polio Field Trial (1954)
  • Polio rates (per 100,000):
  • Placebo group: 69
  • Refusers: 46
  • Vaccinated group: 28
  • Had Refusers been used as the control group →
    effects of the intervention would have been
    underestimated
  • Am J Public Health, 1957, 47: 283-7

Dr. Jonas Salk, 1953
13
The placebo effect
  • Improvements attributed to an inert intervention

Despite popular belief, placebos have no real
effect. False impressions of placebo effects can
be explained by spontaneous improvement,
fluctuation of symptoms, regression to the mean,
additional treatment, conditional switching of
placebo treatment, scaling bias, irrelevant
response variables, answers of politeness,
experimental subordination, conditioned answers,
neurotic or psychotic misjudgment, psychosomatic
phenomena, misquotation, etc. (Kienle & Kiene,
1997)
14
The Hawthorne Effect
Improvements in behavior because subjects know
they are being observed → effects unrelated to
the intervention. Initially observed in industrial
psychology experiments in the 1930s. A comparable
"attention bias" effect is seen in trials.
15
Randomization and Comparability
  • Randomization works by balancing potential
    confounding factors in the treatment and control
    groups
  • → like-to-like comparisons
  • → differences observed at completion of the trial
    are due to the treatment or to chance
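
A minimal sketch (not from the slides) of what simple 1:1 randomization looks like in practice, using a hypothetical subject roster:

```python
import random

def randomize(subject_ids, seed=None):
    """Simple 1:1 randomization: shuffle the roster, then split it in half."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (treatment, control)

# Hypothetical roster of 10 subject IDs
treatment, control = randomize(range(1, 11), seed=42)
print("Treatment:", treatment)
print("Control:  ", control)
```

With large enough samples, measured and unmeasured confounders tend to distribute evenly across the two arms, which is what makes the like-to-like comparison possible.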

16
Checking Group Comparability: WHI Trial
17
Follow-up & Outcome Ascertainment
  • Follow-up → screening for study outcomes and
    confirming the outcomes as true (adjudication)
  • Study outcomes based on case definitions (uniform
    and valid criteria for case ascertainment)
  • The importance of blinding
  • Single blinding
  • Double blinding
  • Triple blinding

18
Intention-to-treat vs. per-protocol analysis
  • Intention-to-treat (ITT): analyze as
    randomized (regardless of compliance)
  • Per protocol (PP): analyze only those who
    completed the protocol
  • Effectiveness: real-world effect
    (including non-compliance)
  • Efficacy: effect under ideal conditions (e.g.,
    complete compliance)
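
A small sketch contrasting the two analyses, using hypothetical (made-up) records of assignment, compliance, and outcome:

```python
# Hypothetical trial records: (assigned_arm, complied, event)
subjects = [
    ("treatment", True,  0), ("treatment", True,  1), ("treatment", False, 1),
    ("treatment", True,  0), ("control",   True,  1), ("control",   True,  1),
    ("control",   False, 0), ("control",   True,  0),
]

def risk(records):
    """Incidence proportion: events / subjects."""
    return sum(event for _, _, event in records) / len(records)

def arm(records, name, per_protocol=False):
    """Select one arm; drop non-compliers only in the per-protocol analysis."""
    return [r for r in records if r[0] == name and (r[1] or not per_protocol)]

# ITT: analyze as randomized (estimates real-world effectiveness)
itt_rr = risk(arm(subjects, "treatment")) / risk(arm(subjects, "control"))
# PP: compliers only (estimates efficacy under ideal conditions)
pp_rr = risk(arm(subjects, "treatment", True)) / risk(arm(subjects, "control", True))
print(f"ITT relative risk: {itt_rr:.2f}   Per-protocol relative risk: {pp_rr:.2f}")
```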

19
Human Subjects & Ethics (now covered in Ch. 5)
  • The Belmont Report
  • Respect for individuals
  • Beneficence
  • Justice
  • IRB oversight
  • Data Safety Monitoring Board (DSMB)
  • Informed consent
  • Equipoise

20
Equipoise
  • Equipoise: balanced doubt
  • Cannot knowingly expose a participant to harm
  • Cannot withhold known benefit to study subjects
  • What's left? (Answer: equipoise)

Is equipoise the overriding principle of trial
ethics?
21
Advocacy vs. Scientific Ethics
  • Advocacy, partisan, corporate, advertising, and
    political ethics: Plan with the end result in
    mind.
  • Scientific ethics: A bending over backwards to
    prove oneself wrong.
  • "I cannot give any scientist of any age any
    better advice than this: The intensity of the
    conviction that a hypothesis is true has no
    bearing on whether it is true or not."

Sir Peter Medawar
22
Simple Analysis: Relative Effect
  • Data: WHI trial
  • E: HRT vs. placebo
  • D: CHD (yes or no)
  • Average follow-up: 5.2 years

How to say it: HRT increased the risk of CHD by
28% in relative terms.
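
The relative effect is the ratio of the risk in the HRT arm to the risk in the placebo arm. A sketch with assumed case counts and group sizes (chosen only to be consistent with the quoted 28% figure; they are not given on this slide):

```python
# Assumed illustrative counts (not from the slide): CHD cases / group size
cases_hrt, n_hrt = 164, 8506
cases_placebo, n_placebo = 122, 8102

risk_hrt = cases_hrt / n_hrt              # ~0.0193 over 5.2 years
risk_placebo = cases_placebo / n_placebo  # ~0.0151 over 5.2 years

relative_risk = risk_hrt / risk_placebo
print(f"Relative risk = {relative_risk:.2f}")  # ~1.28 -> 28% relative increase
```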
23
Simple Analysis: Absolute Effect
  • Data: WHI trial
  • E: HRT vs. placebo
  • D: CHD (yes or no)
  • Average follow-up: 5.2 years

How to say it: In absolute terms, there were an
additional 4.22 CHD cases for every thousand
women using HRT over 5.2 years.
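
The absolute effect is the risk difference, often quoted per 1000. Continuing the sketch with the same assumed counts as above:

```python
risk_hrt = 164 / 8506        # assumed counts, as in the relative-effect sketch
risk_placebo = 122 / 8102

risk_difference = risk_hrt - risk_placebo
print(f"Risk difference = {risk_difference * 1000:.2f} per 1000 over 5.2 years")  # ~4.22
```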
24
Simple Analysis: Efficacy (same as the RRD but
without the minus sign)
450 Sumatran villages randomly assigned to either
a vitamin A supplementation program or no
intervention
How to say it: Vitamin A supplementation was 34%
effective in preventing childhood mortality.
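
Efficacy here is the relative risk reduction, 1 - RR. A sketch with assumed mortality risks (made up to be consistent with the 34% figure; the actual village-level counts are not on this slide):

```python
# Assumed illustrative mortality risks (not from the slide)
risk_control = 7.26 / 1000    # childhood mortality in control villages
risk_vitamin_a = 4.79 / 1000  # childhood mortality in supplemented villages

relative_risk = risk_vitamin_a / risk_control
efficacy = 1 - relative_risk           # the RRD without the minus sign
print(f"Efficacy = {efficacy:.0%}")    # ~34%
```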
This provides a suitable taking-off point for the
discussion of Rothman, K. J., Adami, H. O., &
Trichopoulos, D. (1998). Should the mission of
epidemiology include the eradication of poverty?
Lancet, 352(9130), 810-813.
25
Simple Analysis: Absolute Effect
450 Sumatran villages randomly assigned to either
vitamin A supplementation or control
How to say it: The effect was to reduce mortality
by 2.47 deaths per 1000 children over the period
of observation.
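
The absolute version is again a risk difference. Reusing the same assumed risks as in the efficacy sketch above:

```python
risk_control = 7.26 / 1000    # assumed, as above
risk_vitamin_a = 4.79 / 1000

risk_difference = risk_control - risk_vitamin_a
print(f"Mortality reduced by {risk_difference * 1000:.2f} deaths per 1000 children")  # ~2.47
```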
26
OpenEpi.com for data analysis
  • Counts menu: for incidence proportions,
    prevalences, and case-control data
  • Person Time menu: for rate data
  • Descriptive and inferential statistics
    (confidence intervals and P-values)
  • Can be used as a learning tool

27
6.1 Bicycle helmet campaign
You want to test whether a public awareness
campaign about bicycle safety at elementary
schools will increase bicycle helmet use among
school-aged children. To test this intervention,
you identify 12 elementary schools, half of which
will be randomly assigned to participate in a
school-wide bicycle helmet awareness program. The
other 6 schools will serve as controls and will
receive no special intervention. Research
assistants will determine the percentage of
bicyclists wearing helmets at standard locations
in neighborhoods of each of the schools before
and after the intervention.
(A) What is the unit of intervention in this
study? (The unit of intervention refers to the
level at which the intervention is randomized.
This may differ from the unit of observation,
which is the unit upon which the outcome is
measured.)
(B) What is the unit of observation in this study?
(C) Even though the intervention was randomized in
this study, there were only 6 treatment schools
and 6 control schools. Therefore, there is a good
chance that treatment and control schools will
differ with respect to important characteristics
such as socioeconomic status. Can you think of a
way to control for socioeconomic status through a
randomization or study design approach?
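
One possible design-based answer (a sketch, not part of the exercise text): pair-match the 12 schools on a socioeconomic indicator and randomize within pairs, so each pair contributes one intervention and one control school. The income figures below are hypothetical:

```python
import random

# Hypothetical schools with a socioeconomic indicator (median household income, $1000s)
schools = {"A": 38, "B": 92, "C": 55, "D": 61, "E": 47, "F": 83,
           "G": 71, "H": 44, "I": 66, "J": 58, "K": 90, "L": 50}

rng = random.Random(1)
ranked = sorted(schools, key=schools.get)                    # order schools by SES
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]  # 6 SES-matched pairs

assignment = {}
for pair in pairs:            # coin flip within each matched pair
    rng.shuffle(pair)
    assignment[pair[0]] = "intervention"
    assignment[pair[1]] = "control"

for school in sorted(assignment):
    print(school, assignment[school])
```

Because each SES-matched pair splits one-and-one between arms, the two groups cannot drift far apart on socioeconomic status even with only 12 units to randomize.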