Title: Randomised Controlled Trials (RCTs)
1. Randomised Controlled Trials (RCTs)
2. James Lind
- Born Edinburgh 1716
- On HMS Salisbury in 1747 he allocated 12 men with scurvy to:
- Cider
- Seawater
- Horseradish, mustard, garlic
- Nutmeg
- Elixir of vitriol
- Oranges and limes
3. Think about
- Consider how you would go about evaluating the following interventions:
- Surgical versus medical termination of pregnancy
- Referral guidelines for radiographic examination
- Paracetamol and/or ibuprofen for treating children with fever
- Nurse counsellors as an alternative to clinical geneticists for genetic counselling
- Single dose of chemotherapy versus radiotherapy for treating testicular cancer (http://news.bbc.co.uk/1/hi/health/7647007.stm)
- Cervical cancer vaccine (http://news.bbc.co.uk/1/hi/health/6223000.stm)
4. The need to evaluate health care
- Variations in health care
- Unproven treatments
- Inadequacies in care
- Inaccurate medical models
- Limitation of resources
- New innovations
- Crombie (1996)
5. Evaluation process
- Define research question
- What is already known?
- Identify appropriate study design
- Define population, intervention and criteria for evaluation
- How large a study?
- Consider measurement of evaluation criteria (outcomes)
- How often?
- Timing? Length of follow up?
- To whom? Who collects the data? What format?
- Analysis of data
- Dissemination and implementation
6. Define research question and what is already known
- Research question (PICOT)
- Population
- Intervention
- Control/comparator
- Outcome
- Target
- Has the question already been answered?
- Conduct a review to assess what is known about the intervention
7. Definition of population, intervention and outcomes
- Population
- Strict definition (explanatory) or flexible (pragmatic)
- Intervention
- Dose of drug, timing etc
- Outcomes
- Health related Quality of Life
- Biochemical outcomes
- Symptoms
- Physical assessment
- Patient satisfaction
- Acceptability
- Cost-effectiveness
8. Measuring outcome
- Questionnaires, interview, medical notes etc
- Timing of questionnaires?
- Baseline (prior to treatment)
- Short term outcomes
- Long term outcomes
- Who collects the data?
9. Sources of systematic errors
- Selection bias
- can be introduced by the way in which comparison groups are assembled
- Attrition bias
- systematic differences in withdrawal/follow-up
- Performance bias
- systematic differences in care provided
- Observation/detection bias
- systematic differences in observation, measurement, assessment
10. What is a randomised controlled trial?
- Simple Definition
- A study in which people are allocated at random to receive one of several interventions
- (a simple but powerful research tool)
11. Simple RCT model
[Diagram: participants are RANDOMLY allocated to either the EXPERIMENT group or the CONTROL group]
12. What is a randomised controlled trial?
- Random allocation to intervention groups
- all participants have an equal chance of being allocated to each intervention group
- why RCTs are referred to as randomised controlled trials
13. Terminology
- Interventions are comparative regimes within a trial
- Prophylactic, diagnostic, therapeutic, e.g.
- preventative strategies
- screening programmes
- diagnostic tests
- drugs
- surgical techniques
14. What is a randomised controlled trial?
- One intervention is regarded as the control treatment (the group of participants who receive this are the control group)
- NOTE: contemporaneous (not historical) controls
- why RCTs are referred to as randomised controlled trials
15. Terminology
- Control group
- can be
- conventional practice
- no intervention (this may be conventional practice)
- placebo
- Experimental group
- receive the new intervention
- (also called treatment group or intervention group interchangeably)
16. What is a randomised controlled trial?
- RCTs are
- Experiments: investigators can influence the number, type and regime of interventions
- Quantitative: measure events rather than trying to interpret them in their natural settings
- Comparative: compare two or more interventions
17. What is a randomised controlled trial?
- More complex definition
- Quantitative, comparative, controlled experiments in which a group of investigators study two or more interventions in a series of participants who are allocated randomly to each intervention group
18. Inclusion/exclusion criteria
- Decision rules applied to potential trial participants to judge eligibility for inclusion in the trial
- See CONSORT statement
- www.consort-statement.org
- Important that they are applied identically to all groups in a trial!
19. What is randomisation?
- Randomisation is the process of random allocation (a minimal sketch follows below)
- Allocation is not determined by investigators, clinicians or participants
- Equal chance of being assigned to each intervention group
- Individual people
- patients
- caregivers (physicians, nurses etc)
- Groups of people: cluster randomisation
- (covered in more depth in a later lecture)
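The sketch below illustrates simple (unrestricted) individual randomisation as described on this slide. It is a minimal illustration in Python, not a method prescribed by the lecture; the participant IDs, arm names and seed are hypothetical.

```python
import random

def randomise(participant_ids, arms=("experimental", "control"), seed=None):
    """Simple (unrestricted) randomisation: each participant has an equal,
    independent chance of being allocated to each intervention arm."""
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in participant_ids}

# Hypothetical example: allocate 12 participants to two arms.
allocation = randomise([f"P{i:02d}" for i in range(1, 13)], seed=42)
for pid, arm in allocation.items():
    print(pid, arm)
```

Because each allocation is an independent coin toss, group sizes can drift apart in small trials; restricted methods such as blocking (sketched later under allocation concealment) are often used to keep them balanced.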
20. Pseudo-randomisation
- Other allocation methods include
- according to date of birth
- the number on hospital records
- date of invitation etc.
- These are NOT regarded as random
- These are called pseudo- or quasi-random
21. Terminology
- Controlled clinical trials (CCTs) are not the same as randomised controlled trials
- Controlled clinical trials include non-randomised controlled trials and randomised controlled trials
22. Why use randomisation?
- Characteristics similar across groups at baseline
- can isolate and quantify the impact of interventions, with effects from other factors minimised
- Risk of imbalance is not abolished completely even with perfect randomisation
- To combat selection bias
- The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials, Kunz and Oxman 1998 BMJ
- http://www.bmj.com/cgi/content/abstract/317/7167/1185
23. Why do we need a control group?
- A control group is not needed if results are completely predictable
- Parachutes when jumping from a plane
- A new drug cures a few rabies cases
- But
- No intervention has 100% efficacy
- Many diseases recover spontaneously
24. Regression to the mean
- Occurs when an intervention is aimed at a group, or a characteristic, that is very different from the average
- For example, if people are selected because they have high blood pressure, their future measurements will tend to be closer to the population mean (see the simulation sketch below)
- Morton and Torgerson. BMJ 2003;326:1083-4
- Bland and Altman. BMJ 1994;308:1499 and 309:780
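A minimal simulation sketch of this effect in Python with NumPy. The blood pressure means, the 140 mmHg selection threshold and the split between stable between-person variation and visit-to-visit noise are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assume each measured blood pressure = a person's stable true value + visit-to-visit noise.
true_bp = rng.normal(120, 10, n)            # underlying long-run average per person
measure_1 = true_bp + rng.normal(0, 8, n)   # first measurement (used for selection)
measure_2 = true_bp + rng.normal(0, 8, n)   # later measurement, no intervention given

selected = measure_1 > 140                  # select the "high blood pressure" group
print("Mean at measurement 1 (selected):", measure_1[selected].mean())
print("Mean at measurement 2 (selected):", measure_2[selected].mean())
# Measurement 2 is closer to the population mean even though nothing was done:
# regression to the mean, which an untreated control group would reveal.
```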
25. Distribution of results
[Figure: overlapping distributions of results for measurement 1 and measurement 2, with a selection threshold marked]
26. Hawthorne effect
- Experimental effect in the direction expected, but not for the reason expected
- Essentially, studying/measuring something can change what you are studying/measuring
27. Placebo effect
- Effect (usually, but not always, positive) attributed to the expectation that a therapy will have an effect
- The effect is due to the power of suggestion
- A placebo is an inert medication or procedure
- Waber et al 2008 JAMA. Commercial Features of Placebo and Therapeutic Efficacy
- http://jama.ama-assn.org/cgi/content/full/299/9/1016
28. Effect of an intervention
[Figure: effect of an intervention in the experimental versus control group, with labels: real difference, signal, effect size, therapeutic effect, placebo effect, noise, regression to the mean, Hawthorne effect]
29. Minimising bias in RCTs
- Blinding
- Single blind: participants are unaware of treatment allocation
- Double blind: both participants and investigators are unaware of treatment allocation
- Requires use of placebos in drug trials
- Schulz and Grimes (2002)
30. Concealment of the random allocation list
- Trials with inadequate allocation concealment have been associated with larger treatment effects compared with trials in which authors reported adequate allocation concealment (see the allocation-list sketch below)
- Schulz KF (1995). Subverting randomisation in controlled trials. JAMA, 274, 1456-8
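A minimal sketch, in Python, of how an allocation list might be prepared in advance using permuted blocks; in practice the list is held centrally (or in sealed opaque envelopes) so recruiters cannot foresee the next allocation. The block size, arm labels and seed are illustrative assumptions, not a method taken from the slides.

```python
import random

def permuted_block_list(n_participants, block_size=4, arms=("A", "B"), seed=2024):
    """Generate an allocation list in permuted blocks so that group sizes stay
    balanced; the list itself must be concealed from those recruiting patients."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)               # shuffle within each block
        allocations.extend(block)
    return allocations[:n_participants]

# Hypothetical example: a concealed list for the first 12 participants.
print(permuted_block_list(12))
```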
31. Blinding, placebos
- RCTs should use the maximum degree of blinding that is possible
- A placebo is a dummy treatment given when there is no obvious standard treatment
- needed because the act of taking a treatment may have some effect, which we need to be able to attribute
- for double blinding, treatments must be indistinguishable to those affected
32. Empirical evidence of bias
33. Explanatory and pragmatic questions
- Explanatory
- Can it work in an ideal setting?
- Efficacy
- Hypothesis testing
- Placebo controlled
- Double blind
- Pragmatic
- Does it work in the real world?
- Effectiveness
- Choice between alternative approaches to health care
- Standard care
- Open
34. Key differences between explanatory and pragmatic trials (1)
- Question: explanatory = efficacy; pragmatic = effectiveness
- Setting: explanatory = laboratory; pragmatic = normal practice
- Participants: explanatory = strictly defined; pragmatic = broader, clinically indicated (uncertainty)
- Interventions: explanatory = strictly defined; pragmatic = as clinical practice
35. Key differences between explanatory and pragmatic trials (2)
- Outcomes: explanatory = short-term surrogates; pragmatic = long-term, patient-centred and resource orientated
- Size: explanatory = small (usually single centre); pragmatic = larger (often multi-centre)
- Analysis: explanatory = treatment received; pragmatic = intention to treat (see the sketch below)
- Relevance to practice: explanatory = indirect; pragmatic = direct
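A minimal sketch of the analysis distinction in Python. The tiny dataset, the single crossover and the outcome scores are entirely hypothetical, purely to show how the same participants are grouped differently by an intention-to-treat analysis (by arm randomised) and a treatment-received analysis (by arm actually taken).

```python
from statistics import mean

# Hypothetical participants: (arm randomised to, arm actually received, outcome score)
data = [
    ("exp", "exp", 7), ("exp", "exp", 6), ("exp", "ctrl", 3),   # one crossover
    ("ctrl", "ctrl", 4), ("ctrl", "ctrl", 5), ("ctrl", "ctrl", 3),
]

def group_means(key_index):
    """Mean outcome per arm, grouping by the chosen column (0 = randomised, 1 = received)."""
    groups = {}
    for row in data:
        groups.setdefault(row[key_index], []).append(row[2])
    return {arm: mean(scores) for arm, scores in groups.items()}

print("Intention to treat:", group_means(0))   # analyse as randomised
print("Treatment received:", group_means(1))   # analyse as actually treated
```

The intention-to-treat grouping preserves the comparability created by randomisation; grouping by treatment received re-sorts participants and so can reintroduce selection bias, as the next slide illustrates.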
36. Example of selection bias for per-protocol analysis in an open trial
[Figure from White (2005): in each randomised arm (Exp, Ctrl), some participants receive the experimental treatment (E) and some receive none]
37. Terminology: explanatory versus pragmatic
- Explanatory trials
- estimate efficacy, that is, the benefit the treatment produces under ideal conditions
- Pragmatic trials
- estimate effectiveness, that is, the benefit the treatment produces under routine clinical practice
- Roland M, Torgerson D. What are pragmatic trials? BMJ 1998;316:285
38. RCT as the gold standard
- The randomised controlled trial is widely regarded as the gold standard for evaluating health care technologies because it allows us to be confident that a difference in outcome can be directly attributed to a difference in the treatments and not to some other factor
39. RCT strengths
- Confounding variables minimised
- Only research design which can, in principle, yield causal relationships
- can clarify the direction of cause and effect
- Accepted by the EBM school
- Don't have to know everything about the participants
40. RCT limitations
- Contamination of intervention groups
- Comparable controls
- Problems with blinding
- What to do about attrition?
- Are patients/professionals willing to be in a trial different from refusers? (external validity)
- Cost!
41. Other issues in RCTs (1)
- Ethics
- Management issues
- Interim analysis and stopping rules
- part of ethical concern
- mechanisms to avoid patient harm
- Data Monitoring and Safety Committee required for trials
- Clemens F et al. Data monitoring in randomised controlled trials: surveys of recent practice and policies. Clin Trials 2005;2(1):22-33
42. Other issues in RCTs (2)
- A power calculation is essential for the validity of a trial and will always be necessary for grant applications and in publications of the trial (later lecture; a sample-size sketch follows below)
- The methods of randomisation should always be reported. It is not enough to say that the patients were randomly allocated to the treatments (see CONSORT)
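A minimal sketch of a standard two-group sample-size calculation for a continuous outcome, using the normal approximation, in Python with SciPy. The 5% two-sided significance level, 90% power, and the assumed difference and standard deviation are illustrative values, not figures from the lecture.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    """Approximate number of participants per arm needed to detect a mean
    difference `delta` between two arms with common standard deviation `sd`."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = norm.ppf(power)            # value corresponding to the chosen power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Illustrative assumption: detect a 5 mmHg difference in blood pressure, SD 12 mmHg.
print(n_per_group(delta=5, sd=12))      # about 122 per group under these assumptions
```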
43. Parallel group (simple) RCT design in practice
[Flow diagram: patient eligible for either treatment; patient gives informed consent; if no, exclude from trial (standard treatment); if yes, randomise to experimental treatment or standard treatment]
44. Summary
- Gold standard of research designs
- Individual patients are randomly allocated to receive the experimental treatment (intervention group) or the standard treatment (control group)
- Maximises the potential for attribution
- Randomisation guards against selection bias between the two treatment groups
- Standard statistical analysis
- Good internal validity
- May lack generalisability due to highly selected participants
- Can be costly to set up and conduct; ethical issues
45. Good study design
- General considerations
- maximise attribution
- ensure no factor other than the intervention differs between the intervention and control group
- random allocation, if adequately carried out, will in the long run ensure comparable groups with respect to all factors
- minimise all sources of error
- systematic error (bias)
- random error (chance)
- be practical and ethical
46. Minimise sources of error
- Systematic errors (bias)
- inaccuracy which differs in size or direction between the groups under study
- minimise bias by ensuring that the methods used are applied in the same manner to all subjects, irrespective of which group they belong to
- Random errors (chance)
- inaccuracy which is similar in the different groups of subjects being compared
- minimised by adequate sample size and accurate methods of measurement
- Elwood (1998)
47. Study designs
- Experimental (randomised controlled trial)
- a new intervention is deliberately introduced and compared with standard care
- Quasi-experimental (non-randomised; controlled before-and-after)
- researchers do not have full control over the implementation of the intervention (opportunistic research)
- Observational (cohort, case-control, cross-sectional)
- describes current practice
- observed differences cannot be attributed solely to a treatment effect
48. Evaluation of health care interventions
- Randomised controlled trials are considered the gold standard
- However, there is some debate over the advantages and disadvantages of different research designs for assessing the effectiveness of healthcare interventions
- Polarised views
- observational methods provide no useful means of assessing the value of a therapy (Doll, 1993)
- RCTs may be unnecessary, inappropriate, impossible or inadequate (Black, 1996)
- approaches should be seen as complementary and not as alternatives (Black, 1996)
- Interpretation of RCTs in terms of generalisability
49. Useful/interesting links
- www.jameslindlibrary.org (history)
- www.consort-statement.org (CONSORT)
- www-users.york.ac.uk/mb55/pubs/pbstnote.htm (all the statistics notes from the BMJ)
- www.ctu.mrc.ac.uk (MRC CTU)
- www.rcrt.ox.ac.uk (under construction)
- Doll R. Clinical trials: the 1948 watershed. BMJ 1998;317:1217-20
- Kunz R, Oxman AD. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. BMJ 1998;317:1185-90
50. References
- Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996;312:1215-8
- Crombie. Research in Health Care. 1996
- Doll R. Doing more good than harm: the evaluation of health care interventions. Ann NY Acad Sci 1993;703:310-13
- Elwood M. Critical Appraisal of Epidemiological Studies and Clinical Trials. 1998, OUP, Oxford
- Greenhalgh T. How to Read a Paper. 2001, BMJ, London
- Schulz KF, Grimes DA. Lancet Epidemiology series. 2002