Title: Systematic Reviews: Critical Appraisal
1Systematic Reviews: Critical Appraisal
2Formulating review questions
Searching and selecting studies
Data collection
Quality assessment
Extracting data from studies
Data Synthesis
3Roadmap
- Bias
- External validity
- Chance error
- Quality assessment tools
- Key messages
- Discussion questions
4Roadmap
- Bias
- External validity
- Chance error
- Quality assessment tools
- Key messages
- Discussion questions
6Source Population
Who took part?
7Source Population
Eligible Population
Who took part?
8Source Population
Participants
Eligible Population
Who took part?
9GATE approach design
Exposure Grp (intervention)
Participants
Comparison Grp (control)
What groups were compared?
10Outcomes?
EG
Participants
CG
What outcomes were assessed?
11PICO
1. Participants
2. Intervention
3. Comparison
4. Outcomes
12Outcomes: measurement
(GATE frame: participants split into exposure group EG and comparison group CG, outcomes a and b measured over time; sources of bias: selection, confounding, measurement)
13Four-part question
- P: general population
- I (E): does education
- C: compared with no education
- O: reduce the risk of HIV?
14Outcomes: measurement
(GATE frame: participants split into exposure group EG and comparison group CG, outcomes a and b measured over time; sources of bias: selection, confounding, measurement)
15Roadmap
- Bias
- External validity
- Chance error
- Quality assessment tools
- Key messages
- Discussion questions
16How do we investigate a research question?
Study
What they did
What you see
What they tested
17External Validity
Theory
Cause Construct
Effect Construct
cause-effect construct
- Can we generalize to other persons, places, times?
18How Do We Generalize?
specified persons, places, times
Population
19How Do We Generalize?
Population
draw sample
Sample
draw sample
20How Do We Generalize?
generalize back
generalize back
Population
Sample
21How Do We Generalize?
Our Study
22How Do We Generalize?
settings
Our Study
times
people
places
23How Do We Generalize?
less similar
settings
Our Study
less similar
less similar
times
people
places
less similar
24How Do We Generalize?
less similar
settings
Our Study
less similar
less similar
times
people
places
Gradients of Similarity
less similar
25How Do We Generalize?
generalize back
generalize back
Population
Sample
26Source Population
Who took part?
27Roadmap
- Bias
- External validity
- Chance error
- Quality assessment tools
- Key messages
- Discussion questions
29Statistical measures of chance I (tests of statistical significance)
Type I error
Type II error
30Dealing with chance error
- During design of study
- Sample size
- Power
- During analysis (Statistical measures of chance)
- Test of statistical significance (P value)
- Confidence intervals
31P-value
- The probability that the observed results occurred by chance
- Statistically non-significant results are not necessarily attributable to chance: a small sample size (low power) can fail to detect a real effect (see the sketch below)
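The point about small samples can be made concrete with a quick calculation. A minimal sketch (example numbers assumed, not from the slides) using scipy's chi-squared test: the same absolute difference in event rates is non-significant in a small study but highly significant in a large one.

```python
# Minimal sketch: the same 20% vs 30% difference is non-significant in a
# small trial but highly significant in a large one, so "non-significant"
# can simply reflect a small sample rather than absence of effect.
from scipy.stats import chi2_contingency

def p_value(events_a, n_a, events_b, n_b):
    # 2x2 table: events vs non-events in each group
    table = [[events_a, n_a - events_a],
             [events_b, n_b - events_b]]
    _, p, _, _ = chi2_contingency(table)
    return p

print(p_value(10, 50, 15, 50))        # ~0.36: n = 100, not significant
print(p_value(200, 1000, 300, 1000))  # <0.0001: n = 2000, significant
```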
32Statistical Power
- Power = 1 − type II error
- Power = 1 − β
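A minimal sketch of how power = 1 − β can be calculated for a comparison of two proportions, using a normal approximation with a two-sided alpha of 0.05 (the function name and example numbers are assumptions for illustration):

```python
# Power = 1 - beta for detecting a difference between two proportions,
# normal approximation, two-sided alpha = 0.05.
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    se = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z_alpha = norm.ppf(1 - alpha / 2)          # critical value, e.g. 1.96
    z = abs(p1 - p2) / se                      # standardized difference
    return norm.cdf(z - z_alpha)               # power = 1 - beta

print(power_two_proportions(0.20, 0.30, 50))    # ~0.21: low power, small trial
print(power_two_proportions(0.20, 0.30, 1000))  # ~0.999: very high power
```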
33Power
high
low
34Power
high
low
35Power
Very high
High enough
36P value
- A P value of 0.00001
- Clinical importance vs statistical significance
37Question?
- 20 out of 100 participants (20%)
- 80 out of 400 participants (20%)
- 2000 out of 10000 participants (20%)
- What is the difference?
3895% Confidence Interval (95% CI)
- 20 out of 100 participants (20%): 95% CI 12% to 28%
- 80 out of 400 participants (20%): 95% CI 16% to 24%
- 2000 out of 10000 participants (20%): 95% CI 19.2% to 20.8%
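These intervals follow from the simple Wald formula p ± 1.96·√(p(1 − p)/n); a short sketch reproducing the slide's numbers (which are rounded on the slide):

```python
# Wald 95% confidence interval for a proportion: p +/- 1.96 * sqrt(p(1-p)/n).
# Larger samples give the same estimate (20%) with a much narrower interval.
from math import sqrt

def wald_ci(events, n, z=1.96):
    p = events / n
    half_width = z * sqrt(p * (1 - p) / n)
    return 100 * (p - half_width), 100 * (p + half_width)  # as percentages

for events, n in [(20, 100), (80, 400), (2000, 10000)]:
    low, high = wald_ci(events, n)
    print(f"{events}/{n} = 20%, 95% CI {low:.1f}% to {high:.1f}%")
# 20/100:      95% CI 12.2% to 27.8%  (roughly 12 to 28)
# 80/400:      95% CI 16.1% to 23.9%  (roughly 16 to 24)
# 2000/10000:  95% CI 19.2% to 20.8%
```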
39Confidence interval vs P value
40Roadmap
- Bias
- External validity
- Chance error
- Quality assessment tools
- Key messages
- Discussion questions
41Quality assessment tools
- Checklist: the components are evaluated separately and do not have numerical scores attached to them
- Scale: each item is scored numerically and an overall quality score is generated (see the sketch below)
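A hypothetical illustration of the difference (item names and point values are invented, not taken from any specific tool): a checklist keeps each component's judgement separate, while a scale sums item scores into one number.

```python
# Checklist: each component judged separately, no total score.
checklist = {
    "random sequence generation": "adequate",
    "allocation concealment": "unclear",
    "blinding": "inadequate",
}

# Scale: each item scored numerically (e.g. 0-2 points) and summed.
scale_items = {
    "random sequence generation": 2,
    "allocation concealment": 1,
    "blinding": 0,
}
overall_score = sum(scale_items.values())  # single overall quality score
print(checklist, overall_score)
```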
42Validity assessment can be used
- as a threshold for inclusion of studies
- as a possible explanation for heterogeneity
- in sensitivity analyses
- as weights in meta-analysis
43- You can create a tool by selecting a group of items according to your definition of quality, deciding how to score each item, and using the tool straightaway
44- A systematic search of the literature identified 9 checklists and 25 scales for assessing trial quality (Moher 1995)
- These scales and checklists include anywhere from 3 to 57 items and take from 10 to 45 minutes to complete
45http://ssrc.tums.ac.ir/systematicreview
46http://ssrc.tums.ac.ir/systematicreview
47Limitations of quality assessment
- Scoring is based on reporting (rather than on what was actually done in the study)
48Trial quality and estimated treatment effect
- Empirical studies show that inadequate quality of trials may distort the results of trials (Jüni et al. BMJ 2001;323:42-6)
- Domains assessed: generation of allocation sequence, concealment of allocation, double blinding
- Note: ROR < 1 indicates that inadequate trial design was associated with larger estimated treatment effects
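To make the ratio of odds ratios (ROR) concrete, a hypothetical worked example (the odds ratios below are assumed, not taken from Jüni et al.):

```python
# Ratio of odds ratios: pooled effect in inadequately designed trials
# divided by the pooled effect in adequately designed trials.
or_inadequate = 0.60   # assumed pooled OR in trials with inadequate concealment
or_adequate = 0.80     # assumed pooled OR in trials with adequate concealment
ror = or_inadequate / or_adequate
print(ror)  # 0.75 < 1: inadequate trials show a larger apparent treatment benefit
```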
49Roadmap
- Bias
- External validity
- Chance error
- Quality assessment tools
- Key messages
- Discussion questions
50Key messages
- Different aspects of quality must be considered in a study review
- Sources of bias and chance error must be considered in the quality assessment of a study
- External validity is more or less a conceptual judgement
51Discussion questions
- When can we look at the external validity of a study?
- Can we use quality scores for weighting studies?
- Can we create a quality assessment tool for our specific study?
- How should we deal with unpublished information that could be necessary for quality judgement?