What is Critical Appraisal?

Transcript and Presenter's Notes
1
What is Critical Appraisal?
  • CRITICAL APPRAISAL
  • QUALITY ASSESSMENT

2
What is quality?
  • Quality is a complex concept or construct.
  • It can be acknowledged without difficulty, but it is not easy to define or measure.
  • Quality means different things to different people.

3
  • 1. External validity
  • Applicability
  • Generalisability
  • the precision and extent to which it is possible to generalise the results of the trial to other settings

4
  • It is related to the definition of well-formulated questions
  • (population, intervention, outcome).
  • It is determined by the inclusion and exclusion criteria.

5
  • 2. Internal validity
  • the degree to which the trial design, conduct,
    analysis, and presentation have minimized or
    avoided systematic biases.

6
  • 3. The appropriateness of data analysis and presentation.

7
  • 4. The ethical implications of the intervention being evaluated.

8
  • 5. Precision
  • a measure of the likelihood of chance effects leading to random errors.
  • It is reflected in the confidence interval (see the sketch below).
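To make the notion of precision concrete, here is a minimal Python sketch (with invented event counts, not data from this presentation) that computes a 95% confidence interval for a risk difference using the usual normal approximation; a wider interval means lower precision, i.e. more room for random error.

    from math import sqrt

    # Hypothetical trial results (invented for illustration):
    # 30/100 events in the treatment arm, 45/100 in the control arm.
    events_t, n_t = 30, 100
    events_c, n_c = 45, 100

    p_t = events_t / n_t   # risk in the treatment arm
    p_c = events_c / n_c   # risk in the control arm
    rd = p_t - p_c         # risk difference (point estimate)

    # Standard error of the risk difference (normal approximation)
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)

    # 95% confidence interval: point estimate +/- 1.96 standard errors
    lower, upper = rd - 1.96 * se, rd + 1.96 * se
    print(f"risk difference = {rd:.2f}, 95% CI {lower:.2f} to {upper:.2f}")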

9
  • Internal validity is usually the only dimension that has been the subject of methodological studies.

10
WHY?
  • 1. To limit bias in conducting the systematic review (there is no perfect trial).
  • 2. To gain insight into potential comparisons.
  • 3. To guide interpretation of findings.

11
RCT
  • Advantages
  • evaluation of a single variable in a precisely
    defined patient group
  • Prospective design
  • Potentially eradicates bias by comparing two
    otherwise identical groups

12
  • Disadvantages
  • Expensive and time-consuming
  • Many RCTs are performed on too few patients or for too short a period
  • Mostly funded by large research bodies (universities, government sponsors, or drug companies)

13
What are the main types of bias in RCTs ?
  • Bias can occur during the course of a trial
  • Bias can arise during the publication and distribution of trials.
  • There is bias in the way readers assess the
    quality of RCTs

14
Sources of bias in trials of healthcare
interventions
  • selection bias
  • performance bias
  • attrition bias
  • detection bias

16
  • 1. Selection bias
  • results from the way that comparison groups are assembled.

17
  • The main appeal of the RCT in health care is its potential for reducing selection bias.
  • Randomisation, if done properly, can keep
    study groups as similar as possible at the outset
  • No other study design allows researchers to
  • balance unknown prognostic factors at baseline

18
  • This means that those who are recruiting participants, and the participants themselves, should be unaware of the assignment until after the decision has been made.
  • After assignment, they should not be able to alter the decision.

19
Randomisation blinding (allocation concealment)
  • The allocation should be administered by someone who is not responsible for recruiting subjects, for example:
  • centralised (central office unaware of subject characteristics) or pharmacy-controlled randomisation
  • an on-site computer system combined with allocations kept in a locked, unreadable computer file that can be accessed only after the characteristics of an enrolled participant have been entered
  • sequentially numbered, sealed, opaque envelopes
  • (a minimal sketch of a central randomisation service follows below)
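As an illustration of the centralised approach in the list above, the following Python sketch mimics a central randomisation service: the allocation sequence is generated in advance, but an assignment is released only after a participant has been irreversibly registered, so recruiters cannot foresee or alter it. The class name, block size, and participant details are invented for the example.

    import random

    class CentralRandomisation:
        def __init__(self, n_participants, seed=None):
            rng = random.Random(seed)
            # Pre-generate a 1:1 allocation sequence in randomly permuted blocks of 4.
            sequence = []
            for _ in range(0, n_participants, 4):
                block = ["treatment", "treatment", "control", "control"]
                rng.shuffle(block)
                sequence.extend(block)
            self._sequence = sequence   # kept hidden from recruiters
            self._register = []         # enrolled participants, in order of registration

        def enrol(self, participant_id, baseline_characteristics):
            # The participant is registered first; only then is the next
            # allocation released, and the decision cannot be taken back.
            self._register.append((participant_id, baseline_characteristics))
            return self._sequence[len(self._register) - 1]

    service = CentralRandomisation(n_participants=8, seed=42)
    print(service.enrol("P001", {"age": 54, "sex": "F"}))
    print(service.enrol("P002", {"age": 61, "sex": "M"}))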

20
  • Empirical research has shown that lack of adequate allocation concealment is associated with bias (Chalmers 1983, Schulz 1995, Moher 1998).
  • One study showed that trials with inadequate allocation concealment can exaggerate the effects of interventions by as much as 40% on average.

21
  • Allocation concealment is rarely reported and perhaps rarely implemented in RCTs.
  • A recent study showed that allocation concealment was reported in less than 10% of articles describing RCTs published in prominent journals in five different languages.

22
  • Even when the allocation codes are kept in sealed
    opaque envelopes, investigators, for instance,
    can look through the envelopes using powerful
    lights or even open the envelope using steam and
    reseal it without others noticing.

23
  • Inadequate concealment:
  • the use of case record numbers,
  • dates of birth or day of the week,
  • an open list of random numbers,
  • or any other procedure that is entirely transparent before allocation.

24
  • Unclear concealment:
  • when studies do not report any concealment approach.

25
  • 2. Performance bias (ascertainment bias)
  • differences in the care provided to the participants in the comparison groups, other than the intervention being evaluated.

26
Blinding
  • keeping the people involved in the trial who administer the interventions, the participants who receive the interventions, and the individuals in charge of assessing and recording the outcomes unaware of the identity of the interventions for as long as possible.

27
  • contamination (provision of the intervention to the control group)
  • co-intervention (provision of unintended additional care to either comparison group)
  • can affect study results (CCSG 1978, Sackett 1979b).

28
  • There is evidence that participants who are aware of their assignment status report more symptoms, leading to biased results (Karlowski 1975).
  • Studies that are not double-blinded can exaggerate effect estimates by 17% on average.

29
  • Only about half of the trials that could be double-blinded actually achieved double-blinding.
  • Even when trials are described as double-blind, most do not provide adequate information on how blinding was achieved or statements on the perceived success (or failure) of double-blinding efforts.

30
Differences between allocation concealment and blinding
  • Allocation concealment eliminates selection bias; blinding reduces performance and detection biases.
  • Allocation concealment is always possible, regardless of the study question; blinding after allocation may be impossible, as in trials comparing surgical with medical treatment.
  • Allocation concealment is relevant to the trial as a whole, and thus to all outcomes being compared; blinding is often outcome-specific and may be accomplished successfully for some outcomes in a study but not others.

31
  • 3. Attrition bias (exclusion bias)
  • differences between the comparison groups in the loss of participants from the study.
  • Ideally, all participants in a trial should complete the study. In reality, however, most trials have missing data.

32
  • Data can be missing because
  • some of the participants drop out before the end
    of the trial
  • participants do not follow the protocol either
    deliberately or accidentally
  • some outcomes are not measured correctly or
    cannot be measured at all at one or more
    time-points.

33
  • Intention-to-treat analysis:
  • all the study participants are included in the analyses as part of the groups to which they were randomised, regardless of whether they completed the study or not (a minimal sketch follows below).
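A minimal Python sketch of the idea, using invented participant records: every participant counts in the arm to which they were randomised, whether or not they completed the protocol.

    # (id, randomised_arm, completed_protocol, improved) -- all values invented
    participants = [
        ("P1", "treatment", True,  True),
        ("P2", "treatment", False, False),  # dropped out, still analysed as treatment
        ("P3", "treatment", True,  True),
        ("P4", "control",   True,  False),
        ("P5", "control",   False, True),   # deviated from protocol, still analysed as control
        ("P6", "control",   True,  False),
    ]

    def response_rate(records, arm):
        arm_records = [r for r in records if r[1] == arm]
        return sum(r[3] for r in arm_records) / len(arm_records)

    # Intention to treat: everyone is analysed in the group they were randomised to.
    for arm in ("treatment", "control"):
        print(arm, f"ITT response rate = {response_rate(participants, arm):.2f}")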

34
  • Worst-case-scenario sensitivity analysis (a worked sketch follows below):
  • assign the worst possible outcomes to the missing patients or time-points in the group that shows the best results,
  • and the best possible outcomes to the missing patients or time-points in the group with the worst results,
  • then evaluate whether the new analysis contradicts or supports the results of the initial analysis, which does not take the missing data into account.
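The following sketch, with invented response counts, shows the mechanics: missing patients in the better-performing group are counted as non-responders, missing patients in the worse-performing group as responders, and the resulting difference is compared with the one observed in the complete cases.

    # Hypothetical data: group A (best observed results) has 40/50 responders
    # with 10 missing; group B has 25/50 responders with 8 missing.
    groups = {
        "A": {"responders": 40, "observed_n": 50, "missing": 10},
        "B": {"responders": 25, "observed_n": 50, "missing": 8},
    }

    def observed_rate(g):
        return g["responders"] / g["observed_n"]

    def worst_case_rate_best_group(g):
        # missing patients assumed NOT to respond
        return g["responders"] / (g["observed_n"] + g["missing"])

    def worst_case_rate_worst_group(g):
        # missing patients assumed to respond
        return (g["responders"] + g["missing"]) / (g["observed_n"] + g["missing"])

    observed_diff = observed_rate(groups["A"]) - observed_rate(groups["B"])
    worst_case_diff = (worst_case_rate_best_group(groups["A"])
                       - worst_case_rate_worst_group(groups["B"]))
    print(f"observed difference:   {observed_diff:.2f}")
    print(f"worst-case difference: {worst_case_diff:.2f}")
    # If the worst-case result contradicts the observed one, the conclusion
    # is sensitive to the missing data.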

35
  • If the decisions on withdrawals have been made
    because of knowledge of the interventions
    received by the participants, this constitutes
    yet another cause of ascertainment bias.

36
  • 4. Detection bias
  • differences between the comparison groups in
    outcome assessment.

37
  • Blinding the people who will assess outcomes is particularly important in research with subjective outcome measures, such as pain (Karlowski 1975, Colditz 1989, Schulz 1995).

38
Bias in non-RCTs
  • randomisation is the only means of allocation
    that controls for unknown and unmeasured
    confounders as well as those that are known and
    measured
  • It is possible to control for confounders that are known and measured in observational studies, such as case-control and cohort studies, by matching.

39
  • Various criteria have been suggested to
    critically appraise the validity of observational
    studies (Horwitz 1979, Feinstein 1982, Levine
    1994, Bero 1999)

40
  • The major difference between randomised trials and observational studies is selection bias:
  • reviewers must make judgments about which confounders are important and the extent to which these were appropriately measured and controlled for.

41
  • Assessing performance bias is also more difficult in observational studies:
  • it is necessary to measure exposure to the intervention of interest and ensure that there were no differences in exposure to other factors.
  • Blinding issues are similar to those in RCTs.
  • Measurement of exposures should be similar and unbiased across groups.

42
  • Attrition bias issues are similar to those in RCTs.

43
  • Detection bias issues are similar to those in RCTs.

44
  • Despite these concerns, there is sometimes good reason to rely on observational studies, and to include such studies in Cochrane reviews.
  • For example, well-designed observational studies of:
  • screening for cervical cancer,
  • dissemination of clinical practice guidelines to change professional practice,
  • rare adverse effects of medication.

45
TOOLS
  • Checklist: the components are evaluated separately and do not have numerical scores attached to them.
  • Scale: each item is scored numerically and an overall quality score is generated.

46
  • There are many tools to choose from when assessing trial quality.
  • The advantage of this informal approach:
  • it is simple and always yields an assessment tool.
  • The disadvantages:
  • it can produce variable assessments of the same trials when used by multiple individuals,
  • and it may not be able to discriminate between studies of good and poor quality.

47
  • You can create a tool by selecting a group of items according to your definition of quality, deciding how to score each item, and using the tool straightaway.
  • The advantages:
  • it can yield tools with known reliability
  • and construct validity, which would allow readers to discriminate among trials of varied quality.
  • The disadvantage:
  • a time-consuming process.

48
  • A systematic search of the literature identified 9 checklists and 25 scales for assessing trial quality (Moher 1995).
  • These scales and checklists include anywhere from 3 to 57 items and take from 10 to 45 minutes to complete.
  • There are now at least twice as many scales for assessing trial quality.

49
  • Checklists for appraising primary clinical research are now well established,
  • for example, the Journal of the American Medical Association (JAMA), the Jadad scale, the PEDro scale, the Oxford Pain Validity Scale, and CONSORT.
  • These checklists are organised around the type of question (therapy, prevention, diagnosis, aetiology, or prognosis).

50
  • Scoring is based on reporting (rather than on whether things were done appropriately in the study).
  • Many also contain items that are not directly
    related to validity, such as
  • whether a power calculation was done (relates
    more to the precision of the results)
  • whether the inclusion and exclusion criteria were
    clearly described (relates more to applicability
    than validity).

51
JADAD
  • easy and quick to use (it takes less than five minutes to score a trial report)
  • provides consistent measurements and has construct validity,
  • shown in the areas of infertility, homoeopathy, anaesthesia, pain relief, and neonatology,
  • and in sets of trials published in five different languages (a scoring sketch follows below).
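A small Python sketch of Jadad-style scoring (0 to 5 points), based on the commonly cited items (randomisation, double-blinding, and description of withdrawals); it is illustrative only, not the official instrument.

    def jadad_score(randomised, randomisation_appropriate,
                    double_blind, blinding_appropriate,
                    withdrawals_described):
        score = 0
        if randomised:
            score += 1
            if randomisation_appropriate is True:
                score += 1   # e.g. computer-generated random sequence
            elif randomisation_appropriate is False:
                score -= 1   # e.g. alternation or allocation by date of birth
        if double_blind:
            score += 1
            if blinding_appropriate is True:
                score += 1   # e.g. identical placebo
            elif blinding_appropriate is False:
                score -= 1   # blinding method clearly inadequate
        if withdrawals_described:
            score += 1       # numbers and reasons per group reported
        return max(score, 0)

    # A randomised, double-blind trial with appropriate methods and
    # withdrawals reported scores the maximum of 5.
    print(jadad_score(True, True, True, True, True))    # -> 5
    print(jadad_score(True, None, False, None, False))  # -> 1 (methods not described)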

53
  • It has been shown that studies that obtain 2 or fewer points are likely to produce treatment effects that are 35% larger than those produced by trials with 3 or more points.

54
Who should do the assessments and how?
  • 1. Number and background of raters, observers, or assessors:
  • usually two people, working independently,
  • to minimise the number of mistakes during the assessments.
  • Reaching agreement is usually easy, but it may require a third person to act as arbiter.

55
  • 2. Masking the reports (removing the names of the authors, institutions, sponsorship, publication year and journal, and the study results):
  • reduces the likelihood of bias marginally,
  • but also increases the resources required to conduct the assessments.

56
Incorporating assessments of study validity in
reviews
  • as a threshold for inclusion of studies (see the sketch after this list)
  • as a possible explanation for differences in
    results between studies
  • in sensitivity analyses
  • as weights in statistical analysis of the study
    results
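A small sketch, with invented study effects and quality scores, of the first and third uses above: applying a quality threshold for inclusion and re-running a crude pooled estimate as a sensitivity analysis (a simple unweighted average stands in for proper meta-analytic pooling).

    # Invented review data: per-study effect estimates and quality scores.
    studies = [
        {"name": "Trial A", "effect": 0.60, "quality": 5},
        {"name": "Trial B", "effect": 0.75, "quality": 4},
        {"name": "Trial C", "effect": 0.40, "quality": 2},
        {"name": "Trial D", "effect": 0.35, "quality": 1},
    ]

    def crude_pooled_effect(subset):
        return sum(s["effect"] for s in subset) / len(subset)

    # 1. As a threshold for inclusion: keep only studies scoring 3 or more.
    high_quality = [s for s in studies if s["quality"] >= 3]

    # 2. As a sensitivity analysis: compare the pooled result with and without
    #    the lower-quality studies; a large difference suggests the review's
    #    conclusions depend on study validity.
    print("all studies:      ", round(crude_pooled_effect(studies), 2))
    print("high quality only:", round(crude_pooled_effect(high_quality), 2))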

57
  • If reviewers raise the methodological cut-point
    for including studies, there will be less
    variation in validity among the included reports.

58
Limitations of quality assessment
  • inadequate reporting of trials
  • limited empirical evidence of a relationship
    between parameters thought to measure validity
    and actual study outcomes

59
  • BUT
  • there is empirical evidence that, on average,
    both inadequate concealment of allocation and
    lack of double blinding result in over-estimates
    of the effects of treatment

60
THANKS