Transcript and Presenter's Notes

Title: Hypothesis Testing


1
Hypothesis Testing
  • OT 667

2
Hypothesis testing defined
  • A method for deciding if an observed effect or
    result occurs by chance alone
  • OR
  • if we can argue the results found actually
    happened as a result of an intervention.

3
Hypothesis testing is done through the use of
inferential statistics: statistical procedures
that allow estimation of a population
characteristic from sample data.
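As a minimal illustration (not from the slides), the sketch below uses made-up sample scores to estimate two population characteristics, the mean and the standard deviation; the data and variable names are hypothetical.

    import numpy as np

    # Hypothetical sample of outcome scores drawn from a larger population
    sample = np.array([14, 17, 15, 12, 16, 18, 13, 15])

    # Sample statistics serve as estimates of the corresponding
    # population characteristics (inferential statistics in miniature)
    print(f"estimated population mean: {sample.mean():.2f}")
    print(f"estimated population SD:   {sample.std(ddof=1):.2f}")  # ddof=1 uses the sample formula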
4
The Null Hypothesis
  • In order to decide if the results of an
    experiment occur by chance or if the effects seen
    are the result of a treatment, researchers
    declare a null hypothesis (Ho) and an alternative
    or research hypothesis (Ha).

5
The null hypothesis states that there will be NO
DIFFERENCE between groups as a result of the
treatment.
6
The alternative hypothesis indicates there WILL
be a difference between groups
7
Remember directional questions?
  • Research questions traditionally are asked in 2
    ways
  • 1) NDT treatment will result in better
    outcomes than SI treatment
  • OR
  • 2) There will be a difference between outcomes
    of NDT and SI treatment

8
  • Dynamic splints will result in better functional
    outcomes than static splints
  • There will be a change in play behaviors as a
    result of parent education intervention vs direct
    child intervention
  • Sensory integration treatment will result in
    better academic performance than perceptual motor
    treatment
9
To test a hypothesis, researchers talk about
rejecting the null in order to demonstrate the
treatment has an effect.
10
The alternative or research hypothesis states
there WILL BE a significant difference in the
outcomes between groups as a result of the
treatment.
11
When you reject the null, you say that there IS a
significant difference between the groups,
indicating the likelihood the treatment was
effective.
12
When you ACCEPT the null, you are saying there is
no difference in the outcomes of a treatment
13
Whether you accept or reject the null is based on
whether the p value from the statistical test
performed on the study outcome measures is
greater or less than a preset level of probability.
14
Probability
  • Chance of an event happening given all the
    possible outcomes
  • Statistically, probability is a means of
    predicting an outcome
  • Flipping a coin, rolling the dice.

15
…more on probability
  • The likelihood that any one event will occur,
    given all the possible outcomes
  • The probability something should happen, not the
    probability it will happen!
  • Probability is used as a guideline to make
    judgments about how well sample data estimate
    characteristics of a population (see the sketch
    after this list)

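A small sketch (not from the slides; data simulated) of the coin and dice examples: the theoretical probability of an outcome given all possible outcomes, shown alongside the long-run frequency from a simulation.

    import numpy as np

    # Theoretical probability = favorable outcomes / all possible outcomes
    p_heads = 1 / 2    # one favorable face of a fair coin
    p_three = 1 / 6    # one favorable face of a fair die

    # Simulated long-run frequencies approach the theoretical values
    rng = np.random.default_rng(0)
    flips = rng.integers(0, 2, size=100_000)   # 0 = tails, 1 = heads
    rolls = rng.integers(1, 7, size=100_000)   # faces 1 through 6
    print(p_heads, flips.mean())               # both near 0.50
    print(p_three, (rolls == 3).mean())        # both near 0.17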
16
Indicating Probability
  • In research, the probability of achieving a
    certain outcome is denoted by a lowercase p
  • Setting the level of probability is called
    setting the alpha level
  • When a study's p value is equal to or smaller
    than the preset alpha level, the outcome is
    called statistically significant

17
This means if your statistical test gives you a
value that equates with a p < .05, like a p = .04,
the outcome is statistically significant.
18
On the other hand, if your statistical test
results in a p > .05, like a p = .08, the study
results are NOT statistically significant.
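To make the comparison of p to alpha concrete, here is a minimal sketch using simulated scores for two hypothetical treatment groups and SciPy's independent-samples t test; the groups, means, and alpha of .05 are all assumptions for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    group_a = rng.normal(loc=52, scale=10, size=30)  # hypothetical treatment group scores
    group_b = rng.normal(loc=45, scale=10, size=30)  # hypothetical comparison group scores

    alpha = 0.05                                     # preset level of probability
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value <= alpha:
        print("statistically significant: reject the null")
    else:
        print("not statistically significant: accept (fail to reject) the null")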
19
Be careful with levels of significance.
  • The p value should not be used to indicate the
    validity of the outcome of a study.
  • If your experiment results in a p value of less
    than .001, it supports a relative degree of
    confidence in the decision to reject the null,
    but that is all

20
Interpreting p values…
  • p = the probability of finding an effect as big
    as the one observed when the null is true
  • p is based on the null being true
  • Interpret p values very carefully

21
Point Estimates
  • The alpha level is a point estimate
  • This means the level of significance is a
    criterion for judging if an observed difference
    between two groups is a real difference or a
    difference due to sampling error

22
Confidence Intervals
  • A way to estimate a range of possible values
    within which the population parameters can be
    found
  • Estimating a range of values within which the
    parameter may lie incorporates the possibility
    of various levels of sampling error (see the
    sketch after this list)

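As referenced above, a minimal sketch (hypothetical scores) of a 95% confidence interval for a population mean, built from the sample mean, its standard error, and a t critical value.

    import numpy as np
    from scipy import stats

    scores = np.array([12, 15, 14, 10, 13, 17, 11, 14, 16, 12])  # hypothetical sample
    n = len(scores)
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(n)      # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)      # two-tailed 95% critical value

    low, high = mean - t_crit * sem, mean + t_crit * sem
    print(f"95% CI for the population mean: ({low:.2f}, {high:.2f})")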
23
The tail of a question
  • One-tailed questions (directional questions)
    - one intervention will result in a better
      outcome
    - are more powerful and more difficult to
      demonstrate
  • Two-tailed questions (nondirectional questions)
    - there will be a difference between
      treatments
    - which one results in a better outcome is
      not indicated

24
The story of the tails…
  • Statistically, a one-tailed test compares its
    test statistic to a critical value at the
    positive end of the normal curve; the area
    beyond that value is called the critical region
  • A two-tailed test splits the critical region
    between the positive and negative ends of the
    normal curve

25
One-tailed tests are regarded as more powerful
because the entire critical region lies in one
tail of the distribution, so a smaller difference
between the 2 groups in the predicted direction is
enough to reach statistical significance.
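A brief sketch of the one-tailed vs two-tailed distinction, using the alternative argument of SciPy's t test (available in SciPy 1.6+); the NDT/SI scores are simulated, and the one-tailed question assumes NDT is predicted to do better.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    ndt = rng.normal(55, 10, size=30)  # hypothetical NDT outcome scores
    si = rng.normal(50, 10, size=30)   # hypothetical SI outcome scores

    # Two-tailed: "there will be a difference between treatments"
    two_tailed = stats.ttest_ind(ndt, si)
    # One-tailed: "NDT will result in better outcomes than SI"
    one_tailed = stats.ttest_ind(ndt, si, alternative="greater")

    print(f"two-tailed p = {two_tailed.pvalue:.3f}")
    print(f"one-tailed p = {one_tailed.pvalue:.3f}")  # half the two-tailed p when t is positive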
26
The Direction of a Question
  • Is specified prior to statistical analysis
  • Cannot be changed once the data analysis begins
  • So you had BETTER be sure of the outcomes when
    you ask a one-tailed question...

27
There is a risk of error in statistical testing.
There are two kinds of error…
28
Type I Error
  • When you make a Type I error, you reject a null
    hypothesis and say there IS a difference between
    2 groups when in fact there is NO difference.
  • The convention of setting the level of
    significance or alpha level at .05 means the
    researcher is willing to accept making a Type I
    error 5 times out of 100 (see the simulation
    sketch after this list)

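As referenced above, a small simulation sketch (all data made up) in which the null is true by construction, because both groups come from the same population; roughly 5 in 100 of the tests still come out "significant" at alpha = .05, which is the Type I error rate.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    alpha, n_studies = 0.05, 10_000
    false_rejections = 0

    for _ in range(n_studies):
        # Both groups drawn from the SAME population, so the null is true
        a = rng.normal(50, 10, size=20)
        b = rng.normal(50, 10, size=20)
        if stats.ttest_ind(a, b).pvalue <= alpha:
            false_rejections += 1  # a "significant" result here is a Type I error

    print(f"observed Type I error rate: {false_rejections / n_studies:.3f}")  # close to .05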
29
Type II Error
  • The risk of accepting a null that is false. That
    means you say the results of a study are NOT
    significant when, in fact, they are.
  • The probability of a Type II error is called beta

30
Statistical Power
  • Statistical power is the probability that a test
    will lead to rejecting a false null (correctly
    saying there IS a difference).
  • The more powerful a test, the less likely you are
    to make a Type II error.

31
Items that affect power
  • Variance of a sample (the less the variance, the
    more powerful a test)
  • The significance criterion (alpha level)
  • Sample size (the bigger the better; see the
    sketch after this list)
  • Effect size

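As referenced in the list above, a simulation sketch (with an assumed effect of 0.5 SD and alpha = .05) showing how estimated power, the proportion of simulated studies that correctly reject a false null, grows with sample size.

    import numpy as np
    from scipy import stats

    def estimated_power(n, effect=0.5, alpha=0.05, sims=5_000, seed=4):
        """Fraction of simulated studies that reject the (false) null."""
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(sims):
            a = rng.normal(0.0, 1.0, size=n)     # control group
            b = rng.normal(effect, 1.0, size=n)  # treatment group with a real effect
            if stats.ttest_ind(a, b).pvalue <= alpha:
                rejections += 1
        return rejections / sims

    for n in (10, 20, 50, 100):
        print(f"n per group = {n:3d}   power ~ {estimated_power(n):.2f}")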
32
Effect Size
  • Measured by the impact of the independent
    variable
  • The effectiveness of the independent variable,
    judged by the size of the difference between the
    sample means of two groups (see the Cohen's d
    sketch after this list)
  • Null hypothesis implies an effect size of 0
  • The larger the effect size, the greater the
    effect of the independent variable
  • Effect size is cited in a range of 0-1

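As referenced above, a minimal sketch of one common effect size measure, Cohen's d (the difference between two sample means in pooled standard deviation units); the splint scores are hypothetical.

    import numpy as np

    def cohens_d(group1, group2):
        """Difference between two sample means, in pooled-SD units."""
        n1, n2 = len(group1), len(group2)
        v1, v2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
        pooled_sd = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        return (np.mean(group1) - np.mean(group2)) / pooled_sd

    dynamic = [22, 30, 18, 27, 24, 21]  # hypothetical dynamic-splint outcome scores
    static = [20, 26, 17, 25, 23, 19]   # hypothetical static-splint outcome scores
    print(f"Cohen's d = {cohens_d(dynamic, static):.2f}")  # ~0.5, a medium-sized effect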
33
Parts of Chapters 17 and 18 on the quiz and the final
exam
  • Chapter 17, pp. 372-383, up to but not including
    "Determining areas under the normal curve"
  • Chapter 18, pp. 387-388
  • pp. 397-409, excluding parametric and nonparametric
    statistics