III. Completely Randomized Design (CRD) - PowerPoint PPT Presentation

1
III. Completely Randomized Design (CRD)
  • III.A Design of a CRD
  • III.B Models and estimation for a CRD
  • III.C Hypothesis testing using the ANOVA method
  • III.D Diagnostic checking
  • III.E Treatment differences

2
III.A Design of a CRD
  • Definition III.1 An experiment is set up using
    a CRD when each treatment is applied a specified,
    possibly unequal, number of times, the particular
    units to receive a treatment being selected
    completely at random.
  • Example III.1 Rat experiment
  • Experiment to investigate 3 rat diets with 6
    rats
  • Diet A, B, C will have 3, 2, 1 rats,
    respectively.

3
Use R to obtain randomized layouts
  • How to do this is described in Appendix B, Randomized layouts and sample size computations in R, for all the designs that will be covered in this course, and more besides.

4
R functions and output to produce randomized
layout
  • > # Obtaining randomized layout for a CRD
  • > n <- 6
  • > CRDRat.unit <- list(Rat = n)
  • > Diet <- factor(rep(c("A","B","C"), times = c(3,2,1)))
  • > CRDRat.lay <- fac.layout(unrandomized = CRDRat.unit, randomized = Diet, seed = 695)
  • > CRDRat.lay
  • fac.layout from the dae package produces the randomized layout.
  • unrandomized gives the single unrandomized factor indexing the units in the experiment.
  • randomized specifies the factor, Diet, that is to be randomized.
  • seed is used so that the same randomized layout for a particular experiment can be generated at a later date (0-1023).

5
Randomized layout
    Units Permutation Rat Diet
  1     1           4   1    A
  2     2           1   2    C
  3     3           5   3    B
  4     4           3   4    A
  5     5           6   5    A
  6     6           2   6    B
  • remove the Diet object in the workspace to avoid using it by mistake
  • remove(Diet)

6
III.B Models and estimation for a CRD
  • The analysis of CRD experiments uses
  • least-squares or maximum likelihood estimation of
    the parameters of a linear model
  • hypothesis testing based on the ANOVA method or
    maximum likelihood ratio testing.
  • Use rat experiment to investigate linear models
    and the estimation of its parameters.

7
a) Maximal model
  • Definition III.2 The maximal expectation model
    is the most complicated model for the expectation
    that is to be considered in analysing an
    experiment.
  • We first consider the maximal model for the CRD.

8
Example III.1 Rat experiment (continued)
  • Suppose the experimenter measured the liver weight as a percentage of total body weight at the end of the experiment.
  • The results of the experiment are as follows.
  • The analysis is based on a linear model, that is E[Y] = Xθ.
  • The trick is: what are X and θ going to be?

9
Perhaps?
  • Note that the numbering of the Y's does not correspond to the Rats; this does not affect the model but is neater.
  • This model can then be fitted using simple linear regression techniques.

10
Using model to predict
  • However, does this make sense?
  • It means that, for each unit increase in diet, liver weight decreases by 0.120.
  • This is sensible only if the diet differences are based on equally spaced levels of some component.
  • For example, if the diets represent 2, 4 and 6 mg of copper added to each 100 g of food.
  • But it is no good if the diets are unequally spaced (2, 4, and 10 mg Cu added) or the diets differ qualitatively.

11
Regression on indicator variables
  • In this method the explanatory variables are called factors and the possible values they take are called levels.
  • Thus, we have a factor Diet with 3 levels A, B, C.
  • Definition III.3 Indicator variables are formed from a factor as follows:
  • create a variable for each level of the factor;
  • the values of a variable are either 1 or 0: 1 when the unit has the particular level for that variable and 0 otherwise (see the sketch below).
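A minimal sketch (not from the notes): model.matrix displays the indicator variables that R forms from a factor, here a Diet factor like the one in Example III.1.

  # one 0/1 column per level of Diet; each row has a 1 in the column for its level
  Diet <- factor(rep(c("A","B","C"), times = c(3,2,1)))
  model.matrix(~ Diet - 1)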

12
Indicator-variable model
  • Hence
  • E[Yi] = αk, var[Yi] = σ², cov[Yi, Yj] = 0 (i ≠ j).
  • This can be written in matrix terms as E[Y] = XTα.
  • The model suggests 3 different expected (or mean) values for the diets.

13
General form of X for CRD
  • For the general case of a set of t Treatments, suppose Y is ordered so that all observations for
  • the 1st treatment occur in the first r1 rows,
  • the 2nd treatment occur in the next r2 rows,
  • and so on with the last treatment occurring in the last rt rows,
  • i.e. the order of the systematic layout (prior to randomization).
  • Then XT is given by a partitioned matrix whose kth column contains 1s in the rk rows receiving the kth treatment and 0s elsewhere.

(1 is only ever a vector, but 0 can be a matrix)
14
Still a linear model
  • In general, the model for the expected values is still of the general form E[Y] = Xθ
  • and, on assuming Y is normally distributed with variance σ²In,
  • we can use standard least squares or maximum likelihood estimation.

15
Estimates of expectation parameters
  • The OLS (normal) equation is X′Xθ̂ = X′Y.
  • It can be shown, by examining the OLS equation, that the estimates of the elements of α, and hence of ψ, are the means of the treatments.
  • Example III.1 Rat experiment (continued)
  • The estimates of α are the diet means: 3.1 (A), 3.3 (B) and 2.7 (C), as in the sketch below.
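A minimal sketch (assuming the data have already been entered into the data frame CRDRat.dat with variables LiverWt and Diet, as used in the ANOVA later): the least-squares estimates of the αs are just the diet means.

  with(CRDRat.dat, tapply(LiverWt, Diet, mean))
  #   A   B   C
  # 3.1 3.3 2.7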

16
Example III.1 Rat experiment (continued)
  • The estimates of the expected values, the fitted
    values, are given by

17
Estimator of the expected values
  • In general, where is the n-vector
    consisting of the treatment means for each unit
    and
  • being least squares, this estimator can be
    written as a linear combination of Y.
  • that is, can be obtained as the product of an
    matrix and the n-vector Y.
  • let us write
  • M for mean because MT is the matrix that replaces
    each value of Y with the mean of the
    corresponding treatment.

18
General form of mean operator
  • It can be shown that the general form of MT is block diagonal, the kth diagonal block being an rk × rk matrix with every element equal to 1/rk; equivalently MT = XT(XT′XT)^-1 XT′ (see the sketch below).
  • MT is a mean operator as it
  • computes the treatment means from the vector to which it is applied and
  • replaces each element of this vector with its treatment mean.
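A minimal sketch (not from the notes), continuing with the Diet factor above and assuming the projector form of MT:

  XT <- model.matrix(~ Diet - 1)                # indicator variables for Diet
  MT <- XT %*% solve(t(XT) %*% XT) %*% t(XT)    # mean operator for Diet
  round(MT, 3)                                  # block diagonal with blocks 1/r_k
  all.equal(t(MT), MT)                          # symmetric
  all.equal(MT %*% MT, MT)                      # idempotent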

19
Estimator of the errors
  • The estimator of the random errors in the observed values of Y is, as before, the difference from the fitted values.
  • That is, e = Y − MTY = (In − MT)Y.

20
Example III.1 Rat experiment: Alternative expression for fitted values
21
Residuals
  • Fitted values for orthogonal experiments are
    functions of means.
  • Residuals are differences between observations
    and fitted values

22
b) Alternative indicator-variable expectation models
  • For the CRD, two expectation models are considered.
  • The first model is the minimal expectation model: the population mean response is the same for all observations, irrespective of diet.
  • The second model is the maximal expectation model.

23
Minimal expectation model
  • Definition III.4 The minimal expectation model
    is the simplest model for the expectation that is
    to be considered in analysing an experiment.
  • The minimal expectation model is the same as the intercept-only model given for the single sample in chapter I, Statistical inference.
  • It will be this model for all analyses we consider.

24
Marginal models
  • In the regression case we obtained marginal models by zeroing some of the parameters in the full model.
  • Here this is not the case.
  • Instead we impose equality constraints.
  • Here we simply set αk = μ.
  • That is, the intercept-only model is the special case where all the αks are equal.
  • This is clear for getting E[Yi] = μ from E[Yi] = αk.
  • What about ψG from ψT?
  • If we replace each element of α with μ, then α = 1tμ.
  • So ψT = XTα = XT1tμ = XGμ.
  • Now marginality is expressed in the relationship between XT and XG, as encapsulated in the following definition.

25
Marginality of models (in general)
  • Definition III.5 Let C(X) denote the column space of X.
  • For two models, ψ1 = X1θ1 and ψ2 = X2θ2, the first model is marginal to the second if C(X1) ⊆ C(X2), irrespective of the replication of the levels in the columns of the Xs.
  • That is, if the columns of X1 can always be written as linear combinations of the columns of X2.
  • We write ψ1 ≤ ψ2.
  • Note the marginality relationship is not symmetric; it is directional, like the less-than relation.
  • So while ψ1 ≤ ψ2, ψ2 is not marginal to ψ1 unless C(X1) = C(X2), in which case the models are equivalent.

26
Marginality of models for CRD
  • ψG is marginal to ψT, or ψG ≤ ψT, because C(XG) ⊆ C(XT),
  • in that an element from a row of XG is the sum of the elements in the corresponding row of XT,
  • and this will occur irrespective of the replication of the levels in the columns of XG and XT (see the sketch below).
  • So while ψG ≤ ψT, ψT is not marginal to ψG, as C(XT) is not contained in C(XG).
  • In geometrical terms, C(XT) is a three-dimensional space and C(XG) is a line, the equiangular line, that is a subspace of C(XT).
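A minimal sketch (not from the notes), reusing XT from the earlier sketch: the single column of XG is the sum of the columns of XT, so C(XG) ⊆ C(XT).

  XG <- matrix(1, nrow = nrow(XT), ncol = 1)
  all(rowSums(XT) == XG)      # TRUE: XT %*% 1_t reproduces XG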

27
III.C Hypothesis testing using the ANOVA method
  • Are there significant differences between the
    treatment means?
  • This is equivalent to deciding which of our two
    expectation models best describes the data.
  • We now perform the hypothesis test to do this for
    the example.

28
Analysis of the rat example Example III.1 Rat
experiment (continued)
  • Step 1 Set up hypotheses
  • H0: αA = αB = αC = μ  (ψG = XGμ)
  • H1: not all population Diet means are equal  (ψD = XDα)
  • Set α = 0.05.

29
Example III.1 Rat experiment (continued)
  • Step 2 Calculate test statistic
  • From the table we can see that the (corrected) total variation amongst the 6 Rats is partitioned into 2 parts:
  • the variance of the differences between the diet means and
  • the left-over (residual) rat variation.
  • Step 3 Decide between hypotheses
  • As the probability of exceeding an F of 3.60 with ν1 = 2 and ν2 = 3 is 0.1595 > α = 0.05, there is not much evidence of a diet difference.
  • The expectation model that appears to provide an adequate description of the data is ψG = XGμ.

30
b) Sums of squares for the analysis of variance
  • From chapter I, Statistical inference, an SSq
  • is the SSq of the elements of a vector and
  • can be written as the product of the transpose of a column vector with the original column vector.
  • The estimators of the SSqs for the CRD ANOVA are the SSqs of the following vectors (cf. ch. I),

where Ds are n-vectors of deviations from Y
and Te is the n-vector of Treatment effects.
Definition III.6 An effect is a linear
combination of means with a set of effects
summing to zero.
31
SSqs as quadratic forms
  • We want to show that the estimators of all the SSqs can be written as Y′QY.
  • This is the product of 1×n, n×n and n×1 matrices, so it is 1×1, i.e. a scalar.
  • Definition III.7 A quadratic form in a vector Y is a scalar function of Y of the form Y′AY, where A is called the matrix of the quadratic form.

32
SSqs as quadratic forms (continued)
  • Firstly write each of these vectors as a matrix product:
  • that is, each of the individual vectors on which the sums of squares are based can be written as an M matrix times Y.
  • These M matrices are mean operators that are symmetric and idempotent: M′ = M and M² = M in all cases.

33
SSqs as quadratic forms (continued)
  • Then QU = In − MG, QT = MT − MG and QRes = In − MT.
  • Given the Ms are symmetric and idempotent, it is relatively straightforward to show that so are the three Q matrices.
  • It can also be shown that QU = QT + QRes.

34
SSqs as quadratic forms (continued)
  • Consequently we obtain the following expressions for the SSqs: Units SSq = Y′QUY, Treatments SSq = Y′QTY and Residual SSq = Y′QResY.

35
SSqs as quadratic forms (continued)
  • Theorem III.1 For a completely randomized design, the sums of squares in the analysis of variance for Units, Treatments and Residual are given by the quadratic forms Y′QUY, Y′QTY and Y′QResY, respectively.

Proof follows the argument given above.
36
Residual SSq by difference
  • That is, Residual SSq = Units SSq − Treatments SSq.

37
ANOVA table construction
  • As in regression, the Qs are orthogonal projection matrices.
  • QU orthogonally projects the data vector into the (n−1)-dimensional part of the n-dimensional data space that is orthogonal to the equiangular line.
  • QT orthogonally projects the data vector into the (t−1)-dimensional part of the t-dimensional Treatment space that is orthogonal to the equiangular line. (Here the Treatment space is the column space of XT.)
  • Finally, the matrix QRes = In − MT orthogonally projects the data vector into the (n−t)-dimensional Residual subspace.
  • That is, the Units space is divided into two orthogonal subspaces, the Treatments and Residual subspaces.

38
Geometric interpretation
  • Of course, the SSqs are just the squared lengths of these projected vectors.
  • Hence, by Pythagoras' theorem, the Treatments and Residual SSqs must sum to the Units SSq.

39
Example III.1 Rat experiment (continued)
  • The vectors for computing the SSqs are the projections of Y into the three subspaces.
  • The Total Rat deviations, Diet effects and Residual Rat deviations are projections into the Rats, Diets and Residual subspaces of dimension 5, 2 and 3, respectively.
  • Squared length of projection = SSq:
  • Rats SSq is Y′QRY = 0.34
  • Diets SSq is Y′QDY = 0.24
  • Residual SSq is 0.34 − 0.24 = 0.10 (see the sketch below).

Exercise III.3 is a similar example for you to try
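A minimal sketch (not from the notes; the individual liver weights are not reproduced in this transcript, so y holds illustrative values only, although with the actual data this code would return the 0.34, 0.24 and 0.10 quoted above):

  Diet <- factor(rep(c("A","B","C"), times = c(3,2,1)))    # systematic order
  n <- 6
  XG <- matrix(1, n, 1)
  XT <- model.matrix(~ Diet - 1)
  MG <- XG %*% solve(t(XG) %*% XG) %*% t(XG)
  MT <- XT %*% solve(t(XT) %*% XT) %*% t(XT)
  Q.Rats  <- diag(n) - MG        # Units (Rats) projector, QR
  Q.Diets <- MT - MG             # Treatments (Diets) projector, QD
  Q.Res   <- diag(n) - MT        # Residual projector
  y <- c(3.0, 3.1, 3.2, 3.2, 3.4, 2.7)                     # illustrative values only
  c(Rats = t(y) %*% Q.Rats %*% y, Diets = t(y) %*% Q.Diets %*% y,
    Residual = t(y) %*% Q.Res %*% y)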
40
c) Expected mean squares
  • We have an ANOVA in which we use F (= ratio of MSqs) to decide between models.
  • But why is this ratio appropriate?
  • One way of answering this question is to look at what the MSqs measure.
  • We use the expected values of the MSqs, i.e. the EMSqs, to do this.

41
Expected mean squares (contd)
  • Remember: expected value = population mean.
  • We need the EMSqs
  • under the maximal model
  • and the minimal model.
  • This is similar to asking what is E[Yi]?
  • We know the answer is E[Yi] = αk,
  • i.e. in the population, under the model, the average value of Yi is αk.
  • So for Treatments, what is the EMSq?
  • The EMSqs are the mean values of the MSqs in populations described by the model for which they are derived,
  • i.e. an EMSq is the true mean value;
  • it depends on the model parameters.

42
Expected mean squares (contd)
  • We need the EMSqs
  • under the maximal model
  • and the minimal model.
  • The EMSqs are the mean values of the MSqs in populations described by the model for which they are derived,
  • i.e. an EMSq is the true mean value and it depends on the model parameters.
  • This is analogous to saying that the expected value of the Treatment mean vector in populations described by the model is ψ.
  • That is,

43
Expected mean squares (contd)
  • We need the EMSqs
  • under the maximal model
  • and the minimal model.
  • The EMSqs are the mean values of the MSqs in populations described by the model for which they are derived.
  • This is analogous to saying that the expected value of the Treatment mean vector in populations described by the model is ψ.
  • Put another way, what is the mean of the sampling distribution of
  • the Treatment mean vector?
  • the Treatment mean square?
  • (Note that both the mean vector and the mean square are functions of Y and so are random variables.)
  • The answer is easy for the treatment mean vector: it is ψ.
  • But what about the Treatment mean square? Answer: its EMSq.

44
EMSqs under the maximal model
  • So if we had the complete populations for all Treatments and computed the MSqs, the value of
  • the Residual MSq would equal σ²;
  • the Treatment MSq would equal σ² + qT(ψ).
  • So the population average value of both MSqs involves σ², the uncontrolled variation amongst units from the same treatment.
  • But what about qT(ψ) in the Treatments EMSq?

45
The qT(ψ) function
  • The subscript T indicates the Q matrix on which the function is based,
  • but there is no subscript on the ψ in qT(ψ),
  • because we will determine expressions for it under both the maximal (ψT) and minimal (ψG) models.
  • That is, the ψ in qT(ψ) will vary.
  • The numerator is the same as the SSq except that it is a quadratic form in ψ instead of Y.
  • To see what this means we want expressions in terms of the individual parameters.
  • We will show what qT(ψ) is under the maximal model (ψT)
  • and that under the minimal model (ψG) it is zero.

46
Example III.1 Rat experiment (continued)
  • The latter is just the mean of the elements of ψT.
  • Actually, the quadratic form is the SSq of the elements of the vector QTψT.

When will the SSq be zero?
47
The qT(ψ) function
  • Now we want to prove the following result.
  • As QT is symmetric and idempotent,
  • ψ′QTψ = (QTψ)′QTψ,
  • so qT(ψ) is the SSq of QTψ, divided by (t−1).
  • QTψ = (MT − MG)ψ = MTψ − MGψ.
  • MGψ replaces each element of ψ with the grand mean of the elements of ψ.
  • MTψ replaces each element of ψ with the mean of the elements of ψ that received the same treatment as the element being replaced.

48
The qT(ψ) function (continued)
  • Under the maximal model (ψT), qT(ψT) is a function of the αs (see the following slides and the sketch below).
  • Under the minimal model (ψG = μ1n):
  • MGψG = MTψG = ψG, so ψG′QTψG = 0 and qT(ψG) = 0.
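A minimal sketch (not from the notes), reusing Q.Diets (the QT matrix for the rat example) and Diet from the earlier sketch, with hypothetical αs: qT(ψ) is positive under the maximal model unless the αs are all equal, and is exactly zero under the minimal model.

  alpha <- c(A = 3.0, B = 3.3, C = 2.7)                # hypothetical population diet means
  psi.T <- alpha[Diet]                                  # maximal model: E[Y_i] = alpha_k
  psi.G <- rep(mean(alpha), length(Diet))               # minimal model: all means equal
  c(maximal = sum((Q.Diets %*% psi.T)^2) / (3 - 1),     # q_T(psi_T) > 0
    minimal = sum((Q.Diets %*% psi.G)^2) / (3 - 1))     # q_T(psi_G) = 0 (up to rounding)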

49
Example III.1 Rat experiment (continued)
50
How qT(ψT) depends on the αs
  • qT(ψT) is a quadratic form and is basically a sum of squares, so it must be nonnegative.
  • Indeed the magnitude of qT(ψT) depends on the size of the differences between the population treatment means, the αks:
  • if all the αks are similar they will be close to their mean,
  • whereas if they are widely scattered several will be some distance from their mean.

51
EMsqs in terms of parameters
  • We could compute the population mean of a MSq if we knew the αks and σ².
  • The Treatment MSq will on average be greater than the Residual MSq,
  • as it is influenced by both uncontrolled variation and the magnitude of treatment differences.
  • The quadratic form qT(ψ) will only be zero when all the αs are equal, that is, when the null hypothesis is true.
  • Then the EMSqs under the minimal model are equal, so that the F value will be approximately one.
  • This is not surprising if you think about a particular experiment.

52
Example III.1 Rat experiment (continued)
  • So what can potentially contribute to the
    difference in the observed means of 3.1 and 2.7
    for diets A and C?
  • Answer:
  • obviously, the different diets;
  • not so obvious: differences arising from uncontrolled variation also contribute, as 2 different groups of rats are involved.
  • This is then reflected in the EMSq in that it involves σ² and the "variance" of the 3 effects.

53
Justification of F-test
  • Thus, the F test involves asking the question "Is the variance in the sample treatment means greater than can be expected from uncontrolled variation alone?".
  • If the variance is no greater, we conclude qT(ψ) = 0:
  • the minimal model is the correct model, since the expected Treatment MSq under this model is just σ².
  • Otherwise, if the variance is greater, qT(ψ) is nonzero:
  • the maximal model is required to describe the data.
  • Similar argument to examining dotplots for
    Example II.2, Paper bag experiment.

54
d) Summary of the hypothesis test
  • see notes

55
d) Summary of the hypothesis test
  • Summarize the ANOVA-based hypothesis test for a
    CRD involving t treatments and a total of n
    observed units.
  • Step 1 Set up hypotheses
  • H0: α1 = α2 = ... = αt = μ  (ψG = XGμ)
  • H1: not all population Treatment means are equal  (ψT = XTα)
  • Set α.

56
Summary (continued)
Step 2 Calculate test statistic
  • Note that the EMSqs under the maximal model are included in this table, it being recognized that when H0 is true qT(ψ) = 0.
  • Step 3 Decide between hypotheses
  • Determine the probability of the observed F value with ν1 = numerator d.f. = t−1 and ν2 = denominator d.f. = n−t (see the sketch below).
  • If this probability is less than the significance level α,

then the evidence suggests that the null
hypothesis be rejected and the alternative model
be used to describe the data.
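A minimal sketch: the probability in Step 3 can be computed directly in R; for the rat example (F = 3.60 on 2 and 3 d.f.) it reproduces the 0.1595 quoted earlier.

  pf(3.6, df1 = 2, df2 = 3, lower.tail = FALSE)   # 0.1595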
57
e) Comparison with traditional one-way ANOVA
  • Our ANOVA table is essentially the same as the traditional table:
  • the values of the F statistic from each table are exactly the same.
  • The labelling differs, and the Total would normally be placed at the bottom of the table, not at the top.
  • The difference is symbolic:
  • the Units term explicitly represents a source of uncontrolled variation, differences between Units.
  • Our table exhibits the confounding in the experiment.
  • Indenting Treatments under Units signifies that treatment differences are confounded, or mixed up, with unit differences.

58
f) Computation of the ANOVA in R
  • Begin with data entry (see Appendix A, Introduction to R, and Appendix C, Analysis of designed experiments in R).
  • Next, an initial graphical exploration using boxplots; we defer this to a second example with more data.
  • ANOVA: while the function lm could be used, the function aov is preferred for analysing data from a designed experiment.
  • Both use a model formula of the form
  • Response variable ~ explanatory variables (and operators).
  • So far the expressions on the right are fairly simple: one or two explanatory variables separated by a "+".
  • There is a subtlety with the analysis of designed experiments:
  • if an explanatory variable is numeric, such as a numeric vector, then R fits just one coefficient for it;
  • for a single explanatory variable, a straight-line relationship is fitted;
  • if an explanatory variable is categorical, such as a factor, a coefficient is fitted for each level of the variable; indicator variables are used.
  • In analysing a CRD it is important that the treatment factor is stored in a factor object, signalling R to use indicator variables (see the sketch below).
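A minimal sketch (assuming the data frame CRDRat.dat used below): check that the treatment variable really is a factor before calling aov.

  is.factor(CRDRat.dat$Diet)                 # should be TRUE
  # if Diet had been read in as character or numeric, convert it:
  CRDRat.dat$Diet <- factor(CRDRat.dat$Diet)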

59
f) Computation of the ANOVA in R (continued)
  • There are two ways in which this analysis can be obtained using the aov function:
  • without and with an Error function in the model formula.
  • The Error function is used in the model formula to specify a model for the Error in the experiment (a model for uncontrolled variation).
  • The summary function is used and this produces the ANOVA table.
  • The model.tables function is used to obtain tables of means.

60
Example III.1 Rat experiment (continued)
  • The following commands perform the two analyses of the data.
  • # AOV without Error
  • Rat.NoError.aov <- aov(LiverWt ~ Diet, CRDRat.dat)
  • summary(Rat.NoError.aov)
  • # AOV with Error
  • Rat.aov <- aov(LiverWt ~ Diet + Error(Rat), CRDRat.dat)
  • summary(Rat.aov)
  • model.tables(Rat.aov, type = "means")

61
Output
> # AOV without Error
> Rat.NoError.aov <- aov(LiverWt ~ Diet, CRDRat.dat)
> summary(Rat.NoError.aov)
          Df   Sum Sq  Mean Sq F value Pr(>F)
Diet       2 0.240000 0.120000     3.6 0.1595
Residuals  3 0.100000 0.033333

> # AOV with Error
> Rat.aov <- aov(LiverWt ~ Diet + Error(Rat), CRDRat.dat)
> summary(Rat.aov)

Error: Rat
          Df   Sum Sq  Mean Sq F value Pr(>F)
Diet       2 0.240000 0.120000     3.6 0.1595
Residuals  3 0.100000 0.033333
  • The analysis without the Error function parallels the traditional analysis, and that with Error is similar to our table.
  • In this course we will use the Error function.

62
Output (continued)
> model.tables(Rat.aov, type = "means")
Tables of means
Grand mean
3.1

 Diet
      A   B   C
    3.1 3.3 2.7
rep 3.0 2.0 1.0

63
III.D Diagnostic checking
  • We have assumed the following model:
  • Y is distributed N(ψ, σ²In),
  • where ψ = E[Y] = Xθ and
  • σ² is the variance of an observation.
  • The maximal model is used in diagnostic checking:
  • ψT = E[Y] = XTα.
  • For this model to be appropriate requires that
  • the response is operating additively, that is, the individual deviations in the response to a treatment are similar;
  • the sets of units assigned to the treatments are comparable, in that the amount of uncontrolled variation exhibited by them is the same for each treatment;
  • each observation is independent of the other observations; and
  • the response of the units is normally distributed.

64
Example III.2 Caffeine effects on students
  • The effect of orally ingested caffeine on a physical task was investigated (Draper and Smith, 1981, sec. 9.1).
  • Thirty healthy male college students were selected and trained in finger tapping.
  • Ten men were randomly assigned to each of three doses of caffeine (0, 100 or 200 mg).
  • The number of finger taps after ingesting the caffeine was recorded for each student.

65
Entering the data
  • Setting up a data frame, from scratch, for data arranged in standard order (see App. C) requires:
  • a factor Students with values 1:30,
  • a factor Dose with levels 0, 100 and 200 and values depending on whether the data are entered by rows or columns (can use the rep function),
  • a numeric vector Taps with the 30 observed values of the response variable.

66
Entering the data (cont'd)
> CRDCaff.dat
   Students Dose Taps
1         1    0  242
2         2  100  248
3         3  200  246
4         4    0  245
5         5  100  246
6         6  200  248
7         7    0  244
8         8  100  245
9         9  200  250
10       10    0  248
11       11  100  247
12       12  200  252
13       13    0  247
14       14  100  248
15       15  200  248
16       16    0  248
17       17  100  250
18       18  200  250
19       19    0  242
20       20  100  247
21       21  200  246
22       22    0  244
23       23  100  246
24       24  200  248
25       25    0  246
26       26  100  243
27       27  200  245
28       28    0  242
29       29  100  244
30       30  200  250
  • # set up data.frame with factors Students and Dose, then add the response variable Taps
  • CRDCaff.dat <- data.frame(Students = factor(1:30),
  •                           Dose = factor(rep(c(0,100,200), times = 10)))
  • CRDCaff.dat$Taps <- c(242,248,246,245,246,248,244,245,250,248,247,252,247,248,248,
  •                       248,250,250,242,247,246,244,246,248,246,243,245,242,244,250)
  • CRDCaff.dat

67
Boxplots for each level of Dose
  • Use the function
  • boxplot(split(Taps, Dose), xlab = "Dose", ylab = "Number of taps")
  • The average number of taps increases as the dose increases.
  • There is some evidence that dose 0 is more variable than dose 100.

68
Analysis of variance for this data
> Caffeine.aov <- aov(Taps ~ Dose + Error(Students), CRDCaff.dat)
> summary(Caffeine.aov)

Error: Students
          Df  Sum Sq Mean Sq F value   Pr(>F)
Dose       2  61.400  30.700  6.1812 0.006163
Residuals 27 134.100   4.967

> model.tables(Caffeine.aov, type = "means")
Tables of means
Grand mean
246.5

 Dose
Dose
    0   100   200
244.8 246.4 248.3
69
The hypothesis test
  • Step 1 Set up hypotheses
  • H0: α0 = α100 = α200 = μ  (ψG = XGμ)
  • H1: not all population Dose means are equal  (ψD = XDα)
  • Set α = 0.05.

70
The hypothesis test (continued)
  • Step 2 Calculate test statistic
  • The ANOVA table for the example is given above.

Step 3 Decide between hypotheses
P(F2,27 ≥ 6.18) = p = 0.006 < α = 0.05.
The evidence suggests there is a dose difference
and that the expectation model that best describes
the data is ψD = XDα.
71
Examination of the residuals, eT
  • Use the Residuals-versus-fitted-values plot and the Normal Probability plot.
  • In interpreting these plots:
  • note ANOVA is robust to variance heterogeneity, if treatments are equally replicated, and to moderate departures from normality;
  • most commonly one finds an unusually large or small residual;
  • the cause of such extreme values requires investigation;
  • it may be a recording mistake or a catastrophe for a unit that can be identified;
  • but it may be valid and the result of some unanticipated, but important, effect.

72
The Normal Probability plot
  • Should show a broadly straight line trend.




73
The Residuals-versus-fitted-values plot
  • Generally, the points on scatter diagram should
    be spread across plot evenly.

 


[plot: no particular pattern]
74
Problem plots




[plot: variance increases as level increases]

[plot: systematic trend in residuals]

[plot: variance peaks in middle]
Actually, for the CRD, this plot has a vertical
scatter of points for each treatment; each scatter
should be centred on zero and of the same width.
75
R functions used in producing these plots
  • resid.errors: extracts the residuals from an aov object when the Error function is used.
  • fitted.errors: extracts the fitted values from an aov object when the Error function is used.
  • plot: plots the residuals against the fitted values.
  • qqnorm: plots the residuals against the normal quantiles.
  • qqline: adds a line to the plot produced by qqnorm.
  • The first 2 functions are nonstandard functions from the dae package.

76
Example III.2 Caffeine effects on students
(continued)
  • A violation of the assumptions would occur if all the students were in the same room and the presence of other students caused anxiety in just the students that had no caffeine.
  • The responses of the students would then not be independent.
  • It may be that the inhibition of this group resulted in less variation in their response, which would be manifest in the plot.
  • Another situation that would lead to an unacceptable pattern in the plot is if the effect becomes more variable as the level of the response variable increases.
  • For example, caffeine increases the tapping, but at higher levels the variability of the increase from student to student is greater.
  • That is, there is a lack of additivity in the response.

77
R output for getting the plots
> res <- resid.errors(Caffeine.aov)
> fit <- fitted.errors(Caffeine.aov)
> data.frame(Students, Dose, Taps, res, fit)
   Students Dose Taps  res   fit
1         1    0  242 -2.8 244.8
2         2  100  248  1.6 246.4
3         3  200  246 -2.3 248.3
4         4    0  245  0.2 244.8
5         5  100  246 -0.4 246.4
6         6  200  248 -0.3 248.3
7         7    0  244 -0.8 244.8
8         8  100  245 -1.4 246.4
9         9  200  250  1.7 248.3
10       10    0  248  3.2 244.8
11       11  100  247  0.6 246.4
12       12  200  252  3.7 248.3
13       13    0  247  2.2 244.8
14       14  100  248  1.6 246.4
15       15  200  248 -0.3 248.3
16       16    0  248  3.2 244.8
17       17  100  250  3.6 246.4
18       18  200  250  1.7 248.3
19       19    0  242 -2.8 244.8
20       20  100  247  0.6 246.4
  • Note the use of data.frame to produce a printed
    list of the original data with the residuals and
    fitted values.

78
Plots for the example
21       21  200  246 -2.3 248.3
22       22    0  244 -0.8 244.8
23       23  100  246 -0.4 246.4
24       24  200  248 -0.3 248.3
25       25    0  246  1.2 244.8
26       26  100  243 -3.4 246.4
27       27  200  245 -3.3 248.3
28       28    0  242 -2.8 244.8
29       29  100  244 -2.4 246.4
30       30  200  250  1.7 248.3
> plot(fit, res, pch = 16)
> qqnorm(res, pch = 16)
> qqline(res)
  • The Residuals-versus-fitted-values plot appears
    to be fine.

79
Normal Probability plot
  • The plot displays some curvature at the ends.
  • This indicates the data are heavier in the tails and flatter in the peak than expected for a normal distribution.
  • Given that normality is not crucial and only a few observations are involved, we use the analysis we have performed.

80
The hypothesis test Summary
  • The ANOVA table for the example is given above.

P(F2,27 ≥ 6.18) = p = 0.006 < α = 0.05.
The evidence suggests there is a dose difference
and that the expectation model that best describes
the data is ψD = XDα. Diagnostic checking
indicates the model is OK.
81
III.E Treatment differences
  • So far all that our analysis has accomplished is
    that we have decided whether or not there appears
    to be a difference between the population
    treatment means.
  • Of greater interest to the researcher is how the
    treatment means differ.
  • Two alternatives available
  • Multiple comparisons procedures
  • used when the treatment factors are all
    qualitative so that it is appropriate to test for
    differences between treatments
  • Fitting submodels
  • When one (or more) of the treatment factors is
    quantitative the fitting of smooth curves to the
    trend in the means is likely to lead to a more
    appropriate and concise description of the
    effects of the factors.
  • Often, for reasons explained in chapter II, a
    low order polynomial will provide an adequate
    description of the trend.

82
Note
  • Multiple comparison procedures should not be used
    when the test for treatment differences is not
    significant.
  • Submodels should be fitted irrespective of
    whether the overall test for treatment
    differences is significant.
  • The difference in usage has to do with one being
    concerned with mean differences and the other
    with deciding between models.

83
a) Multiple comparisons procedures for comparing
all treatments
  • Multiple comparisons for all treatments = MCA procedures.
  • In general MCA procedures divide into:
  • those based on family-wise error rates: the Type I error rate is specified and controlled for over all comparisons, often at 0.05;
  • those based on comparison-wise error rates: the Type I error rate is specified and controlled for each comparison.
  • The problem with the latter is that the probability of an incorrect conclusion gets very high as the number of comparisons increases.
  • For a comparison-wise error rate of 0.05, the family-wise error rate grows quickly with the number of comparisons (see the sketch below).
  • So we recommend using an MCA procedure based on family-wise error rates; we use just Tukey's HSD procedure.
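A minimal sketch (treating the comparisons as independent, which is only approximately true, so the figures are indicative only): the family-wise error rate when c comparisons are each made at the 0.05 level.

  c.comp <- c(1, 3, 10, 45)
  round(1 - 0.95^c.comp, 2)    # 0.05 0.14 0.40 0.90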

84
Tukey's Honestly Significant Difference procedure
  • determines if each pair of means is significantly different;
  • is based on a family-wise error rate;
  • is basically for equal numbers of observations for each mean.
  • A modification that is approximate will be provided for unequal numbers.

85
The procedure
  • Each application of the procedure is based on the hypotheses
  • H0: αA = αB
  • H1: αA ≠ αB
  • One calculates the statistic w(α), where

q(t, ν, α) is the studentized range with t = no. of
means, ν = Residual df and α = significance level,
the denominator is the standard error of the
difference, and rA, rB are the numbers of replicates
for each of a pair of means being compared.
86
Notes about replication
  • Note that, strictly speaking, rA and rB should be equal.
  • When rA and rB are unequal it is called the Tukey-Kramer procedure.
  • w(α) will then depend on which means are being compared.
  • If the treatments are all equally replicated with replication r, the formula for w(α) reduces to q(t, ν, α) × √(s²/r), where s² is the Residual MSq.

87
In R
  • aov has given the Residual MSq and df.
  • model.tables produces the tables of means.
  • qtukey computes the studentized range value as follows:
  • q <- qtukey(1 - a, t, n)

88
Example III.2 Caffeine effects on students
(continued)
  • We have already concluded that the evidence suggests there is a dose difference. But which doses are different?
  • Output with means and q:

> model.tables(Caffeine.aov, type = "means")
Tables of means
Grand mean
246.5

 Dose
Dose
    0   100   200
244.8 246.4 248.3

> q <- qtukey(0.95, 3, 27)
> q
[1] 3.506426

89
Decide on differences
  • Any two means that differ by 2.47 or more are significantly different.
  • Our conclusion is that the means for 0 and 200 are different, but that for 100 is somewhat intermediate.

90
b) Fitting submodels
  • For quantitative factors, like Dose, it is often better to examine the relationship between the response and the levels of the factor.
  • This is commonly done using polynomials.
  • Now, a polynomial of degree t−1 will fit t points exactly.
  • So a quadratic will fit the three means exactly.
  • In practice, polynomials of order 2 are often sufficient.
  • However, more than 3 points may be desirable so that deviations from the fitted curve, or lack of fit, can be tested.

91
Polynomial models
  • To investigate polynomial models up to order 2, the following models for the expectation are investigated,

where xk is the value of the kth level of the
treatment factor, μ is the intercept of the
fitted equation, γ1 is the slope of the
fitted equation and γ2 is the quadratic
coefficient of the fitted equation.
The X1 and X2 matrices are made up of columns that
consist of the values of the levels of the factor
and their powers, not the indicator variables of
before.
92
Example III.2 Caffeine effects on students
(continued)
  • The X matrices for the example are
  • The columns of each X matrix in the above list are linear combinations of those of any of the X matrices to its right in the list.

93
Marginality of models
  • That is, the columns of each X matrix in the above list are linear combinations of those of any of the X matrices to its right in the list.
  • Marginality is not a symmetric relationship, in that
  • if a model is marginal to a second model,
  • the second model is not necessarily marginal to the first.
  • For example, E[Y] = X1γ1 is marginal to E[Y] = X2γ2 but not vice versa, except when t = 2.

94
Equivalence of E[Y] = X2γ2 and E[Y] = XDα
  • C(X2) = C(XD), as the 3 columns of one matrix can be written as 3 linearly independent combinations of the columns of the other matrix.
  • So the two models are marginal to each other and are equivalent (see the sketch below).
  • However, while the fitted values are the same, the estimates and interpretation of the parameters are different:
  • those corresponding to X2 are interpreted as the intercept, slope and curvature coefficient;
  • those corresponding to XD are interpreted as the expected (mean) value for that treatment.
  • Also, in spite of being marginal, the estimators of the same parameter differ depending on the model that has been fitted:
  • for example, the estimator of μ under the model E[Y] = XGμ differs from that under E[Y] = X1γ1

(the models are not orthogonal)
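A minimal sketch (with an assumed helper column D, not from the notes): the fitted values from the factor parametrization (XD) and from the quadratic in the dose values (X2) coincide.

  CRDCaff.dat$D <- as.numeric(as.character(CRDCaff.dat$Dose))
  fit.fac  <- lm(Taps ~ Dose, data = CRDCaff.dat)          # XD parametrization
  fit.quad <- lm(Taps ~ D + I(D^2), data = CRDCaff.dat)    # X2 parametrization
  all.equal(fitted(fit.fac), fitted(fit.quad))             # TRUE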
95
Hypothesis test incorporating submodels
  • The test statistics are computed in the ANOVA table.
  • Step 1 Set up hypotheses
  •   a) H0
  • H1
  •   (Differences between fitted model or
    Deviations from quadratic are zero)
  •  
  • b) H0
  • H1  
  • c) H0
  • H1  
  • Set α.

96
Hypothesis test (continued)
  • Step 2 Calculate test statistics
  • The ANOVA table for a CRD is

The test statistics correspond to the hypothesis pairs
in Step 1.
97
Hypothesis test (continued)
  • Step 3 Decide between hypotheses
  • Begin with the first hypothesis pair, determine its significance and continue down the sequence until a significant result is obtained.
  • A significant Deviations F indicates that the linear and quadratic terms provide an inadequate description.
  • Thus a model based on them would be unsatisfactory; there is no point in continuing.
  • If the Deviations F is not significant, then a significant Quadratic F indicates that a 2nd degree polynomial is required to adequately describe the trend.
  • As a linear coefficient is necessarily incorporated in a 2nd degree polynomial, there is no point in further testing in this case.
  • If both the Deviations and Quadratic F's are not significant, then a significant Linear F indicates a linear relationship describes the trend in the treatment means.

98
What if deviations is significant?
  • If the Deviations are significant, then two
    options are
  • increase the degree of the polynomial (often not
    desirable), or
  • use multiple comparisons to investigate the
    differences between the means.

99
Fitting polynomials in R
  • You need to realize that the default contrasts for ordered (factor) objects are polynomial contrasts, assuming equally-spaced levels:
  • linear transformations of all columns of X, except the first, such that each column
  • is orthogonal to all other columns in X and
  • has SSq equal to 1.
  • This facilitates the computations.
  • So, if a factor is quantitative:
  • set it up as an ordered object from the start;
  • if you did not make it an ordered object, then redefine the factor as ordered.

100
Example III.2 Caffeine effects on students
(continued)
  • R output for setting up an ordered (factor) to fit polynomials:

> Dose.lev <- c(0,100,200)
> CRDCaff.dat$Dose <- ordered(CRDCaff.dat$Dose, levels = Dose.lev)
> contrasts(CRDCaff.dat$Dose) <- contr.poly(3, scores = Dose.lev)   # t = 3 levels
> contrasts(CRDCaff.dat$Dose)
               .L         .Q
0   -7.071068e-01  0.4082483
100 -9.073264e-17 -0.8164966
200  7.071068e-01  0.4082483

101
R output for fitting polynomial submodels
> Caffeine.aov <- aov(Taps ~ Dose + Error(Students), CRDCaff.dat)
> summary(Caffeine.aov, split = list(Dose = list(L = 1, Q = 2)))

Error: Students
           Df  Sum Sq Mean Sq F value   Pr(>F)
Dose        2  61.400  30.700  6.1812 0.006163
  Dose: L   1  61.250  61.250 12.3322 0.001585
  Dose: Q   1   0.150   0.150  0.0302 0.863331
Residuals  27 134.100   4.967
  • Note that 3 treatments will follow a quadratic exactly; this is reflected in C(X2) = C(XD).
  • Thus, the Deviations line is redundant in this example.
  • When required, the Deviations line is computed by assigning the extra powers to a single line named, say, Dev in the split function (e.g. Dev = 3:6; see notes).

102
Hypothesis test incorporating submodels
  • Step 1 Set up hypotheses
  •  
  • a) H0
  • H1  
  • b) H0
  • H1  
  • Set α = 0.05.

103
Hypothesis test (continued)
  • Step 2 Calculate test statistics
> Caffeine.aov <- aov(Taps ~ Dose + Error(Students), CRDCaff.dat)
> summary(Caffeine.aov, split = list(Dose = list(L = 1, Q = 2)))

Error: Students
           Df  Sum Sq Mean Sq F value   Pr(>F)
Dose        2  61.400  30.700  6.1812 0.006163
  Dose: L   1  61.250  61.250 12.3322 0.001585
  Dose: Q   1   0.150   0.150  0.0302 0.863331
Residuals  27 134.100   4.967

104
Hypothesis test (continued)
  • Step 3 Decide between hypotheses
  • The Quadratic source has a probability of 0.863 > 0.05 and so H0 is not rejected in this case.
  • The Linear source has a probability of 0.002 < 0.05 and so H0 is rejected in this case.
  • It is clear the quadratic term is not significant but the linear term is highly significant, so the appropriate model for the expectation is the linear model ψ = X1γ1, where γ1 consists of the intercept μ and the slope γ1.

105
Fitted equation
  • Coefficients can be obtained using the coef function on the aov object, but these are not suitable for obtaining the fitted values.
  • The fitted equation is obtained by putting the values of the levels into a numeric vector and using the lm function to fit a polynomial of the order indicated by the hypothesis test.
  • For the example, the linear equation was adequate and so the analysis is redone with 1 for the order of the polynomial.

106
Fitted equation for the example
  • > D <- as.vector(Dose)
  • > D <- as.numeric(D)
  • > Caffeine.lm <- lm(Taps ~ D)
  • > coef(Caffeine.lm)
  • (Intercept)           D
  •    244.7500      0.0175
  • The fitted equation is Y = 244.75 + 0.0175 X, where X is the dose of caffeine in mg.
  • The slope of this equation is 0.0175.
  • That is, taps increase by 0.0175 × 100 = 1.75 with each 100 mg of caffeine.
  • This conclusion seems a more satisfactory summary of the results than that the response at 200 is significantly greater than at 0, with 100 being intermediate.
  • The commands to fit a quadratic would be
  • D2 <- D*D
  • Caffeine.lm <- lm(Taps ~ D + D2)

107
Plotting means and fitted line
  • Details are in the notes
  • The plot produced is as follows; a sketch of possible plotting commands is given below.
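A minimal sketch (assumed plotting code, not the notes' version): plot the three dose means and overlay the fitted line using the coefficients obtained above.

  Dose.means <- with(CRDCaff.dat, tapply(Taps, Dose, mean))    # 244.8 246.4 248.3
  plot(c(0, 100, 200), Dose.means, xlab = "Dose", ylab = "Number of taps")
  abline(a = 244.75, b = 0.0175)       # fitted line from the previous slide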

108
c) Comparison of treatment parametrizations
  • There are two alternative bases.
  • They lead to different parameter estimates with different interpretations and different partitions of the treatment SSq.
  • The total treatment SSq and the fitted values for treatments remain the same, while the contrasts span the treatment space.
  • That is, in this case, SSqT = SSqL + SSqQ.

109
III.G Exercises
  • Ex. III.1 and III.2 look at aspects of quadratic forms.
  • Ex. III.3 investigates the calculations, with an example that can be done with a calculator.
  • Ex. III.4 involves producing a randomized layout.
  • Ex. III.5 asks for the complete analysis of a CRD with a qualitative treatment factor.
  • Ex. III.6 asks for the complete analysis of a CRD with a quantitative treatment factor.