Analysis of Variance

Lecture 11, April 26, 2011
A. Introduction
When you have more than two groups, a t-test (or its nonparametric equivalent) is no longer applicable. Instead, we use a technique called analysis of variance. This chapter covers analysis of variance designs with one or more independent variables, as well as more advanced topics such as interpreting significant interactions and handling unbalanced designs.

B. One-Way Analysis of Variance
The method used today for comparisons of three or more groups is called analysis of variance (ANOVA). This method has the advantage of testing whether there are any differences between the groups with a single probability associated with the test. The hypothesis tested is that all groups have the same mean. Before we present an example, note that several assumptions should be met before an analysis of variance is used: the groups must be independent (unless a repeated-measures design is used); the sampling distributions of the sample means must be normally distributed; and the groups should come from populations with equal variances (called homogeneity of variance). The last assumption can be checked in SAS; see the sketch after the ANOVA code below. Example:

Fifteen subjects in three treatment groups, X, Y, and Z (five per group):

  X    Y    Z
700  480  500
850  460  550
820  500  480
640  570  600
920  580  610

The null hypothesis is that mean(X) = mean(Y) = mean(Z). The alternative hypothesis is that the means are not all equal. How do we know whether the means obtained differ because of differences in the reading programs (X, Y, Z) or because of random sampling error? By chance, the five subjects we chose for group X might be faster readers than those chosen for groups Y and Z. We might now ask, "What causes scores to vary from the grand mean?" In this example, there are two possible sources of variation: the first is the training method (X, Y, or Z); the second is that individuals differ from one another. These sources are captured by three sums of squares:

SUM OF SQUARES total = sum over all scores of (score - grand mean)^2
SUM OF SQUARES between groups = sum over the groups of n x (group mean - grand mean)^2, where n is the group size
SUM OF SQUARES error (within groups) = sum of (score - group mean)^2 within each group, added up over the groups

and SS total = SS between groups + SS error.

F ratio = MEAN SQUARE between groups / MEAN SQUARE error
= (SS between groups/(k-1)) / (SS error/(N-k)),
where k is the number of groups (here 3) and N is the total number of observations (here 15).
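
To make the formulas concrete, here is the arithmetic worked out by hand for the reading data; the totals can be checked against the SAS output below.

mean(X) = 786, mean(Y) = 518, mean(Z) = 548; grand mean = 9260/15 = 617.33

SS between groups = 5 x [(786 - 617.33)^2 + (518 - 617.33)^2 + (548 - 617.33)^2] = 215613.33
SS error = 51920 (within X) + 11680 (within Y) + 13480 (within Z) = 77080
MEAN SQUARE between groups = 215613.33 / (3-1) = 107806.67
MEAN SQUARE error = 77080 / (15-3) = 6423.33
F ratio = 107806.67 / 6423.33 = 16.78, with 2 and 12 degrees of freedom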
SAS code:
DATA READING;
* GROUP identifies the treatment (X, Y, or Z) and WORDS is the reading score;
* the trailing @@ lets SAS read several observations from one input line;
INPUT GROUP $ WORDS @@;
DATALINES;
X 700 X 850 X 820 X 640 X 920
Y 480 Y 460 Y 500 Y 570 Y 580
Z 500 Z 550 Z 480 Z 600 Z 610
;
PROC ANOVA DATA=READING;
TITLE 'ANALYSIS OF READING DATA';
CLASS GROUP;        * the classification (independent) variable;
MODEL WORDS=GROUP;  * WORDS is modeled as a function of GROUP;
MEANS GROUP;        * print the mean of WORDS for each group;
RUN;
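
As an aside, PROC ANOVA does not itself test the homogeneity-of-variance assumption mentioned at the start of this section. A minimal sketch of one way to check it, using PROC GLM (whose MEANS statement accepts the HOVTEST option) with the Levene test:

PROC GLM DATA=READING;
CLASS GROUP;
MODEL WORDS=GROUP;
MEANS GROUP / HOVTEST=LEVENE;  * Levene test for equal variances across groups;
RUN;

A significant result here would suggest that the equal-variance assumption is questionable.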
The ANOVA Procedure
Dependent Variable: WORDS

                                Sum of
Source               DF        Squares    Mean Square    F Value    Pr > F
Model                 2    215613.3333    107806.6667      16.78    0.0003
Error                12     77080.0000      6423.3333
Corrected Total      14    292693.3333

Since the p-value (0.0003) is well below .05, we reject the null hypothesis: the reading methods differ. Now we want to know what the differences are. Is X better than Y or Z? Are the means of groups Y and Z so close that we cannot consider them different? In general, methods used to find group differences after the null hypothesis has been rejected are called post hoc, or multiple-comparison, tests. These include Duncan's multiple-range test, the Student-Newman-Keuls multiple-range test, the least-significant-difference test, Tukey's studentized range test, Scheffé's multiple-comparison procedure, and others. To request a post hoc test, place the SAS option name for the test you want after a slash (/) on the MEANS statement. The SAS names for the post hoc tests just listed are DUNCAN, SNK, LSD, TUKEY, and SCHEFFE, respectively. For our example we have:

MEANS GROUP / DUNCAN;
or
MEANS GROUP / SCHEFFE ALPHA=.1;
The ALPHA=.1 option sets the significance level for the comparisons to .10 (the default is .05).
At the far left of the Duncan output is a column labeled "Duncan Grouping." Any groups that are not significantly different from one another will have the same letter in the Grouping column.
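
For the reading data, the grouping table should look roughly like the sketch below (illustrative, not copied from actual SAS output): with group means of 786, 548, and 518 and a mean square error of 6423.33, X stands apart while Y and Z are too close to separate.

Means with the same letter are not significantly different.

Duncan Grouping          Mean      N    GROUP
              A        786.00      5    X
              B        548.00      5    Z
              B        518.00      5    Y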