statistics and data analysis
Analysis of Variance
Shaun Burke, RHM Technology Ltd, High Wycombe, Buckinghamshire, UK.

Statistical methods can be powerful tools for unlocking the information contained in analytical data. This second part in our statistics refresher series looks at one of the most frequently used of these tools: Analysis of Variance (ANOVA). In the previous paper we examined the initial steps in describing the structure of the data and explained a number of alternative significance tests (1). In particular, we showed that t-tests can be used to compare the results from two analytical methods or chemical processes. In this article, we will expand on the theme of significance testing by showing how ANOVA can be used to compare the results from more than two sets of data at the same time, and how it is particularly useful in analysing data from designed experiments.
With the advent of built-in spreadsheet functions and affordable dedicated statistical software packages, Analysis of Variance (ANOVA) has become relatively simple to carry out. This article will therefore concentrate on how to select the correct variant of the ANOVA method, the advantages of ANOVA, how to interpret the results and how to avoid some of the pitfalls. For those wanting more detailed theory than is given in the following section, several texts are available (2–5).

A bit of ANOVA theory

Whenever we make repeated measurements there is always some variation. Sometimes this variation (known as within-group variation) makes it difficult for analysts to see whether there have been significant changes between different groups of replicates. For example, in Figure 1 (which shows the results from four replicate analyses by 12 analysts), we can see that the total variation is a combination of the spread of results within groups and the spread between the mean values (between-group variation). The statistic that measures the within- and between-group variation in ANOVA is called the sum of squares, which often appears in output tables abbreviated as SS. It can be shown that the different sums of squares calculated in ANOVA are equivalent to variances (1). The central tenet of ANOVA is that the total SS in an experiment can be divided into the component caused by random error, given by the within-group (or sample) SS, and the components resulting from differences between means. It is these latter components that are used to test for statistical significance using a simple F-test (1).

Why not use multiple t-tests instead of ANOVA?

Why should we use ANOVA in preference to carrying out a series of t-tests? I think this is best explained by using an example: suppose we want to compare the results from 12 analysts taking part in a training exercise. If we were to use t-tests, we would need to calculate 66 t-values, one for each possible pair of analysts. Not only is this a lot of work, but the chance of reaching a wrong conclusion increases with each additional test. The correct way to analyse this sort of data is to use one-way ANOVA.

One-way ANOVA

One-way ANOVA answers the question: is there a significant difference between the mean values (or levels), given that each mean is calculated from a number of replicate observations? 'Significant' here means that the observed spread of means would not normally arise from the chance variation within groups. We have already seen an example of this type of problem in the form of the data contained in Figure 1, which shows the results from 12 different analysts analysing the same material. Using these data and a spreadsheet, the results obtained from carrying out one-way ANOVA are reported in Example 1. In this example, the ANOVA shows there are significant differences between analysts (F > Fcrit at the 95% confidence level). This result is obvious from a plot of the data (Figure 1) but in many situations a visual inspection of a plot will not give such a clear-cut result. Notice that the output also includes a 'p-value'...
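For readers who prefer to see the calculation spelled out rather than hidden in a spreadsheet function, the sketch below works through a one-way ANOVA by hand: it computes the between-group and within-group sums of squares, the mean squares, the F-value, Fcrit and the p-value. The data are hypothetical (four analysts with four replicates each, not the actual values behind Figure 1 or Example 1), and the result is cross-checked against SciPy's `f_oneway`.

```python
# One-way ANOVA from first principles (hypothetical data, not the
# Figure 1 values). Assumes NumPy and SciPy are installed.
import numpy as np
from scipy import stats

# Four replicate results per analyst; analyst C is deliberately offset
# so that the between-group variation dominates the within-group scatter.
groups = [
    [10.1, 10.3, 10.2, 10.4],  # analyst A
    [10.0, 10.2, 10.1, 10.3],  # analyst B
    [10.8, 10.9, 11.0, 10.7],  # analyst C (offset mean)
    [10.2, 10.1, 10.3, 10.2],  # analyst D
]

data = np.array(groups)
k, n = data.shape              # k groups, n replicates per group
grand_mean = data.mean()

# Between-group SS: spread of the group means about the grand mean
ss_between = n * ((data.mean(axis=1) - grand_mean) ** 2).sum()
# Within-group SS: spread of replicates about their own group mean
ss_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()

# Mean squares = sums of squares divided by their degrees of freedom
ms_between = ss_between / (k - 1)
ms_within = ss_within / (k * (n - 1))

# F-test: compare the two variance estimates
F = ms_between / ms_within
p = stats.f.sf(F, k - 1, k * (n - 1))          # p-value
f_crit = stats.f.ppf(0.95, k - 1, k * (n - 1))  # Fcrit at 95% confidence

print(f"F = {F:.2f}, Fcrit(95%) = {f_crit:.2f}, p = {p:.5f}")

# scipy.stats.f_oneway performs the same calculation in one call
F2, p2 = stats.f_oneway(*groups)
```

Here F comfortably exceeds Fcrit, so the difference between analysts is significant at the 95% level; if all four group means were similar, F would fall near 1 and the p-value would be large.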