The standard deviation is a popular measure of variability. It is used both as a separate entity and as a part of other analyses, such as computing confidence intervals and in hypothesis testing. The standard deviation is the square root of the variance. The population standard deviation is denoted by σ. Like the variance, the standard deviation utilizes the sum of the squared deviations about the mean (SSx). It is computed by averaging these squared deviations (SSx/N) and taking the square root of that average. One feature of the standard deviation that distinguishes it from the variance is that the standard deviation is expressed in the same units as the raw data, whereas the variance is expressed in those units squared. The meaning of the standard deviation is more readily understood from its use. Although the standard deviation and the variance are closely related and can be computed from each other, differentiating between them is important, because both are widely used in statistics.

What is a standard deviation? What does it do, and what does it mean? The most precise way to define the standard deviation is by reciting the formula used to compute it. However, insight into the concept can be gleaned by viewing the manner in which it is applied. Two ways of applying the standard deviation are the empirical rule and Chebyshev's theorem.
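The computation described above, taking the square root of the averaged squared deviations (SSx/N), can be sketched in a few lines of Python. The function name and the sample values are illustrative, not taken from the text.

```python
import math

def population_std_dev(data):
    """Population standard deviation: the square root of SSx/N,
    where SSx is the sum of squared deviations about the mean."""
    n = len(data)
    mean = sum(data) / n
    ss_x = sum((x - mean) ** 2 for x in data)  # SSx
    variance = ss_x / n                        # sigma^2, in squared units
    return math.sqrt(variance)                 # sigma, in the original units

# Illustrative data: mean is 13, SSx = 130, variance = 26
values = [5, 9, 16, 17, 18]
sigma = population_std_dev(values)
print(sigma)  # sqrt(26), about 5.10
```

Note that the variance (26) is in squared units, while the standard deviation (about 5.10) is back in the units of the raw data, which is the distinction the text draws.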
The empirical rule is an important rule of thumb that states the approximate percentage of values that lie within a given number of standard deviations from the mean of a set of data, provided the data are normally distributed. The empirical rule is used only for three numbers of standard deviations: 1σ, 2σ, and 3σ, which contain approximately 68%, 95%, and 99.7% of the values, respectively.
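The rule can be checked empirically with simulated normally distributed data. This is a minimal sketch using only the Python standard library; the mean of 100 and standard deviation of 15 are arbitrary illustrative parameters, and the sample is seeded so the result is reproducible.

```python
import random
import statistics

random.seed(42)
# Simulated approximately normal data (mean 100, sigma 15) -- illustrative values
data = [random.gauss(100, 15) for _ in range(100_000)]

mu = statistics.fmean(data)
sigma = statistics.pstdev(data)  # population standard deviation

# Empirical rule: roughly 68%, 95%, and 99.7% of values fall
# within 1, 2, and 3 standard deviations of the mean.
for k, expected in [(1, 68), (2, 95), (3, 99.7)]:
    within = sum(mu - k * sigma <= x <= mu + k * sigma for x in data)
    pct = 100 * within / len(data)
    print(f"within {k} sigma: {pct:.1f}% (rule says about {expected}%)")
```

With a large sample the simulated percentages land close to 68%, 95%, and 99.7%, illustrating why the rule is stated only for these three multiples of σ.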
The empirical rule applies only when data are known to be approximately normally distributed. What do researchers use when data are not normally distributed or when the shape of the distribution is unknown? Chebyshev's theorem applies to all distributions.
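Chebyshev's theorem states that, for any distribution, at least 1 − 1/k² of the values lie within k standard deviations of the mean (for k > 1). A minimal sketch of that bound, with a hypothetical helper name:

```python
def chebyshev_lower_bound(k):
    """Chebyshev's theorem: for any distribution, at least 1 - 1/k^2
    of the values lie within k standard deviations of the mean (k > 1)."""
    if k <= 1:
        raise ValueError("Chebyshev's theorem requires k > 1")
    return 1 - 1 / k ** 2

# At least 75% of any data set lies within 2 standard deviations of the mean,
# and at least about 88.9% within 3 -- regardless of the distribution's shape.
print(chebyshev_lower_bound(2))  # 0.75
print(chebyshev_lower_bound(3))  # about 0.889
```

Compare these guaranteed minimums (75% and 88.9%) with the empirical rule's 95% and 99.7%: Chebyshev's bound is weaker precisely because it makes no assumption about the distribution's shape.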