A one-sample test, also known as a goodness-of-fit test, shows whether the collected data are useful in making a prediction about the population.
One-sample tests are used when we have a single sample and wish to test the hypothesis that it comes from a specified population.
In this situation, questions such as the following arise:
• Is there a difference between the observed frequencies and the frequencies we would expect, based on some theory?
• Is there a difference between observed and expected proportions?
• Is it reasonable to conclude that the sample is drawn from a population with some specified distribution (normal, etc.)?
• Is there a significant difference between some measure of central tendency (X̄) and its population parameter (μ)?
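The first question above — observed versus expected frequencies — is typically answered with a chi-square goodness-of-fit test. A minimal sketch in pure Python follows; the die-roll counts and the assumption of a fair die are hypothetical example data, not from the text.

```python
# Chi-square goodness-of-fit: do observed die-roll counts depart from
# the frequencies a fair (uniform) die would produce?
observed = [18, 22, 16, 25, 19, 20]   # counts for faces 1-6, n = 120
expected = [sum(observed) / len(observed)] * len(observed)  # uniform: 20 each

# Sum of (O - E)^2 / E across all categories
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                # degrees of freedom = categories - 1
print(f"chi-square = {chi_sq:.2f} with {df} degrees of freedom")
```

The resulting statistic is then compared with a chi-square critical value for the chosen significance level and df to decide whether to reject the hypothesized distribution.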
A number of tests may be appropriate in this situation; they fall into two broad groups: parametric tests and non-parametric tests.
Parametric tests are more powerful because their data are derived from interval and ratio measurements. Assumptions for parametric tests include the following:
• The observations must be independent.
• The observations should be drawn from normally distributed populations.
• These populations should have equal variances.
• The measurement scales should be at least interval so that arithmetic operations can be used with them.
Parametric tests place different emphasis on the importance of assumptions. Some tests are quite robust and hold up well despite violations; for others, a departure from linearity or equality of variance may threaten the validity of the results. Assessing the consequences of violating a statistical assumption requires considerable tacit knowledge of the data used and the field one investigates. As outlined above, violations of the assumptions are the rule rather than the exception in business research. Therefore, interpretation should never be based blindly on the statistical results. Rather, the statistical results form a solid base for discussing how they can be explained and interpreted.
Non-parametric tests have fewer and less stringent assumptions. They do not specify normally distributed populations or homogeneity of variance. Some tests require independence of cases; others are expressly designed for situations with related cases. Non-parametric tests are the only ones usable with nominal data; they are the only technically correct tests to use with ordinal data, although parametric tests are sometimes employed in this case. Non-parametric tests may also be used for interval and ratio data, although they waste some of the information available. Non-parametric tests are also easy to understand and use. Parametric tests have greater efficiency when their use is appropriate, but even in such cases non-parametric tests often achieve an efficiency as high as 95 per cent. This means that the non-parametric test with a sample of 100 will provide the same statistical testing power as a parametric test with a sample of 95.
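As an illustration of a non-parametric one-sample procedure, a sign test against a hypothesized median can be sketched in a few lines. The sample values and the hypothesized median of 50 are made-up data for this example, not from the text.

```python
from math import comb

# One-sample sign test: are the data consistent with a median of 50?
sample = [52, 55, 48, 60, 57, 51, 63, 45, 58, 54]
median_0 = 50                                    # hypothesized median

plus = sum(1 for x in sample if x > median_0)    # values above the median
minus = sum(1 for x in sample if x < median_0)   # values below the median
n = plus + minus                                 # ties with the median are dropped

# Under H0 each sign is equally likely, so the count of the rarer sign
# follows a binomial(n, 0.5) distribution; double the tail for a
# two-sided p-value.
k = min(plus, minus)
p_value = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
print(f"{plus} plus signs, {minus} minus signs, p = {p_value:.3f}")
```

Because the test uses only the signs of the differences, it makes no assumption about the shape of the population distribution — the defining trait of non-parametric methods described above.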
Parametric tests in one-sample tests
One-sample tests compare a population mean (μ) with a single sample mean (X̄).
The Z or t-test is used to determine the statistical significance between a sample distribution mean and a parameter. The Z distribution and t distribution differ: the t distribution has more area in its tails than the normal distribution. This compensates for the lack of information about the population standard deviation.
The One-Sample z-Test
When the population standard deviation (σ) is known, we use the first equation below. When sigma (σ) is not known (which is usually the case), we use s to estimate σ, and so use the second equation (the more popular of the two):

z = (X̄ − μ) / (σ / √n)    [σ known]

z = (X̄ − μ) / (s / √n)    [σ estimated by s]
The limitation of using s to estimate σ is whether the sample is large enough to approximate a normal curve. "Large enough" means at least n = 30 subjects. The normal curve table (z-table) requires a normal distribution of scores in order to give accurate proportions under...
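The second version of the formula — s standing in for σ when n ≥ 30 — can be sketched in Python with the standard library. The exam-score sample and the hypothesized mean μ = 100 are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical sample of n = 30 exam scores; H0: population mean = 100.
sample = [104, 98, 110, 95, 102, 107, 99, 105, 101, 96,
          103, 108, 97, 100, 106, 94, 109, 102, 98, 105,
          101, 103, 99, 107, 100, 96, 104, 102, 98, 106]
mu = 100              # hypothesized population mean
x_bar = mean(sample)  # sample mean (X-bar)
s = stdev(sample)     # sample standard deviation estimates sigma
n = len(sample)

z = (x_bar - mu) / (s / sqrt(n))               # one-sample z statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed p-value
print(f"x-bar = {x_bar:.2f}, s = {s:.2f}, z = {z:.2f}, p = {p_value:.3f}")
```

With n = 30 the sample meets the "large enough" rule of thumb above, so the normal (z) table is a reasonable approximation even though σ is unknown.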