NON-PARAMETRIC TESTS AND ITS APPLICATION IN MANAGEMENT
Everything that we have done up until now in statistics has relied heavily on one major fact: that our data are normally distributed. We have been able to make inferences about population means (one-sample and two-sample z and t tests, and analysis of variance), but in each case we assumed that our population was normal. What happens when we want to perform a test on our data, but we have no idea what its true distribution is, and therefore cannot assume that our data are normally distributed? In this case, we use what are called nonparametric tests. These tests do not require any specific form for the distribution of the population.

Because non-parametric methods make fewer assumptions, their applicability is much wider than that of the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, owing to their reliance on fewer assumptions, non-parametric methods are more robust. Another justification for the use of non-parametric methods is simplicity: in certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding.

The wider applicability and increased robustness of non-parametric tests come at a cost: in cases where a parametric test would be appropriate, non-parametric tests have less power. In other words, a larger sample size may be required to draw conclusions with the same degree of confidence.
Non-parametric or distribution-free inferential statistical methods are mathematical procedures for statistical hypothesis testing which, unlike parametric statistics, make no assumptions about the probability distributions of the variables being assessed. When our data are normally distributed, the mean is equal to the median, and we use the mean as our measure of center. However, if our data are skewed, then the median is a much better measure of center. Therefore, just as the z, t and F tests make inferences about the population mean(s), nonparametric tests make inferences about the population median(s).
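The effect of skewness on the mean can be illustrated with a small sketch. The data below are hypothetical (chosen only to show one large value pulling the mean away from the median):

```python
import statistics

# Hypothetical right-skewed sample: one large value (95) inflates the mean
data = [20, 22, 23, 25, 26, 28, 30, 95]

mean = statistics.mean(data)      # pulled upward by the outlier
median = statistics.median(data)  # unaffected by the outlier's size

print(f"mean = {mean:.3f}, median = {median:.1f}")
# The mean (33.625) sits above every value except the outlier,
# while the median (25.5) stays in the middle of the bulk of the data.
```

This is why, for skewed data, tests about the median give a more faithful picture of the "typical" value than tests about the mean.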
Given below are the various nonparametric tests:
* Chi-square (χ²) test
* Kolmogorov-Smirnov test
* Median test
* Kruskal-Wallis one-way analysis of variance by ranks
* Friedman two-way analysis of variance by ranks
* Kuiper's test
* Mann-Whitney U test
* Wilcoxon signed-rank test
* Wilcoxon matched-pairs test
* Wald-Wolfowitz runs test
The details of some of the commonly used nonparametric tests are given below:
The Sign test (for 2 repeated/correlated measures)
The sign test is one of the simplest nonparametric tests. It is for use with 2 repeated (or correlated) measures (see the example below), and measurement is assumed to be at least ordinal. For each subject, subtract the 2nd score from the 1st, and write down the sign of the difference. (That is, write “-” if the difference score is negative, and “+” if it is positive.) The usual null hypothesis for this test is that there is no difference between the two treatments. If this is so, then the number of + signs (or - signs, for that matter) should have a binomial distribution with p = .5 and N = the number of subjects. In other words, the sign test is just a binomial test with + and - in place of Head and Tail (or Success and Failure).
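The procedure above can be sketched in a few lines. The scores below are hypothetical before/after treatment measurements for 8 subjects, and the p-value is computed from the exact binomial distribution with p = .5 (ties, i.e. zero differences, are conventionally dropped):

```python
from math import comb

def sign_test_p_value(first, second):
    """Two-sided exact sign test for paired scores.

    Counts + and - signs of (first - second) differences, drops ties,
    and computes a two-sided binomial tail probability with p = 0.5.
    """
    diffs = [a - b for a, b in zip(first, second)]
    plus = sum(1 for d in diffs if d > 0)
    minus = sum(1 for d in diffs if d < 0)
    n = plus + minus                 # zero differences are excluded
    k = min(plus, minus)
    # P(X <= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical scores for 8 subjects, before and after a treatment
before = [12, 15, 9, 14, 10, 13, 16, 11]
after  = [10, 13, 10, 11, 8, 12, 14, 9]
print(f"p = {sign_test_p_value(before, after):.4f}")
```

Here 7 of the 8 differences are positive, so the two-sided p-value is 2 × P(X ≤ 1) = 2 × 9/256 ≈ .0703, just short of the conventional .05 level.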
Large sample sign test
The sampling distribution used in carrying out the sign test is a binomial distribution with p = q = .5. The mean of a binomial distribution is equal to Np, and the variance is equal to Npq. As N increases,...
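The passage above points toward the usual normal approximation: as N grows, the Binomial(N, .5) count of + signs is approximately normal with mean Np = N/2 and variance Npq = N/4. A sketch of this large-sample version (the counts below are hypothetical):

```python
from math import sqrt, erf

def large_sample_sign_test_z(plus_count, n):
    """Normal approximation to the sign test.

    Under H0 the + count is Binomial(N, 0.5), so its mean is N/2 and
    its variance is N/4; z = (X - N/2) / sqrt(N/4).
    """
    mean = n * 0.5
    sd = sqrt(n * 0.25)
    return (plus_count - mean) / sd

def two_sided_p(z):
    # Two-sided p-value from the standard normal CDF, via erf
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical large sample: 70 "+" signs out of N = 100 subjects
z = large_sample_sign_test_z(70, 100)
print(f"z = {z:.2f}, p = {two_sided_p(z):.6f}")
```

With 70 of 100 + signs, z = (70 − 50)/5 = 4.0, far beyond the usual critical values, so the null hypothesis of no treatment difference would be rejected.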