Before conducting an ANOVA (analysis of variance), the following fundamental assumptions must be satisfied:
- Independence of observations
- Normality
- Homogeneity of variance
If the evidence indicates that the assumptions of an analysis of variance or a t-test cannot be maintained, two courses of action are open to us. One is to transform the variable to be analysed in such a manner that the resulting transformed variates meet the assumptions; common choices include the logarithmic, square root, Box-Cox, and arcsine transformations. The other is to carry out a different test, a so-called non-parametric method, that does not require the rejected assumptions, such as a distribution-free test in lieu of the ANOVA.
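The transformations above can be sketched as follows; this is an illustrative example on synthetic right-skewed data (the data and seed are my own choices, not from the text):

```python
# Variance-stabilizing transformations that may let skewed data meet
# ANOVA/t-test assumptions. Synthetic data for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.8, size=50)  # right-skewed, strictly positive

log_y = np.log(y)                 # logarithmic transformation
sqrt_y = np.sqrt(y)               # square root transformation
boxcox_y, lam = stats.boxcox(y)   # Box-Cox chooses lambda by maximum likelihood

# The arcsine (angular) transformation applies to proportions in [0, 1]
p = rng.uniform(0.05, 0.95, size=50)
arcsine_p = np.arcsin(np.sqrt(p))

print(f"skewness raw: {stats.skew(y):.2f}, after log: {stats.skew(log_y):.2f}")
```

For lognormal-style data the log transform brings the skewness close to zero, which is exactly the situation in which transforming rescues a parametric analysis.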
Whilst Mann-Whitney U-tests are based on ranks and measure differences in location, the Kolmogorov-Smirnov two-sample test tests for differences between two entire distributions. Its null hypothesis is that the two samples are identically distributed, so the test is sensitive to differences in location, dispersion, skewness, and so forth.
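The contrast can be made concrete with two samples that share the same location but differ in spread: Mann-Whitney, which looks for a shift in location, typically fails to reject, while the two-sample Kolmogorov-Smirnov test detects the dispersion difference. A sketch on synthetic data (sample sizes and scales are arbitrary choices):

```python
# Same centre, different spread: Mann-Whitney vs two-sample KS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=0.0, scale=1.0, size=200)
b = rng.normal(loc=0.0, scale=3.0, size=200)  # same location, 3x the spread

u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")
d_stat, d_p = stats.ks_2samp(a, b)

print(f"Mann-Whitney p = {u_p:.3f}   (sensitive to location only)")
print(f"KS two-sample p = {d_p:.3g}  (detects the spread difference)")
```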
It is based on the unsigned (absolute) differences between the cumulative frequency distributions of the two samples. We locate the largest unsigned difference, call it D, and multiply it by the product of the two sample sizes (Dn1n2). This score (Dn1n2) is then compared with its critical value in the Kolmogorov-Smirnov tables. Since the test does not compare any particular parameter (e.g. the mean or median), it does not report a confidence interval. As for interpreting the p-value: if the p-value is small, conclude that the two groups were sampled from populations with different distributions.
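The mechanics above can be sketched by computing D by hand from the two empirical cumulative distribution functions and checking it against `scipy.stats.ks_2samp` (the data here are synthetic, and the seed is arbitrary):

```python
# Hand-computed KS statistic: largest unsigned difference between the
# two empirical CDFs, evaluated at every observed value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 1.0, size=30)
x2 = rng.normal(0.5, 1.0, size=40)

pooled = np.sort(np.concatenate([x1, x2]))
ecdf1 = np.searchsorted(np.sort(x1), pooled, side="right") / len(x1)
ecdf2 = np.searchsorted(np.sort(x2), pooled, side="right") / len(x2)
D = np.max(np.abs(ecdf1 - ecdf2))  # the largest unsigned difference

res = stats.ks_2samp(x1, x2)
print(f"hand-computed D = {D:.4f}, scipy D = {res.statistic:.4f}")
print(f"D * n1 * n2 = {D * len(x1) * len(x2):.0f}, p-value = {res.pvalue:.3f}")
```

The quantity `D * n1 * n2` is the Dn1n2 score that would be looked up in the Kolmogorov-Smirnov tables; `ks_2samp` instead reports a p-value directly.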
The Kolmogorov-Smirnov test is less powerful than the Mann-Whitney U-test with respect to differences in location. However, the Kolmogorov-Smirnov test examines differences in both the shape and the location of the distributions and is thus the more comprehensive test.
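The power claim can be checked with a quick Monte Carlo sketch: simulate a pure location shift many times and count how often each test rejects. The shift size, sample size, and number of trials below are arbitrary choices, not values from the text:

```python
# Monte Carlo comparison of rejection rates under a pure location shift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trials, n, shift, alpha = 500, 30, 0.7, 0.05
mw_rejects = ks_rejects = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(shift, 1.0, n)  # shifted in location only
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        mw_rejects += 1
    if stats.ks_2samp(a, b).pvalue < alpha:
        ks_rejects += 1

print(f"power (Mann-Whitney) ~ {mw_rejects / trials:.2f}")
print(f"power (KS two-sample) ~ {ks_rejects / trials:.2f}")
```

For a shift in location alone, Mann-Whitney rejects more often, as the text states.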
It is easy to confuse the two-sample Kolmogorov-Smirnov test (which compares two groups) with the one-sample Kolmogorov-Smirnov test, also called the Kolmogorov-Smirnov goodness-of-fit test, which tests whether one distribution differs substantially from a theoretical distribution. The one-sample test is most often used as a normality test, comparing the distribution of a single data set with the predictions of a normal distribution.
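A minimal sketch of the one-sample goodness-of-fit use, with synthetic data and parameters of my own choosing. One caveat worth knowing: when the normal parameters are estimated from the same data being tested, the plain one-sample KS p-value is too conservative, and a correction (e.g. Lilliefors) or a dedicated normality test is usually preferred:

```python
# One-sample KS goodness-of-fit: compare one data set against a fully
# specified theoretical distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(loc=10.0, scale=2.0, size=100)

# Test against N(10, 2) with parameters specified a priori
stat, p = stats.kstest(data, "norm", args=(10.0, 2.0))
print(f"vs Normal(10, 2): D = {stat:.3f}, p = {p:.3f}")

# Against a clearly wrong model, the test rejects
stat2, p2 = stats.kstest(data, "uniform", args=(0.0, 20.0))
print(f"vs Uniform(0, 20): D = {stat2:.3f}, p = {p2:.2e}")
```

Large p-values are consistent with the hypothesized distribution; small ones indicate a substantial departure from it.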