Statistical tests are divided into parametric and nonparametric tests. The main difference lies in the assumptions made about the underlying distribution of the data.
Parametric tests assume that the data follow a particular distribution, such as the normal distribution. These tests use parameters such as the mean and standard deviation to test hypotheses about the population. Examples of parametric tests include the t-test, analysis of variance (ANOVA), and linear regression. When their distributional assumptions hold, parametric tests tend to be more powerful; when the data depart markedly from the assumed distribution, their results can be misleading.
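As a minimal sketch of a parametric test, the following compares the means of two groups with an independent two-sample t-test in SciPy. The sample values are made up for illustration:

```python
from scipy import stats

# Hypothetical measurements for two independent groups
group_a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7, 6.0, 5.4]

# Independent two-sample t-test: compares the group means,
# assuming each group is approximately normally distributed
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A small p-value here suggests the group means differ, provided the normality assumption is reasonable for both samples.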
Nonparametric tests, on the other hand, do not assume that the data follow any particular distribution; for this reason they are also known as distribution-free tests. They are based on the ranks or permutations of the data and are well suited for situations where distributional assumptions are not met or where the data are ordinal. Examples of nonparametric tests include the Wilcoxon rank-sum test (equivalent to the Mann-Whitney U test), the Wilcoxon signed-rank test, and the Kruskal-Wallis test.
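A minimal sketch of the rank-based approach, again with made-up scores: the Mann-Whitney U test in SciPy compares two groups using only the ranks of the values, so no distributional assumption is needed.

```python
from scipy import stats

# Hypothetical ordinal-style scores where normality is doubtful
group_a = [1, 2, 2, 3, 3, 3, 4]
group_b = [3, 4, 4, 5, 5, 6, 7]

# Mann-Whitney U test: compares the rank distributions of the
# two groups without assuming any particular distribution
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```

Because only ranks enter the calculation, the result is unchanged by any monotonic transformation of the data, which is what makes rank tests robust for ordinal measurements.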
The choice between a parametric and a nonparametric test depends on the nature of the data and on whether the assumptions are met. If the data follow the assumed distribution, parametric tests offer more power. If the distributional assumptions are violated, or the data are ordinal or categorical, nonparametric tests are the more appropriate choice.
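One common way to inform this choice is a normality check such as the Shapiro-Wilk test, whose null hypothesis is that the sample comes from a normal distribution. A sketch with an invented, clearly skewed sample:

```python
from scipy import stats

# Hypothetical sample with a few extreme values (heavily skewed)
sample = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.1,
          5.0, 6.5, 0.9, 1.0, 1.2, 0.8, 7.2]

# Shapiro-Wilk test: a small p-value is evidence against normality
stat, p_value = stats.shapiro(sample)
if p_value > 0.05:
    print("No evidence against normality -> a parametric test is reasonable")
else:
    print("Normality doubtful -> prefer a nonparametric test")
```

Note that such pre-tests are a rough guide rather than a guarantee: with small samples they have little power to detect non-normality, and with very large samples they flag even trivial departures.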