Statistical significance
In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant.
If a result is statistically significant, that means it's unlikely to be explained solely by chance or random factors. In other words, a statistically significant result has a very low chance of occurring if there were no true effect in a research study.
The p value, or probability value, tells you the statistical significance of a finding. In most studies, a p value of 0.05 or less is considered statistically significant, but this threshold can also be set higher or lower.
The significance level, or alpha (α), is a value that the researcher sets in advance as the threshold for statistical significance. It is the maximum risk of making a false positive conclusion (Type I error) that you are willing to accept.
In a hypothesis test, the p value is compared to the significance level to decide whether to reject the null hypothesis.

• If the p value is higher than the significance level, the null hypothesis is not refuted, and the results are not statistically significant.

• If the p value is lower than the significance level, the results are interpreted as refuting the null hypothesis and reported as statistically significant.
Usually, the significance level is set to 0.05 or 5%. That means your results must have a 5% or lower chance of occurring under the null hypothesis to be considered statistically significant.
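In code, this decision rule is a single comparison. Below is a minimal sketch in Python, using an independent-samples t-test from SciPy on made-up data; the samples, sample sizes, and the choice of alpha = 0.05 are all illustrative assumptions, not a prescription.

```python
# Minimal sketch of the p value vs. significance level decision rule.
# The data and alpha below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)  # hypothetical control group
group_b = rng.normal(loc=5.6, scale=1.0, size=30)  # hypothetical treatment group

alpha = 0.05  # significance level, chosen before looking at the data
t_stat, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis (statistically significant)")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis (not significant)")
```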
The significance level can be lowered for a more conservative test. That means an effect has to be larger to be considered statistically significant.
The significance level may also be set higher for significance testing in non-academic marketing or business contexts. This makes the study less rigorous and increases the probability of finding a statistically significant result.
|
Effect size
Effect size tells you how meaningful the relationship between variables or the difference between groups is. It indicates the practical significance of your results. A large effect size means that a research finding has practical significance, while a small effect size indicates limited practical applications.
It's important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you're writing an APA style paper.
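One common effect size measure for the difference between two group means is Cohen's d, the standardized mean difference. Below is a minimal sketch in Python; the helper function and the sample data are illustrative assumptions.

```python
# Sketch of Cohen's d for two independent groups, using the pooled
# standard deviation. The sample data below are made up.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    # Pooled variance, weighting each group's sample variance by its degrees of freedom
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

group_a = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]  # hypothetical scores
group_b = [5.9, 6.1, 5.7, 6.0, 5.8, 6.2]

print(f"Cohen's d = {cohens_d(group_b, group_a):.2f}")
# Rough benchmarks for d: ~0.2 small, ~0.5 medium, ~0.8 large
```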
Frequentist vs. Bayesian statistics
Frequentist statistics emphasizes null hypothesis significance testing and always starts with the assumption of a true null hypothesis.

Bayesian statistics takes a different approach: you use previous research to continually update your hypotheses based on your expectations and observations.
The Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis, rather than leading to a conclusion about rejecting the null hypothesis or not.
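As one illustration, a Bayes factor for a two-sample comparison can be computed in Python with the pingouin library's JZS Bayes factor. The sketch below uses made-up data, and this is just one of several ways to obtain a Bayes factor.

```python
# Sketch of a Bayes factor for a two-sample comparison, using pingouin's
# JZS Bayes factor (pip install pingouin). The data below are made up.
import numpy as np
from scipy import stats
import pingouin as pg

rng = np.random.default_rng(seed=1)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.6, scale=1.0, size=30)

t_stat, _ = stats.ttest_ind(group_a, group_b)
bf10 = pg.bayesfactor_ttest(t_stat, nx=30, ny=30)

# BF10 > 1 means the data favor the alternative hypothesis;
# BF10 < 1 means they favor the null. No all-or-nothing rejection is made.
print(f"BF10 = {bf10:.2f}")
```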
Decision error
Type I and Type II errors are mistakes made in research conclusions.
A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false.
The probability of making a Type I error is the significance level, or alpha (α), while the probability of making a Type II error is beta (β).
These risks can be minimized through careful planning in your study design.
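Both error rates can be made concrete with a small simulation: run many hypothetical studies with and without a true effect and count how often the test decision is wrong. The sketch below assumes illustrative values for the sample size, effect size, and alpha.

```python
# Sketch: estimate Type I and Type II error rates by simulating many studies.
# Sample size, effect size, and alpha are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
alpha, n, true_effect, n_sims = 0.05, 30, 0.5, 10_000

type_1 = 0  # null true but rejected (false positive)
type_2 = 0  # null false but not rejected (false negative)

for _ in range(n_sims):
    # Null hypothesis true: both groups drawn from the same distribution
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type_1 += 1

    # Null hypothesis false: a real effect of 0.5 standard deviations
    c = rng.normal(0.0, 1.0, n)
    d = rng.normal(true_effect, 1.0, n)
    if stats.ttest_ind(c, d).pvalue >= alpha:
        type_2 += 1

print(f"Estimated Type I error rate (≈ alpha): {type_1 / n_sims:.3f}")
print(f"Estimated Type II error rate (beta):   {type_2 / n_sims:.3f}")
```

With these settings, the estimated Type I rate lands near the chosen alpha of 0.05, while beta depends on the sample size and effect size, which is why planning decisions such as choosing an adequate sample size reduce these risks.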