Bonferroni Correction Calculator

Enter your original significance level (α) and the number of comparisons you're performing, and this Bonferroni Correction Calculator computes your adjusted p-value threshold — the corrected α you should use for each individual test. You also get the Šidák correction as an alternative, plus a breakdown showing how stringent your new threshold becomes as comparisons increase.

Original Significance Level (α): The familywise error rate you want to maintain (commonly 0.05 or 0.01).

Number of Comparisons: The total number of simultaneous statistical tests being performed.

Results

Bonferroni Corrected α

Šidák Corrected α

Original α Level

Number of Comparisons

Threshold Reduction

Original α vs Corrected Thresholds

Results Table

Frequently Asked Questions

What is the Bonferroni correction?

The Bonferroni correction is a statistical adjustment applied when you conduct multiple hypothesis tests simultaneously. Without correction, the probability of making at least one false positive (Type I error) rises sharply with each additional test. The correction keeps the overall familywise error rate at your chosen α by dividing it by the number of comparisons performed.
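The inflation described above follows directly from the independence assumption: with k independent tests each run at level α, the probability of at least one false positive is 1 − (1 − α)^k. A quick sketch in Python (the variable names are illustrative):

```python
# Familywise error rate (FWER) for k independent tests, each at level alpha:
# P(at least one Type I error) = 1 - (1 - alpha)**k
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"k={k:2d} tests: P(at least one false positive) = {fwer:.3f}")
```

At α = 0.05, ten uncorrected tests already carry roughly a 40% chance of at least one false positive, which is why a correction is needed.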

How do I calculate the Bonferroni correction?

The formula is simple: divide your original significance level (α) by the number of comparisons (k). For example, if α = 0.05 and you run 10 tests, the Bonferroni threshold is 0.05 / 10 = 0.005. A test result is only considered statistically significant if its p-value falls below this adjusted threshold.
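The division is a one-liner; a minimal sketch (the function name is illustrative, not part of any library):

```python
def bonferroni_alpha(alpha: float, k: int) -> float:
    """Per-test significance threshold that keeps the familywise error rate at alpha."""
    return alpha / k

# alpha = 0.05 across 10 tests: each test must reach p < 0.005
threshold = bonferroni_alpha(0.05, 10)
print(threshold)
```

A p-value is then compared against `threshold` rather than the original α.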

What is the Bonferroni correction for 10 tests with α = 0.05?

For 10 simultaneous tests at α = 0.05, the Bonferroni corrected threshold is 0.05 / 10 = 0.005. This means each individual test must produce a p-value below 0.005 to be declared statistically significant, maintaining the overall familywise error rate at 5%.

When should I use the Bonferroni correction?

Use the Bonferroni correction when you are performing multiple independent or related hypothesis tests on the same dataset and want to control the familywise error rate. It is commonly applied in clinical trials, genomics studies, ANOVA post-hoc comparisons, and any analysis where many tests run simultaneously. It is most appropriate when the number of comparisons is moderate and the tests are independent.

What is the difference between the Bonferroni and Šidák corrections?

Both corrections address multiple comparisons, but they use different formulas. The Bonferroni correction is α / k, while the Šidák correction is 1 − (1 − α)^(1/k). The Šidák correction is slightly less conservative and is mathematically exact when tests are independent, whereas Bonferroni provides a conservative upper bound. For small numbers of comparisons, the results are nearly identical.
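Under the independence assumption, the two thresholds can be compared side by side; a small Python sketch (function names are illustrative):

```python
def bonferroni(alpha: float, k: int) -> float:
    # Conservative upper bound: alpha / k
    return alpha / k

def sidak(alpha: float, k: int) -> float:
    # Exact per-test level for k independent tests: 1 - (1 - alpha)**(1/k)
    return 1 - (1 - alpha) ** (1 / k)

alpha = 0.05
for k in (2, 10, 100):
    print(f"k={k:3d}: Bonferroni={bonferroni(alpha, k):.6f}  Sidak={sidak(alpha, k):.6f}")
```

The Šidák threshold is always at least as large as the Bonferroni one, so it rejects slightly more often; for small k the difference is negligible (at k = 10 and α = 0.05, it is 0.005116 vs 0.005).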

Is the Bonferroni correction too conservative?

Yes, the Bonferroni correction is widely considered conservative, especially when tests are correlated or when the number of comparisons is very large. By requiring an extremely small p-value threshold, it increases the risk of Type II errors (false negatives), meaning real effects may go undetected. Alternatives like the Benjamini-Hochberg procedure (controlling the false discovery rate) are often preferred in large-scale studies such as genome-wide association studies.
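As a rough illustration of the Benjamini-Hochberg step-up procedure (a sketch with made-up p-values, not output of this calculator): sort the p-values, find the largest rank i with p(i) ≤ i·q/m, and reject every hypothesis up to that rank.

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Return a list of booleans: True where the hypothesis is rejected at FDR level q."""
    m = len(pvalues)
    # Sort p-values, remembering their original positions
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject_up_to = 0
    for rank, i in enumerate(order, start=1):
        # Step-up criterion: p(rank) <= rank * q / m
        if pvalues[i] <= rank * q / m:
            reject_up_to = rank  # keep the largest passing rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= reject_up_to:
            rejected[i] = True
    return rejected

# Five p-values at FDR level q = 0.05
pvals = [0.001, 0.008, 0.012, 0.041, 0.20]
print(benjamini_hochberg(pvals))
```

On these p-values, BH rejects the first three, whereas the plain Bonferroni threshold (0.05 / 5 = 0.01) rejects only the first two, which is the sense in which BH is less conservative.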

Is the Bonferroni correction the only method for multiple comparisons?

No. Several other methods exist, including the Šidák correction, Holm-Bonferroni step-down procedure, Benjamini-Hochberg false discovery rate (FDR) correction, and Tukey's HSD for pairwise comparisons. The best method depends on whether your tests are independent, how many comparisons you make, and whether you are controlling the familywise error rate or the false discovery rate.
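The Holm-Bonferroni step-down procedure, for instance, controls the same familywise error rate as Bonferroni but is uniformly more powerful: it compares the smallest remaining p-value against α / (m − step) rather than α / m. A minimal sketch (the function name is illustrative):

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Step-down Holm procedure: returns True where the hypothesis is rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = [False] * m
    for step, i in enumerate(order):
        # Compare the smallest remaining p-value against alpha / (m - step)
        if pvalues[i] <= alpha / (m - step):
            rejected[i] = True
        else:
            break  # first failure: all larger p-values are retained
    return rejected

pvals = [0.005, 0.011, 0.02, 0.04]
print(holm_bonferroni(pvals, alpha=0.05))
```

Here Holm rejects all four hypotheses, while plain Bonferroni (threshold 0.05 / 4 = 0.0125) would reject only the first two.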

Does the Bonferroni correction apply to dependent tests?

The Bonferroni correction is technically derived under the assumption of independent tests, but it remains a valid (conservative) correction even when tests are correlated or dependent. Because it is an inequality-based bound, it will over-correct when tests are positively correlated, making it more conservative than necessary in such cases.
