Minimum Detectable Effect Calculator

Enter your baseline conversion rate, number of visitors per variation, statistical significance, and statistical power to find the Minimum Detectable Effect (MDE) — the smallest relative change your experiment can reliably detect. You'll also see the absolute MDE and the detectable conversion rate so you know exactly what uplift your A/B test is powered to catch.


The current conversion rate of your control group (e.g. 10 means 10%).

The number of visitors (or sessions) assigned to each variation in your test.

The confidence level at which a result is declared statistically significant. 95% is the industry standard.

The probability of detecting a true effect when it exists. 80% is the standard minimum.

Relative MDE is a percentage change from the baseline (e.g. a +10% relative MDE on a 10% baseline means detecting 11%). Absolute MDE is a direct percentage-point change.

Results

Minimum Detectable Effect

Absolute MDE (percentage points)

Detectable Conversion Rate

Significance Level (α)

Type II Error Rate (β)

Baseline vs Detectable Conversion Rate

Frequently Asked Questions

What is the Minimum Detectable Effect (MDE)?

The Minimum Detectable Effect is the smallest true difference in conversion rates that your A/B test has sufficient statistical power to reliably detect. If the real effect is smaller than the MDE, your test is likely to miss it (a false negative). Setting a smaller MDE requires a larger sample size.
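As a minimal sketch of how an MDE like this can be computed, here is the common normal-approximation formula for a two-sided two-proportion z-test; the exact method this calculator uses may differ, and the sample size below is only an example:

```python
from scipy.stats import norm

def mde_absolute(p_baseline: float, n_per_variation: int,
                 alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest absolute lift (in proportion units) a two-sided
    two-proportion z-test is powered to detect, per the usual
    normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 at 95% significance
    z_beta = norm.ppf(power)           # 0.84 at 80% power
    se = (2 * p_baseline * (1 - p_baseline) / n_per_variation) ** 0.5
    return (z_alpha + z_beta) * se

p = 0.10                                   # 10% baseline conversion rate
mde_abs = mde_absolute(p, 10_000)          # ~0.0119, i.e. about 1.19 pp
print(f"relative MDE: {mde_abs / p:.1%}")  # ~11.9%
```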

What is the difference between relative and absolute MDE?

Relative MDE expresses the detectable change as a percentage of the baseline — for example, a 10% relative MDE on a 10% baseline means detecting a change to 11%. Absolute MDE is the raw percentage-point change, so a 1 pp absolute MDE on the same baseline also means detecting a shift to 11%. Both convey the same threshold; the framing differs.
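As a quick sanity check, converting between the two framings is just multiplication by the baseline; the numbers mirror the example above:

```python
baseline = 0.10       # 10% baseline conversion rate
relative_mde = 0.10   # 10% relative change

absolute_mde = baseline * relative_mde  # 0.01 = 1 percentage point
detectable = baseline + absolute_mde    # 0.11 = 11% detectable rate
```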

How do I choose a good value for MDE?

Start by asking what change would be meaningful or actionable for your business. If a 5% relative improvement in conversions would justify rolling out a change, set your MDE to 5%. Smaller MDEs demand more traffic and longer test durations, so balance statistical rigor with practical constraints.

What statistical significance level should I use?

95% (α = 0.05) is the widely accepted industry standard and means you accept a 5% chance of a false positive. Higher significance (99%) reduces false positives but requires larger samples. Lower levels (90%) are used when speed matters more and the cost of a false positive is low.

What is statistical power and why does it matter?

Statistical power (1 − β) is the probability that your test will correctly detect a true effect when one exists. A power of 80% means there is a 20% chance of missing a real difference (Type II error). Higher power is safer but requires more visitors. 80% is the conventional minimum for A/B tests.
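To see how significance and power together drive traffic requirements: under the normal approximation, required sample size scales with the squared sum of the two z-scores, so the ratios below hold regardless of baseline or MDE. A rough sketch:

```python
from scipy.stats import norm

def z_multiplier(alpha: float, power: float) -> float:
    """(z_{1-alpha/2} + z_{power})^2: the factor that scales sample size."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2

base = z_multiplier(0.05, 0.80)         # ~7.85 for the standard 95%/80% setup
print(z_multiplier(0.01, 0.80) / base)  # 99% significance: ~1.49x the traffic
print(z_multiplier(0.05, 0.90) / base)  # 90% power: ~1.34x the traffic
```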

How many visitors do I need for my A/B test?

The required sample size depends on your baseline conversion rate, the MDE you want to detect, your significance level, and your desired power. Generally, lower baseline rates or smaller MDEs require more visitors. Use this calculator in reverse — set a target MDE based on your available traffic and see what effect you can realistically detect.
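A sketch of the standard sample-size calculation this answer describes, again assuming a two-sided two-proportion z-test with equal allocation (the calculator itself may use a different approximation):

```python
import math
from scipy.stats import norm

def sample_size_per_variation(p_baseline: float, relative_mde: float,
                              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variation to detect the given
    relative MDE with a two-sided two-proportion z-test."""
    p2 = p_baseline * (1 + relative_mde)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    variance = p_baseline * (1 - p_baseline) + p2 * (1 - p2)
    return math.ceil(z ** 2 * variance / (p2 - p_baseline) ** 2)

print(sample_size_per_variation(0.10, 0.10))  # ~14,749 per variation
```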

How long should I run my A/B test?

Run your test until each variation reaches its required sample size. Stopping early, even if results look significant, inflates the false positive rate. As a rule of thumb, run for at least one full business cycle (typically 1–2 weeks) to account for day-of-week effects.
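Translating a required sample size into a duration is simple division and rounding; the daily traffic figure below is hypothetical:

```python
import math

n_needed = 14_749            # required visitors per variation (from above)
daily_per_variation = 1_200  # hypothetical daily traffic per variation

days = math.ceil(n_needed / daily_per_variation)  # 13 days
full_weeks = math.ceil(days / 7)                  # round up to whole weeks
print(f"run for at least {full_weeks * 7} days")  # 14 days
```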

Why is my MDE large even with thousands of visitors?

A high MDE with a large sample often results from a very low baseline conversion rate. When conversions are rare, variance is high and you need more data to distinguish signal from noise. Try increasing traffic to each variation or, where possible, choose a higher-converting funnel step as your primary metric.
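A short illustration of this effect, holding visitors fixed and varying only the baseline, using the same normal-approximation sketch as above:

```python
from scipy.stats import norm

def relative_mde(p: float, n: int,
                 alpha: float = 0.05, power: float = 0.80) -> float:
    """Relative MDE under the two-proportion normal approximation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (2 * p * (1 - p) / n) ** 0.5 / p

print(f"{relative_mde(0.10, 10_000):.1%}")  # ~11.9% at a 10% baseline
print(f"{relative_mde(0.01, 10_000):.1%}")  # ~39.4% at a 1% baseline
```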
