Bayesian A/B Test Calculator

Enter your Control and Variant sample sizes and conversions to get a Bayesian probability that each version is the winner. You'll see Chance to Beat for both groups, their conversion rates, and the relative uplift — giving you a clear, intuitive read on your A/B test results without needing p-values.

Total number of users or sessions exposed to the control (variant A).

Number of clicks, sign-ups, or goal completions for variant A.

Total number of users or sessions exposed to Variant B.

Number of clicks, sign-ups, or goal completions for variant B.

Prior Beta distribution α parameter. Use 1 for a non-informative prior.

Prior Beta distribution β parameter. Use 1 for a non-informative prior.

A variant exceeding this chance-to-beat threshold is declared the winner.

Results

Variant B — Chance to Beat Control

--

Control A — Chance to Beat Variant

--

Control A — Conversion Rate

--

Variant B — Conversion Rate

--

Relative Uplift (B vs A)

--

Winner

--

Chance to Beat — A vs B

Frequently Asked Questions

What is a Bayesian A/B test?

A Bayesian A/B test compares two variants by calculating the probability that one outperforms the other, given the observed data and a prior belief. Unlike frequentist methods, it expresses results as intuitive probabilities (e.g. '85% chance Variant B is better') rather than p-values, making it easier for non-statisticians to interpret and act on.
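The calculator's internal method isn't shown on this page, but a common way to compute this probability is Monte Carlo sampling from each variant's Beta posterior under a Beta-Binomial model. A minimal sketch in Python, assuming NumPy; the function and parameter names are illustrative, not the calculator's actual code:

```python
import numpy as np

def chance_to_beat(samples_a, conversions_a, samples_b, conversions_b,
                   prior_alpha=1.0, prior_beta=1.0, draws=200_000, seed=0):
    """Estimate P(true rate of B > true rate of A) by sampling Beta posteriors.

    Each arm's posterior is Beta(prior_alpha + conversions,
    prior_beta + non-conversions).
    """
    rng = np.random.default_rng(seed)
    post_a = rng.beta(prior_alpha + conversions_a,
                      prior_beta + samples_a - conversions_a, draws)
    post_b = rng.beta(prior_alpha + conversions_b,
                      prior_beta + samples_b - conversions_b, draws)
    # Fraction of draws where B's sampled rate exceeds A's
    return float((post_b > post_a).mean())

# Example: 1,000 users per arm, 50 vs 65 conversions
p_b_wins = chance_to_beat(1000, 50, 1000, 65)
```

With identical data on both arms the estimate sits near 50%, and it rises toward 100% as the evidence for B accumulates.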

What does 'Chance to Beat' mean?

'Chance to Beat' is the Bayesian probability that one variant has a higher true conversion rate than the other. A Variant B 'Chance to Beat' of 95% means there is a 95% probability that B's underlying conversion rate is genuinely higher than A's, based on your data.
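For two independent Beta posteriors this probability also has a known closed-form sum, so it can be computed exactly without sampling (valid when B's posterior α is a whole number, as it is for count data with an integer prior). A sketch using only the Python standard library; the function names are illustrative:

```python
from math import exp, lgamma, log

def log_beta(a, b):
    """Log of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def prob_b_beats_a(alpha_a, beta_a, alpha_b, beta_b):
    """Exact P(p_B > p_A) for independent Beta posteriors.

    Requires alpha_b to be a positive integer (true for conversion
    counts combined with an integer prior alpha).
    """
    total = 0.0
    for i in range(int(alpha_b)):
        total += exp(log_beta(alpha_a + i, beta_a + beta_b)
                     - log(beta_b + i)
                     - log_beta(1 + i, beta_b)
                     - log_beta(alpha_a, beta_a))
    return total

# Posteriors for 50/1000 vs 65/1000 with a flat Beta(1, 1) prior:
p = prob_b_beats_a(1 + 50, 1 + 950, 1 + 65, 1 + 935)
```

The loop runs once per posterior conversion count on the B side, so it stays fast for typical A/B test sizes.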

What should my samples and conversions represent?

Samples are the total number of users, sessions, or impressions exposed to each variant. Conversions are the number of successful outcomes — clicks, sign-ups, purchases, or any goal completion — within that group. Conversions must always be less than or equal to samples.

What prior alpha and beta values should I use?

For most A/B tests with no strong prior knowledge, use α = 1 and β = 1, which represents a flat (non-informative) prior — meaning you start with no assumption about the conversion rate. If you have historical data suggesting a certain baseline rate, you can encode that belief by adjusting α and β accordingly.
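As an illustration of encoding historical data, one common approach treats the prior as a number of pseudo-observations: α counts prior conversions and β counts prior non-conversions. The helper below is a hypothetical sketch of that idea, not part of the calculator:

```python
def prior_from_history(baseline_rate, prior_strength):
    """Turn a historical conversion rate into Beta prior parameters.

    prior_strength is how many pseudo-observations the prior is worth:
    alpha = prior conversions, beta = prior non-conversions.
    """
    alpha = baseline_rate * prior_strength
    beta = (1.0 - baseline_rate) * prior_strength
    return alpha, beta

# A 4% historical rate, weighted like 100 past observations:
alpha, beta = prior_from_history(0.04, 100)   # roughly alpha=4, beta=96
```

A larger prior_strength makes the prior harder for new data to overturn, so keep it small unless the historical evidence is genuinely strong.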

What winning threshold should I choose?

A 95% threshold is the most commonly used standard, meaning you declare a winner only when there is a 95% probability it is genuinely better. For lower-stakes tests or early exploration, 90% or even 85% may be acceptable. For high-stakes decisions like pricing or checkout flows, consider 99%.

How is Bayesian A/B testing different from frequentist testing?

Frequentist testing relies on p-values and typically requires a sample size fixed before the test begins. Bayesian testing lets you interpret results in probabilistic terms at any point during the test. Bayesian results are often considered more intuitive because they directly answer 'what is the probability B is better than A?' rather than 'is the observed result unlikely under the null hypothesis?'

How many samples do I need for a reliable result?

There is no hard minimum, but as a rule of thumb, aim for at least a few hundred conversions per variant to get stable posterior estimates. Very small sample sizes (fewer than 10–20 conversions) will produce high uncertainty regardless of the apparent conversion rate difference. Run your test until the chance-to-beat stabilises above your chosen threshold.
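One way to see why small samples stay uncertain is to look at the width of the posterior credible interval for a single arm. A sketch assuming NumPy; names and the example figures are illustrative:

```python
import numpy as np

def credible_interval(samples, conversions, level=0.95,
                      prior_alpha=1.0, prior_beta=1.0,
                      draws=200_000, seed=0):
    """Equal-tailed credible interval for a conversion rate,
    taken from the Beta posterior."""
    rng = np.random.default_rng(seed)
    post = rng.beta(prior_alpha + conversions,
                    prior_beta + samples - conversions, draws)
    tail = (1.0 - level) / 2.0
    lo, hi = np.quantile(post, [tail, 1.0 - tail])
    return float(lo), float(hi)

# Same observed 5% rate, very different certainty:
small = credible_interval(100, 5)        # wide interval
large = credible_interval(10_000, 500)   # much narrower interval
```

Both intervals are centred near 5%, but the small-sample interval is many times wider, which is exactly the high uncertainty described above.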

Can I use this calculator for more than two variants?

This calculator is designed for a two-variant (A vs B) comparison. For multi-variant (A/B/n) tests with three or more variants, the calculation becomes more complex. In that case, you can run pairwise comparisons between each variant and your control as a practical approximation.
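The pairwise approach can be sketched with the same Monte Carlo idea, comparing each variant against the control in turn. All names and figures below are illustrative; note that running many pairwise comparisons inflates the chance of a false winner, so treat the results as an approximation:

```python
import numpy as np

def beats_control(n_ctrl, conv_ctrl, n_var, conv_var,
                  draws=100_000, seed=0):
    """P(variant's true rate > control's), with a flat Beta(1, 1) prior."""
    rng = np.random.default_rng(seed)
    ctrl = rng.beta(1 + conv_ctrl, 1 + n_ctrl - conv_ctrl, draws)
    var = rng.beta(1 + conv_var, 1 + n_var - conv_var, draws)
    return float((var > ctrl).mean())

control = (1000, 50)                           # (samples, conversions)
variants = {"B": (1000, 65), "C": (1000, 48)}  # hypothetical A/B/n data

# Chance-to-beat-control for each variant
chance = {name: beats_control(*control, n, c)
          for name, (n, c) in variants.items()}
```

Each variant gets its own chance-to-beat figure against the shared control, which mirrors running this calculator once per variant.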
