Posterior Probability Calculator

Enter your prior probability, the likelihood of the evidence if the hypothesis is true, and the likelihood of the evidence if the hypothesis is false to compute the posterior probability using Bayes' Theorem. You get back the updated P(H|E) — the probability your hypothesis is true after observing the evidence — plus the marginal probability of the evidence.

The initial probability of the hypothesis being true before seeing the evidence. If unknown, use 0.5.

The probability of observing the evidence if the hypothesis IS true (e.g. test sensitivity).

The probability of observing the evidence if the hypothesis is NOT true (e.g. false positive rate).

Results

Posterior Probability P(H|E)

--

Posterior Probability (%)

--

Marginal Probability P(E)

--

Prior Probability (%)

--

Probability Change

--

Prior vs Posterior Probability

Frequently Asked Questions

What is Bayes' Theorem?

Bayes' Theorem is a mathematical formula that describes how to update the probability of a hypothesis based on new evidence. It combines your prior belief (prior probability) with how likely the evidence is under each scenario to produce an updated belief (posterior probability). The formula is: P(H|E) = P(E|H) × P(H) / P(E).
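The formula above can be sketched in a few lines of Python (the function name and example inputs are illustrative, not part of the calculator):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Marginal probability of the evidence, P(E)
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return p_e_given_h * prior / p_e

# Neutral 0.5 prior; evidence is twice as likely if the hypothesis is true
print(posterior(0.5, 0.8, 0.4))  # 0.666... — belief rises from 50% to ~67%
```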

What is posterior probability?

Posterior probability is the updated probability of a hypothesis being true after taking new evidence into account. It is the result of applying Bayes' Theorem — it 'updates' your prior belief using the likelihood of the observed evidence. The more likely the evidence is under the hypothesis compared to under its negation, the more the probability increases from prior to posterior.

What is prior probability and how do I choose it?

Prior probability P(H) is your initial estimate of how likely a hypothesis is before seeing any evidence. It can come from historical data, expert knowledge, or base rates. For example, the prevalence of a disease in the population is a natural prior for a medical test. If you have no prior knowledge, using 0.5 (50%) is a neutral starting point.

What is the difference between likelihood and probability?

Probability refers to the chance of an event occurring before you observe it. Likelihood, in the Bayesian context, refers to how probable the observed evidence is given a specific hypothesis. P(E|H) is the likelihood — it asks 'if the hypothesis is true, how likely is this evidence?' rather than 'how likely is the hypothesis?'

When should I use Bayes' Theorem?

Use Bayes' Theorem whenever you want to update a belief or probability estimate based on new data or evidence. Common applications include interpreting medical test results, spam filtering, machine learning classifiers, forensic evidence analysis, and A/B testing. It is especially valuable when base rates (prior probabilities) are very low or very high.

Why can a positive medical test still mean a low chance of having a disease?

This is the classic 'base rate fallacy'. If the disease is very rare (low prior probability), even a highly accurate test can produce mostly false positives. For example, with a 1% disease prevalence, a 90% sensitive and 95% specific test gives a posterior probability of only about 15% — meaning most positives are still false. This is why Bayes' Theorem is critical in medical diagnostics.
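The numbers quoted above can be checked directly with Bayes' Theorem (values taken from the example in the answer):

```python
prior = 0.01        # 1% disease prevalence, P(disease)
sensitivity = 0.90  # P(positive | disease)
fpr = 0.05          # P(positive | no disease) = 1 - specificity (0.95)

# Marginal probability of testing positive, P(E)
p_positive = prior * sensitivity + (1 - prior) * fpr
posterior = prior * sensitivity / p_positive
print(round(posterior, 3))  # 0.154 — only about a 15% chance of disease
```

Even though the test is right far more often than it is wrong, the 99% of healthy people generate enough false positives to outnumber the true positives.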

How is marginal probability P(E) calculated?

The marginal probability of the evidence P(E) accounts for all the ways the evidence could occur, whether the hypothesis is true or not. It is calculated as: P(E) = P(H) × P(E|H) + P(¬H) × P(E|¬H). This value is used as the denominator in Bayes' formula and ensures the posterior probabilities sum to one.

Can I use this calculator for A/B testing?

Yes. In a Bayesian A/B testing context, the prior probability is your initial belief that the variant outperforms the control, the likelihood when true is the probability of observing your result if the variant really is better (roughly the test's statistical power), and the likelihood when false is the false positive rate. The posterior probability then tells you the updated probability that your variant is genuinely better than the control.
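As a sketch, with illustrative numbers (a neutral prior, a conventional 80% power, and a 5% false positive rate — none of these come from the calculator itself):

```python
prior = 0.5    # prior belief the variant beats the control
power = 0.80   # P(significant result | variant really is better)
alpha = 0.05   # P(significant result | no real difference)

# Marginal probability of seeing a significant result, P(E)
p_sig = prior * power + (1 - prior) * alpha
posterior = prior * power / p_sig
print(round(posterior, 3))  # 0.941 — a significant result is strong evidence here
```

With a more skeptical prior (say 0.1), the same significant result would yield a much lower posterior, which is exactly the base-rate effect described in the medical-test question above.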

More Statistics Tools