Bayes' Theorem Calculator

Enter your prior probability P(A), likelihood P(B|A), and complement likelihood P(B|¬A) to compute the posterior probability P(A|B) using Bayes' Theorem. The calculator also returns the marginal probability P(B) and P(¬A|B), with a visual breakdown of how evidence updates your belief.

The initial probability of hypothesis A before seeing any evidence. Must be between 0 and 1.

Probability of observing evidence B if hypothesis A is true (e.g. the true positive rate, or sensitivity, of a test).

Probability of observing evidence B if hypothesis A is false (e.g. the false positive rate of a test).

Results

Posterior Probability P(A|B)

P(¬A|B)

Marginal Probability P(B)

Prior P(¬A)

Posterior P(A|B) as %

Posterior Probability Breakdown

Frequently Asked Questions

What is Bayes' Theorem?

Bayes' Theorem is a mathematical formula that describes how to update the probability of a hypothesis based on new evidence. It combines a prior belief (P(A)) with the likelihood of observing the evidence (P(B|A)) to produce a revised posterior probability (P(A|B)). It is widely used in medical testing, machine learning, spam filtering, and statistical inference.

What is the Bayes' Theorem formula?

The formula is: P(A|B) = [P(B|A) × P(A)] / P(B), where P(B) = P(A) × P(B|A) + P(¬A) × P(B|¬A). P(A) is the prior probability, P(B|A) is the likelihood, P(B|¬A) is the false positive rate, and P(A|B) is the resulting posterior probability.
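The formula above can be sketched in a few lines of Python (the function name is illustrative):

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """Return P(A|B) from P(A), P(B|A), and P(B|¬A) via Bayes' Theorem."""
    # Marginal probability of the evidence: P(B) = P(A)·P(B|A) + P(¬A)·P(B|¬A)
    marginal = prior * likelihood + (1 - prior) * false_positive_rate
    # Posterior: P(A|B) = P(B|A) · P(A) / P(B)
    return likelihood * prior / marginal

# Example: 5% prior, 90% sensitivity, 10% false positive rate
print(bayes_posterior(0.05, 0.9, 0.1))  # ≈ 0.321
```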

When should I use Bayes' Theorem?

Use Bayes' Theorem whenever you want to update a probability estimate based on new evidence. Common scenarios include interpreting medical test results, assessing the reliability of a classification model, and making decisions under uncertainty where prior knowledge is available.

What is a prior probability?

The prior probability P(A) represents your initial belief in a hypothesis before observing any evidence. For example, if 5% of people in a population have a disease, P(A) = 0.05 is the prior probability that a randomly selected person has it.

What is the difference between likelihood and posterior probability?

The likelihood P(B|A) is the probability of seeing the evidence assuming the hypothesis is true, for example the sensitivity of a medical test. The posterior P(A|B) is what you actually want to know: given that the evidence was observed, how probable is the hypothesis? Bayes' Theorem converts the likelihood into the posterior.

Why can a highly accurate test still give many false positives?

When the prior probability of a condition is very low (a rare disease), even a test with high sensitivity and specificity can produce a surprisingly low posterior probability. Because the healthy population is so much larger than the sick one, its small false positive rate still generates more false positives than the few true cases generate true positives, so most positive results may be false. This is called the base rate fallacy and is a key insight from Bayes' Theorem.
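To make this concrete with assumed numbers, consider a disease with 0.1% prevalence and a test with 99% sensitivity and a 1% false positive rate:

```python
prior = 0.001        # P(A): 0.1% of the population has the disease
sensitivity = 0.99   # P(B|A): the test detects 99% of true cases
false_pos = 0.01     # P(B|¬A): 1% of healthy people test positive

marginal = prior * sensitivity + (1 - prior) * false_pos   # P(B)
posterior = prior * sensitivity / marginal                 # P(A|B)
print(f"P(disease | positive) = {posterior:.1%}")  # ≈ 9.0%
```

Despite the 99% sensitivity, a positive result implies only about a 9% chance of disease: the 999 healthy people per 1,000 generate far more false positives than the one sick person generates true positives.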

What if Bayes' Theorem gives a probability greater than 1?

With valid inputs, Bayes' Theorem can never produce a probability greater than 1, because the numerator P(B|A) × P(A) is one of the two non-negative terms that sum to the denominator P(B). A result greater than 1 therefore indicates invalid input values. Double-check that P(A), P(B|A), and P(B|¬A) are each within the [0, 1] range.
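A minimal input check in Python (the function name is illustrative) might look like:

```python
def validate_probabilities(prior, likelihood, false_positive_rate):
    """Raise ValueError unless every input is a valid probability in [0, 1]."""
    inputs = {"P(A)": prior, "P(B|A)": likelihood, "P(B|¬A)": false_positive_rate}
    for name, value in inputs.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1, got {value}")

validate_probabilities(0.05, 0.9, 0.1)   # valid inputs pass silently
```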

What are real-life applications of Bayes' Theorem?

Bayes' Theorem underpins spam email filters, medical diagnosis, weather forecasting, fraud detection, and Bayesian machine learning models. Any situation where you need to revise a probability estimate in light of new data is a candidate for Bayesian reasoning.
