Bayes' Theorem Calculator

Enter the prior probability P(A), marginal probability P(B), and likelihood P(B|A) to compute the posterior probability P(A|B) using Bayes' Theorem. You can also choose to solve for any of the four probabilities (P(A), P(B), P(A|B), or P(B|A)) given the remaining three values. Results include a step-by-step breakdown and a visual chart of how the prior and likelihood combine into the posterior.

Select which probability you want to calculate.

The initial probability of event A before seeing the evidence. Enter a value between 0 and 1.

The total probability of observing evidence B. Enter a value between 0 and 1.

The probability of observing evidence B given that hypothesis A is true.

The probability of A given B. Required only when solving for P(A), P(B), or P(B|A).

Results

Calculated Probability

--

Result as Percentage

--

P(not A)

--

P(B|not A)

--

Solving For

--

Probability Breakdown

Frequently Asked Questions

What is Bayes' Theorem?

Bayes' Theorem is a mathematical formula for updating the probability of a hypothesis based on new evidence. It relates the posterior probability P(A|B) to the prior probability P(A), the likelihood P(B|A), and the marginal probability P(B). Named after Reverend Thomas Bayes, it is a foundational concept in statistics, machine learning, and scientific inference.

What is the Bayes' Theorem formula?

The core formula is: P(A|B) = [P(B|A) × P(A)] / P(B). Equivalently, since P(B) = P(A) × P(B|A) + P(not A) × P(B|not A), you can expand the denominator when P(B) is not directly known. Each term has a specific role: P(A) is the prior, P(B|A) is the likelihood, P(B) is the marginal probability, and P(A|B) is the posterior.
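As a sketch, the formula with the expanded denominator can be written directly in Python (the function and argument names here are illustrative, not the calculator's actual code):

```python
def bayes_posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Compute P(A|B) = P(B|A) * P(A) / P(B), expanding the denominator as
    P(B) = P(A) * P(B|A) + P(not A) * P(B|not A)."""
    p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    return (p_b_given_a * p_a) / p_b

# Example: 1% prior, 90% likelihood under A, 10% likelihood under not-A
print(bayes_posterior(0.01, 0.90, 0.10))
```

The expanded form is what makes the calculator usable when you know the likelihoods under both hypotheses but not P(B) directly.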

When should I use Bayes' Theorem?

Use Bayes' Theorem whenever you want to update a belief or probability estimate in light of new evidence. Common applications include interpreting medical test results (e.g., the probability of having a disease given a positive test), spam email filtering, machine learning classifiers, and any scenario involving conditional probabilities where you know some related probabilities and need to infer others.

How do I use this Bayes' Theorem Calculator?

First, select what you want to solve for using the 'Solve For' dropdown — P(A|B), P(B|A), P(A), or P(B). Then enter the three known probability values in the corresponding fields. All probabilities must be between 0 and 1. The calculator instantly computes the unknown probability along with a percentage representation and a visual chart.
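Under the hood, each 'Solve For' mode is an algebraic rearrangement of the same identity, P(A|B) × P(B) = P(B|A) × P(A). A minimal sketch of the four modes (function and parameter names are assumptions, not the calculator's internals):

```python
def solve_bayes(target, p_a=None, p_b=None, p_a_given_b=None, p_b_given_a=None):
    """Rearrange P(A|B) * P(B) = P(B|A) * P(A) to solve for the chosen term."""
    if target == "P(A|B)":
        return p_b_given_a * p_a / p_b
    if target == "P(B|A)":
        return p_a_given_b * p_b / p_a
    if target == "P(A)":
        return p_a_given_b * p_b / p_b_given_a
    if target == "P(B)":
        return p_b_given_a * p_a / p_a_given_b
    raise ValueError(f"unknown target: {target}")

# Solving for the posterior: P(A|B) = 0.6 * 0.3 / 0.4 = 0.45
posterior = solve_bayes("P(A|B)", p_a=0.3, p_b=0.4, p_b_given_a=0.6)
```

Each branch divides by one of the inputs, which is why the corresponding field must be nonzero for that mode.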

Why can a medical test seem unreliable even with high accuracy?

This is the classic base-rate fallacy. If a disease affects only 1% of the population (a low prior P(A)) and the test is 90% accurate (say, 90% sensitivity and 90% specificity), a positive result still indicates true disease only about 8% of the time, because most positive results come from the much larger healthy population. Bayes' Theorem correctly accounts for the base rate and shows why even high-accuracy tests can yield many false positives in low-prevalence settings.
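The numbers in this example can be checked directly, assuming "90% accurate" means 90% sensitivity and 90% specificity:

```python
prior = 0.01            # P(disease): 1% prevalence
sensitivity = 0.90      # P(positive | disease)
false_positive = 0.10   # P(positive | healthy), i.e. 90% specificity

# Marginal probability of testing positive (law of total probability)
p_positive = prior * sensitivity + (1 - prior) * false_positive

# Posterior: P(disease | positive)
posterior = prior * sensitivity / p_positive
print(round(posterior, 3))  # about 0.083: only ~8% of positives are true cases
```

Of the 0.108 probability mass of positive results, 0.099 comes from healthy people, which is exactly why the posterior stays low.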

What is the difference between prior and posterior probability?

The prior probability P(A) is your belief about an event before observing any evidence — it reflects existing knowledge or base rates. The posterior probability P(A|B) is the updated belief after incorporating the new evidence B. Bayes' Theorem is the mechanism that transforms the prior into the posterior using the likelihood of the evidence.
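This update can also be chained: today's posterior becomes tomorrow's prior. A small sketch using the disease-screening numbers from the earlier question (1% prior, 90% sensitivity, and an assumed 10% false-positive rate):

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update: return the posterior after observing the evidence."""
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

belief = 0.01                       # prior before any test
belief = update(belief, 0.9, 0.1)   # posterior after one positive test (~0.083)
belief = update(belief, 0.9, 0.1)   # that posterior is the new prior; a second
                                    # independent positive test raises it to 0.45
```

A single positive test barely moves the needle against a 1% base rate, but repeated independent evidence compounds quickly.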

What happens if Bayes' Theorem produces a probability greater than 1?

A result greater than 1 means the input values are inconsistent — for example, P(B) is set too low relative to P(A) and P(B|A). All valid probabilities must be between 0 and 1, and P(B) must satisfy P(B) ≥ P(A) × P(B|A). Double-check that your inputs are logically consistent and properly normalized before interpreting results.
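A calculator can guard against this with a validity check before dividing; a minimal sketch (the helper name is illustrative):

```python
def inputs_consistent(p_a, p_b, p_b_given_a, tol=1e-9):
    """Check each value lies in [0, 1] and that P(B) can cover the joint
    probability P(A and B) = P(A) * P(B|A)."""
    in_range = all(0.0 <= p <= 1.0 for p in (p_a, p_b, p_b_given_a))
    return in_range and p_b + tol >= p_a * p_b_given_a

inputs_consistent(0.6, 0.2, 0.9)  # False: P(A and B) = 0.54 exceeds P(B) = 0.2
inputs_consistent(0.1, 0.5, 0.9)  # True:  P(A and B) = 0.09 fits within P(B)
```

When the check fails, the honest fix is to revisit the inputs rather than clamp the output, since a posterior above 1 signals contradictory assumptions, not a rounding issue.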

What is Bayesian inference used for in real life?

Bayesian inference powers a wide range of real-world applications: medical diagnosis and clinical decision support, spam and malware detection, natural language processing and text classification, A/B testing and conversion rate optimization, financial risk modeling, and scientific hypothesis testing. Any field that requires reasoning under uncertainty can benefit from the Bayesian framework.

More Statistics Tools