Prior Probability Calculator

Enter your prior probability P(H), the likelihood of the evidence given the hypothesis is true P(E|H), and the likelihood of the evidence given the hypothesis is false P(E|¬H) to calculate the posterior probability using Bayes' Theorem. You get back P(H|E) — the updated probability that your hypothesis is true after accounting for the evidence — plus a visual breakdown of the probability components.

The initial probability that the hypothesis is true before seeing the evidence. If unknown, use 0.5.

The probability of observing this evidence if the hypothesis IS true (e.g. test sensitivity).

The probability of observing this evidence if the hypothesis is NOT true (e.g. false positive rate).

Results

Posterior Probability P(H|E)

Posterior Probability (%)

Marginal Probability P(E)

Prior Complement P(¬H)

Probability Hypothesis is False P(¬H|E)

Posterior Probability Breakdown

Frequently Asked Questions

What is prior probability in Bayes' Theorem?

Prior probability, written as P(H), is your initial belief about how likely a hypothesis is to be true before you observe any new evidence. It represents existing knowledge or a best estimate — for example, the prevalence of a disease in the population before a test is performed. If you have no prior knowledge, a common default is 0.5 (50%).

What is Bayes' Theorem?

Bayes' Theorem is a mathematical formula that updates the probability of a hypothesis based on new evidence. The formula is: P(H|E) = [P(E|H) × P(H)] / P(E). It links prior probability, the likelihood of the evidence, and the resulting posterior probability. It is widely used in medical testing, machine learning, and scientific inference.
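The formula is easy to verify directly. Here is a minimal Python sketch of the same calculation the tool performs; the sample inputs (a 30% prior, 90% likelihood when true, 5% when false) are illustrative values, not defaults of this calculator:

```python
def bayes_posterior(prior, likelihood_true, likelihood_false):
    """Return P(H|E) given P(H), P(E|H), and P(E|¬H)."""
    # P(E) = P(H)·P(E|H) + P(¬H)·P(E|¬H)
    marginal = prior * likelihood_true + (1 - prior) * likelihood_false
    # P(H|E) = P(E|H)·P(H) / P(E)
    return (likelihood_true * prior) / marginal

# Example: 0.3 prior, 0.9 likelihood when true, 0.05 when false
print(round(bayes_posterior(0.3, 0.9, 0.05), 4))  # 0.8852
```

Note how the evidence raises the 30% prior to roughly an 89% posterior.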

What is the difference between prior and posterior probability?

Prior probability P(H) is what you believe before seeing the evidence, while posterior probability P(H|E) is the updated belief after incorporating the evidence. Bayesian analysis is essentially the process of moving from a prior to a posterior as new data is observed.

What is the marginal probability P(E)?

The marginal probability P(E) is the total probability of observing the evidence under all possible hypotheses. It is calculated as: P(E) = P(H) × P(E|H) + P(¬H) × P(E|¬H). This value normalizes the calculation so the posterior probabilities sum to 1.
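The normalization property can be checked numerically. A short Python sketch with illustrative inputs:

```python
prior = 0.3              # P(H)
likelihood_true = 0.9    # P(E|H)
likelihood_false = 0.05  # P(E|¬H)

# Total probability of the evidence across both hypotheses
marginal = prior * likelihood_true + (1 - prior) * likelihood_false

p_h_given_e = prior * likelihood_true / marginal          # P(H|E)
p_not_h_given_e = (1 - prior) * likelihood_false / marginal  # P(¬H|E)

print(round(marginal, 3))                       # 0.305
print(round(p_h_given_e + p_not_h_given_e, 9))  # 1.0
```

Dividing by P(E) is what guarantees the two posteriors sum to exactly 1.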

When should I use Bayes' Theorem?

Use Bayes' Theorem whenever you want to update a probability estimate based on new evidence. Common applications include interpreting medical test results (accounting for false positives), spam email filtering, risk assessment, and Bayesian machine learning models. It is especially valuable when base rates (prior probabilities) are low.

How does a low prior probability affect the posterior?

Even with a highly accurate test or strong evidence, a very low prior probability can keep the posterior probability surprisingly low. This is the base rate fallacy — for example, a 99% accurate test for a disease affecting 1 in 10,000 people still produces mostly false positives when applied to the general population.
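The arithmetic behind that example can be worked through in a few lines of Python. Here "99% accurate" is read as both 99% sensitivity and a 1% false positive rate — an interpretive assumption for illustration:

```python
prior = 1 / 10_000       # disease prevalence, P(H)
sensitivity = 0.99       # P(E|H)
false_positive = 0.01    # P(E|¬H)

marginal = prior * sensitivity + (1 - prior) * false_positive
posterior = prior * sensitivity / marginal

print(round(posterior, 4))  # 0.0098
```

Despite the test being right 99% of the time, a positive result implies less than a 1% chance of disease, because the healthy population generates far more false positives than the sick population generates true positives.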

What values should I enter if I have no prior knowledge?

If you have no prior information about whether the hypothesis is true or false, use a prior probability of 0.5. This represents maximum uncertainty — a 50/50 starting point — and lets the likelihoods of the evidence drive the posterior probability entirely.

Can I enter probabilities as percentages?

This calculator accepts probabilities as decimals between 0 and 1 (e.g. enter 0.3 for 30%). Make sure all three inputs — prior probability, likelihood when true, and likelihood when false — are expressed this way. The result is also returned as a decimal, with a percentage equivalent shown separately.

More Statistics Tools