False Positive Paradox (Medical)

Enter your test's sensitivity, specificity, and the disease prevalence (base rate) to see the False Positive Paradox in action. The calculator reveals the true Positive Predictive Value (PPV) — the actual probability you have the disease after a positive test — along with the breakdown of true positives, false positives, true negatives, and false negatives across a simulated population of 100,000 people.

Prevalence (%): The percentage of the population that actually has the disease. For rare diseases, this is very small (e.g. 0.1%).

Population: The number of people tested in the simulation.

Sensitivity (%): The probability that the test correctly identifies a person who HAS the disease. Formula: True Positives / (True Positives + False Negatives).

Specificity (%): The probability that the test correctly identifies a person who does NOT have the disease. Formula: True Negatives / (True Negatives + False Positives).

Results

The calculator reports:

Positive Predictive Value (PPV)
True Positives
False Positives
True Negatives
False Negatives
Negative Predictive Value (NPV)
False Positive Rate (of all positive tests)

Test Result Breakdown (Simulated Population)


Frequently Asked Questions

What is the false positive paradox?

The false positive paradox occurs when a test with high accuracy still produces more false positives than true positives when the disease being tested for is rare in the population. Even a test that is 99% accurate can be misleading if the condition affects only 0.1% of people — the majority of positive results can be false alarms. This is a direct consequence of base rate fallacy, where people ignore prevalence and focus only on test accuracy.
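The arithmetic behind this claim can be checked with a short sketch, assuming "99% accurate" means 99% sensitivity and 99% specificity:

```python
# A "99% accurate" test for a disease with 0.1% prevalence.
# Assumption: "99% accurate" means 99% sensitivity and 99% specificity.
population = 100_000
prevalence = 0.001
sensitivity = 0.99
specificity = 0.99

sick = population * prevalence                  # 100 people
healthy = population - sick                     # 99,900 people

true_positives = sick * sensitivity             # 99 correctly flagged
false_positives = healthy * (1 - specificity)   # 999 healthy people flagged

ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"PPV: {ppv:.1%}")  # about 9%: most positive results are false alarms
```

Even though the test is wrong only 1% of the time, the healthy group is so much larger that its 1% error dwarfs the correctly detected cases.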

What are the chances of a false positive HIV test?

Modern HIV tests have sensitivity and specificity both exceeding 99%. However, in a low-prevalence population (e.g. 0.1% prevalence), as few as 9–10% of positive results may actually be true positives — meaning around 90% could be false positives. In higher-prevalence populations (e.g. 1–5%), the PPV rises dramatically. This is why confirmatory testing is always recommended after an initial positive result.

What is the Positive Predictive Value (PPV) and how is it calculated?

PPV is the probability that a person who tests positive actually has the disease. Using Bayes' theorem: PPV = (Sensitivity × Prevalence) / [(Sensitivity × Prevalence) + (1 − Specificity) × (1 − Prevalence)]. It depends not just on test accuracy but critically on the base rate (prevalence) of the disease in the tested population.
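As a minimal sketch, the formula translates directly into a function; the two prevalence values below are illustrative:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive Predictive Value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same test, two different populations (illustrative numbers):
print(f"{ppv(0.99, 0.99, 0.001):.1%}")  # 0.1% prevalence: PPV about 9%
print(f"{ppv(0.99, 0.99, 0.05):.1%}")   # 5% prevalence: PPV about 84%
```

Holding the test fixed and changing only the prevalence moves the PPV from under 10% to over 80%, which is the whole point of the paradox.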

How do I calculate sensitivity and specificity?

Sensitivity = True Positives / (True Positives + False Negatives) — it measures how well a test detects people who actually have the disease. Specificity = True Negatives / (True Negatives + False Positives) — it measures how well a test correctly rules out people without the disease. Both are properties of the test itself and do not depend on prevalence.
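Both formulas sketched with hypothetical confusion-matrix counts from an imagined validation study:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of diseased people the test catches."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Share of healthy people the test clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical study counts:
# 100 diseased subjects (90 caught, 10 missed),
# 1,000 healthy subjects (950 cleared, 50 falsely flagged).
print(sensitivity(90, 10))   # 0.9
print(specificity(950, 50))  # 0.95
```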

What is the Negative Predictive Value (NPV)?

NPV is the probability that a person who tests negative truly does not have the disease. It is calculated as: NPV = (Specificity × (1 − Prevalence)) / [(Specificity × (1 − Prevalence)) + (1 − Sensitivity) × Prevalence]. In low-prevalence diseases, NPV tends to be very high, meaning a negative test is very reassuring.
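The NPV formula follows the same Bayesian pattern as the PPV; a minimal sketch with illustrative inputs:

```python
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative Predictive Value via Bayes' theorem."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# At 0.1% prevalence a negative result is very reassuring:
print(f"{npv(0.99, 0.99, 0.001):.4%}")  # over 99.99%
```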

Which is an example of base rate fallacy?

A classic example: a doctor tells a patient they tested positive for a rare disease affecting 1 in 1,000 people, using a test that is 99% accurate. Most people, including many doctors, assume there is a 99% chance the patient has the disease. In reality, because the disease is so rare, the true probability is closer to 9%: false positives vastly outnumber true positives. Ignoring the low prevalence (base rate) is the base rate fallacy.

How can you overcome the false positive paradox?

Two main strategies help: (1) Use confirmatory testing — if an initial positive result is followed by a second independent test, the combined probability of being a true positive rises dramatically. (2) Screen higher-risk populations — targeting testing at groups with higher prevalence raises the base rate and therefore the PPV, making positive results more meaningful and reducing false positive rates.
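Strategy (1) can be sketched as a repeated Bayesian update, where the posterior after the first positive becomes the prior for the second test. This assumes the two tests are independent and equally accurate, which is an idealization:

```python
def posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease after one positive result (Bayes update)."""
    hit = sensitivity * prior
    false_alarm = (1 - specificity) * (1 - prior)
    return hit / (hit + false_alarm)

# Assumes two independent tests, each 99% sensitive and 99% specific.
base_rate = 0.001
after_first = posterior(base_rate, 0.99, 0.99)     # about 9%
after_second = posterior(after_first, 0.99, 0.99)  # about 91%
print(f"{after_first:.1%} -> {after_second:.1%}")
```

The second positive is evaluated against a ~9% prior instead of a 0.1% one, which is why confirmatory testing is so effective.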

Why does disease prevalence affect the reliability of a positive test?

Even with a highly specific test, when a disease is very rare, the number of healthy people who test false-positive can greatly exceed the number of truly sick people who test true-positive. For example, when testing 1,000,000 people of whom only 100 have the disease, even a 99% specific test will incorrectly flag about 9,999 healthy people as positive, swamping the roughly 99 true positives a 99% sensitive test would find. Prevalence is the hidden lever that drives PPV.
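These counts can be verified directly; 99% sensitivity is assumed here for the true-positive figure, since the example only states the specificity:

```python
# Verifying the counts in the text. The example states 99% specificity;
# 99% sensitivity is assumed here for the true-positive count.
population = 1_000_000
sick = 100
healthy = population - sick                    # 999,900
specificity = 0.99
sensitivity = 0.99

false_positives = healthy * (1 - specificity)  # about 9,999
true_positives = sick * sensitivity            # 99
ppv = true_positives / (true_positives + false_positives)
print(f"FP: {false_positives:.0f}  TP: {true_positives:.0f}  PPV: {ppv:.2%}")
```

At this extreme prevalence (0.01%), the PPV falls below 1%: over 99 in 100 positive results are false alarms.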
