False Positive Paradox Calculator

Enter a test's sensitivity, specificity, and disease prevalence to uncover the false positive paradox. See how even a highly accurate test can produce more false alarms than true positives in a low-prevalence population. Your results include Positive Predictive Value (PPV), false positive rate, true/false counts per 10,000 people, and a breakdown chart showing exactly why 'accurate' can still be misleading.

Inputs

- Prevalence (%): What percentage of the population actually has the condition? E.g. 1% means 1 in 100 people are truly positive.
- Sensitivity (%): The probability the test correctly identifies a person who has the condition. Also called recall or true positive rate.
- Specificity (%): The probability the test correctly identifies a person who does NOT have the condition. A low specificity means more false positives.
- Population size: Total number of people tested. Used to show absolute counts of true/false positives and negatives.

Results

The calculator reports:

- Positive Predictive Value (PPV)
- False Positive Rate
- Negative Predictive Value (NPV)
- True Positives (in population)
- False Positives (in population)
- True Negatives (in population)
- False Negatives (in population)

Breakdown of All Test Results in Population

Results Table

Frequently Asked Questions

What is the false positive paradox?

The false positive paradox occurs when a test with high accuracy still produces more false positives than true positives in a population where the condition is rare. Even a 99% accurate test will flag many healthy people as sick when the disease prevalence is very low, because there are so many more healthy people to falsely flag. This is a classic example of the base rate fallacy.
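The arithmetic behind the paradox is easy to check. A quick sketch (the 99%/99% figures and 0.1% prevalence are illustrative, not from any particular test):

```python
sens, spec, prev, n = 0.99, 0.99, 0.001, 100_000

true_positives = sens * prev * n               # sick people correctly flagged
false_positives = (1 - spec) * (1 - prev) * n  # healthy people falsely flagged

print(round(true_positives))   # 99
print(round(false_positives))  # 999: ten false alarms for every real case
```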

How do I calculate false positives?

False positives are calculated as: False Positives = (1 − Specificity) × (1 − Prevalence) × Population. For example, if specificity is 98% and prevalence is 1% in a population of 10,000, then False Positives = 0.02 × 0.99 × 10,000 = 198 people incorrectly diagnosed as sick.

What is the Positive Predictive Value (PPV) and how is it calculated?

PPV is the probability that a person who tests positive actually has the condition. It is calculated using Bayes' theorem: PPV = (Sensitivity × Prevalence) / [(Sensitivity × Prevalence) + ((1 − Specificity) × (1 − Prevalence))]. A low PPV means most positive test results are false alarms.
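The formula above translates directly into code; a minimal sketch (the example inputs are illustrative):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive Predictive Value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 95% sensitivity, 98% specificity, 1% prevalence
print(f"{ppv(0.95, 0.98, 0.01):.1%}")  # 32.4%: two thirds of positives are false alarms
```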

What is the false positive rate?

The false positive rate equals 1 − Specificity. It represents the proportion of healthy people who are incorrectly identified as having the condition. For example, a specificity of 98% means a false positive rate of 2%. In a large population with low disease prevalence, even a 2% false positive rate can generate thousands of incorrect diagnoses.
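To see the scale effect, a quick check (the million-person population is chosen for illustration):

```python
specificity = 0.98
false_positive_rate = 1 - specificity   # 0.02

# Screening 1,000,000 people at 1% prevalence:
healthy = (1 - 0.01) * 1_000_000        # 990,000 people without the condition
false_positives = false_positive_rate * healthy
print(round(false_positives))  # 19800 incorrect diagnoses from a "2% error" test
```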

What are the chances of a false positive HIV test?

HIV tests have very high sensitivity and specificity (often above 99%). However, because HIV prevalence in many general populations is below 1%, a single positive result can still have a relatively low PPV — meaning a significant proportion of positive tests in low-risk groups may be false positives. Confirmatory testing with a different method is always recommended.

What is the base rate fallacy?

The base rate fallacy (or base rate neglect) is the tendency to focus on specific test accuracy statistics like sensitivity and specificity while ignoring how rare the condition is in the population being tested. When prevalence is low, even a highly specific test will produce many false positives relative to true positives — a fact that is frequently overlooked.

How do sensitivity and specificity differ from PPV and NPV?

Sensitivity and specificity are fixed properties of the test itself — they describe how well it performs on sick and healthy individuals respectively. PPV and NPV, however, depend on disease prevalence in the tested population. The same test can have a very different PPV when applied to a high-risk versus a low-risk group, which is why prevalence context is critical.
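Concretely, the same hypothetical test (99% sensitivity, 98% specificity) gives very different PPVs in a low-risk and a high-risk group; a short sketch:

```python
def ppv(sens, spec, prev):
    """Positive Predictive Value from Bayes' theorem."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

low_risk = ppv(0.99, 0.98, 0.001)   # 0.1% prevalence (general screening)
high_risk = ppv(0.99, 0.98, 0.20)   # 20% prevalence (symptomatic referrals)
print(f"{low_risk:.1%}")   # 4.7%: most positives are false alarms
print(f"{high_risk:.1%}")  # 92.5%: most positives are real
```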

How can the false positive paradox be reduced?

Two main strategies help: first, improve the test's specificity so fewer healthy people are misclassified; second, apply the test only to higher-risk populations where prevalence is greater, which raises the PPV substantially. Sequential testing (confirming a positive with a second independent test) also dramatically reduces the probability that a positive result is a false alarm.
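Sequential testing can be sketched by feeding the PPV after the first positive back in as the prevalence for the second test. This assumes the two tests err independently, which is an idealization; correlated errors would help less:

```python
def ppv(sens, spec, prev):
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.99, 0.98   # illustrative test characteristics
after_one = ppv(sens, spec, 0.001)      # first positive at 0.1% prevalence
after_two = ppv(sens, spec, after_one)  # second independent positive
print(f"{after_one:.1%} -> {after_two:.1%}")  # 4.7% -> 71.0%
```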
