AUC Calculator (Area Under Curve)

Enter your true positive rate (TPR/Sensitivity) and false positive rate (FPR/1-Specificity) data points to compute the Area Under the ROC Curve (AUC). Paste comma-separated FPR and TPR values, and this AUC Calculator returns the AUC score, AUC interpretation, and a visual ROC curve chart showing your classifier's discrimination ability.

Enter FPR (1 - Specificity) values separated by commas. Must start with 0 and end with 1.

Enter TPR (Sensitivity) values separated by commas. Must match the number of FPR values.

Select the confidence level for AUC interval estimation.

Used to estimate the confidence interval. Leave at default if unknown.

Used to estimate the confidence interval. Leave at default if unknown.

Results

AUC Score

--

Classification Performance

--

95% CI Lower Bound

--

95% CI Upper Bound

--

Number of Data Points

--

ROC Curve (TPR vs FPR)

Results Table

Frequently Asked Questions

What is the Area Under the ROC Curve (AUC)?

The AUC (Area Under the Receiver Operating Characteristic Curve) measures the overall ability of a classification model to discriminate between positive and negative outcomes. It ranges from 0 to 1: an AUC of 0.5 corresponds to no discrimination (equivalent to random guessing), 1.0 indicates perfect discrimination, and values below 0.5 indicate a model performing worse than chance. A higher AUC indicates a better-performing model.

What do the FPR and TPR values represent?

FPR (False Positive Rate), also called 1 - Specificity, is the proportion of actual negatives incorrectly classified as positive. TPR (True Positive Rate), also called Sensitivity or Recall, is the proportion of actual positives correctly identified. Together they define each point on the ROC curve at a given classification threshold.
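As a quick illustration of these definitions, the sketch below computes TPR and FPR from confusion-matrix counts at a single threshold (the function name and example counts are illustrative, not part of this tool):

```python
def rates(tp, fp, tn, fn):
    """Compute TPR (Sensitivity) and FPR (1 - Specificity) from confusion-matrix counts."""
    tpr = tp / (tp + fn)  # proportion of actual positives correctly identified
    fpr = fp / (fp + tn)  # proportion of actual negatives incorrectly flagged positive
    return tpr, fpr

# Example: 80 true positives, 20 false negatives, 90 true negatives, 10 false positives
tpr, fpr = rates(tp=80, fp=10, tn=90, fn=20)
# tpr = 0.8, fpr = 0.1
```

Sweeping the classification threshold and recomputing these two rates at each setting traces out the full ROC curve.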

How do I interpret the AUC score?

AUC values are interpreted as follows: 0.90–1.00 = Excellent, 0.80–0.90 = Good, 0.70–0.80 = Fair, 0.60–0.70 = Poor, and 0.50–0.60 = Fail (no better than chance). An AUC of 0.85 means there is an 85% chance the model will correctly distinguish a randomly chosen positive case from a randomly chosen negative case.
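The interpretation bands above can be expressed as a simple lookup; this is a minimal sketch of that mapping, not the calculator's actual code:

```python
def interpret_auc(auc):
    """Map an AUC score to the interpretation bands listed above."""
    if auc >= 0.90:
        return "Excellent"
    if auc >= 0.80:
        return "Good"
    if auc >= 0.70:
        return "Fair"
    if auc >= 0.60:
        return "Poor"
    return "Fail"

interpret_auc(0.85)  # "Good"
```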

How is the AUC calculated in this tool?

This calculator uses the Trapezoidal Rule to estimate the AUC. Each consecutive pair of (FPR, TPR) points forms a trapezoid, and the sum of all trapezoid areas gives the total AUC. This is the standard numerical integration method used in most statistical software and ROC analysis tools.
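The trapezoidal computation described above can be sketched in a few lines of Python (a minimal illustration of the method, not this tool's implementation):

```python
def trapezoidal_auc(fpr, tpr):
    """Sum the areas of the trapezoids between consecutive (FPR, TPR) points."""
    auc = 0.0
    for i in range(1, len(fpr)):
        width = fpr[i] - fpr[i - 1]            # step along the FPR axis
        avg_height = (tpr[i] + tpr[i - 1]) / 2  # average TPR over the step
        auc += width * avg_height
    return auc

fpr = [0, 0.1, 0.3, 1]
tpr = [0, 0.6, 0.8, 1]
trapezoidal_auc(fpr, tpr)  # ≈ 0.8
```

Note that the formula assumes FPR values are sorted in ascending order, which is why the input fields above require it.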

What is the confidence interval for AUC?

The confidence interval provides a range within which the true population AUC likely falls at your chosen confidence level (e.g., 95%). It is estimated using the Hanley-McNeil method, which requires the number of positive and negative cases. Wider intervals indicate more uncertainty, typically due to smaller sample sizes.
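A sketch of the Hanley-McNeil standard error and the resulting normal-approximation interval is shown below (assuming the standard Q1/Q2 formulation; function names are illustrative):

```python
import math
from statistics import NormalDist

def hanley_mcneil_ci(auc, n_pos, n_neg, level=0.95):
    """Hanley-McNeil standard error and normal-approximation CI for an AUC estimate."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    se = math.sqrt((auc * (1 - auc)
                    + (n_pos - 1) * (q1 - auc ** 2)
                    + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg))
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)  # e.g. ~1.96 for a 95% level
    return max(0.0, auc - z * se), min(1.0, auc + z * se)

lower, upper = hanley_mcneil_ci(auc=0.85, n_pos=50, n_neg=50)
```

Because the standard error shrinks as the number of positive and negative cases grows, larger samples produce tighter intervals, as noted above.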

What threshold should be used to maximize model performance?

The optimal threshold is usually found at the point on the ROC curve closest to the top-left corner (0,1), which balances Sensitivity and Specificity; a common alternative criterion is Youden's J statistic (Sensitivity + Specificity − 1). However, the ideal threshold depends on the relative costs of false positives vs. false negatives in your specific application. In medical contexts, missing a positive case (false negative) is often far more costly than a false positive.
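The closest-to-corner criterion amounts to minimizing the Euclidean distance from each ROC point to (0, 1); a minimal sketch (illustrative names, not this tool's code):

```python
import math

def closest_to_corner(fpr, tpr):
    """Index of the ROC point with the smallest distance to the top-left corner (0, 1)."""
    distances = [math.hypot(f, 1 - t) for f, t in zip(fpr, tpr)]
    return distances.index(min(distances))

fpr = [0, 0.1, 0.3, 1]
tpr = [0, 0.6, 0.8, 1]
closest_to_corner(fpr, tpr)  # index 2, the point (0.3, 0.8)
```

If false negatives are much costlier than false positives, you would instead weight the two error rates accordingly rather than use the raw geometric distance.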

How many data points do I need to enter?

You need at least 3 data points (including the origin at FPR=0, TPR=0 and the endpoint at FPR=1, TPR=1) to compute a meaningful AUC. More points give a more accurate and smoother ROC curve. The FPR and TPR arrays must have the same number of values, and FPR values should be in ascending order from 0 to 1.
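The input constraints described above can be checked with a small validation routine like this sketch (the function name and messages are illustrative):

```python
def validate_roc_points(fpr, tpr):
    """Return an error message if the (FPR, TPR) inputs are invalid, else None."""
    if len(fpr) != len(tpr):
        return "FPR and TPR must have the same number of values."
    if len(fpr) < 3:
        return "At least 3 points are required, including (0, 0) and (1, 1)."
    if fpr[0] != 0 or fpr[-1] != 1:
        return "FPR values must start at 0 and end at 1."
    if any(b < a for a, b in zip(fpr, fpr[1:])):
        return "FPR values must be in ascending order."
    return None

validate_roc_points([0, 0.5, 1], [0, 0.7, 1])  # None (valid input)
```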

What is the difference between AUC-ROC and AUC-PR?

AUC-ROC measures discrimination across all thresholds using FPR and TPR, and is suitable when class distributions are roughly balanced. AUC-PR (Area Under the Precision-Recall Curve) is preferred for highly imbalanced datasets where the positive class is rare, such as fraud detection or rare disease diagnosis. This calculator computes the AUC-ROC.

More Statistics Tools