ROC Curve Calculator

Enter your sensitivity and specificity values to compute ROC curve metrics. Provide a single operating point or a series of threshold-based values, and get back the AUC (area under the curve), its lower and upper bounds, the true positive rate, and the false positive rate. The chart plots your ROC curve so you can evaluate classifier performance at a glance.

Proportion of actual positives correctly identified (0 to 1).

Proportion of actual negatives correctly identified (0 to 1).

Relative cost of a false positive classification.

Relative cost of a false negative classification.

Results

Estimated AUC

--

AUC Lower Bound

--

AUC Upper Bound

--

True Positive Rate (Sensitivity)

--

False Positive Rate (1 − Specificity)

--

Cost-Weighted Slope at Operating Point

--

ROC Curve

Results Table

Frequently Asked Questions

What is an ROC curve?

A Receiver Operating Characteristic (ROC) curve is a graphical representation of a binary classifier's performance across all discrimination thresholds. It plots the True Positive Rate (sensitivity) on the Y-axis against the False Positive Rate (1 − specificity) on the X-axis. A perfect classifier hugs the top-left corner, while a random classifier follows the diagonal line.
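As a concrete illustration of how the curve is traced, the sketch below (plain Python, not this calculator's code; `roc_points` is a hypothetical helper) sweeps a threshold over predicted scores and records the (FPR, TPR) pair at each step:

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs as the threshold sweeps over the scores.

    scores: predicted scores (higher = more positive); labels: 1/0 truth.
    A standard-library sketch, not this calculator's implementation.
    """
    P = sum(labels)            # number of actual positives
    N = len(labels) - P        # number of actual negatives
    # Sort by descending score; lowering the threshold past each point
    # admits one more prediction as "positive".
    order = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, y in order:
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / N, tp / P))
    return points
```

Each distinct threshold contributes one point; connecting them from (0, 0) to (1, 1) gives the empirical ROC curve.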

What is AUC and how is it interpreted?

AUC stands for Area Under the ROC Curve. It ranges from 0 to 1: an AUC of 1.0 indicates a perfect classifier, 0.5 indicates no discriminative ability (equivalent to random guessing), and values below 0.5 suggest the model is performing worse than chance. Generally, AUC ≥ 0.8 is considered good and AUC ≥ 0.9 is considered excellent.
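The AUC also has a probabilistic reading: it equals the probability that a randomly chosen positive example scores higher than a randomly chosen negative one, with ties counted as one half. A minimal sketch of that rank-based (Mann-Whitney) computation, using a hypothetical `auc_rank` helper:

```python
def auc_rank(scores, labels):
    """AUC as P(score of a random positive > score of a random negative),
    counting ties as 1/2 (the Mann-Whitney U interpretation)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating scorer gives 1.0; shuffled scores hover around 0.5, matching the interpretation above.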

When is an ROC curve used?

ROC curves are used whenever you need to evaluate a binary classification model, for example in medical diagnostics (disease vs. no disease), fraud detection, or spam filtering. They are especially useful when the class distribution is imbalanced, because the true and false positive rates are each computed within a single class and therefore do not depend on class prevalence.

How is the AUC estimated from a single sensitivity/specificity point?

Using the geometric method of Zhang and Mueller (2005), the AUC can be bounded from a single operating point. The lower bound is the area of the quadrilateral with vertices (0,0), (FPR, TPR), (1,1), and (1,0), which equals (sensitivity + specificity) / 2. The upper bound depends on which region the point falls in relative to the diagonal. The midpoint estimate averages these two bounds.
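The lower bound described above can be sketched directly. For the upper bound, one common geometric choice, assuming the point lies above the diagonal and the curve is concave, is 1 − (1 − sensitivity)(1 − specificity); that particular expression is an assumption here, since the calculator's region-dependent formula is not spelled out:

```python
def auc_bounds(sensitivity, specificity):
    """Single-point AUC bounds (point assumed above the diagonal).

    Lower bound: area of the quadrilateral (0,0)-(FPR,TPR)-(1,1)-(1,0),
    i.e. the region under the two chords, equal to (Se + Sp) / 2.
    Upper bound: area under the rectangular path through the point,
    1 - (1 - Se) * (1 - Sp)  [assumed form; the calculator may use a
    different region-dependent expression].
    """
    lower = (sensitivity + specificity) / 2.0
    upper = 1.0 - (1.0 - sensitivity) * (1.0 - specificity)
    midpoint = (lower + upper) / 2.0
    return lower, midpoint, upper
```

For sensitivity 0.8 and specificity 0.9 this gives bounds of 0.85 and 0.98, with a midpoint estimate of 0.915.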

What threshold should be used for a classifier?

The optimal threshold depends on the relative costs of false positives and false negatives. In cost-sensitive applications (e.g., medical diagnosis where missing a disease is far more costly than a false alarm), the threshold should be shifted to favor sensitivity. The cost-weighted slope at your operating point in this calculator helps quantify that trade-off.
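One standard way to quantify that trade-off is the slope of the iso-cost line in ROC space, m = (C_FP / C_FN) × ((1 − p) / p), where p is the prevalence of positives; the best threshold is where the ROC curve has this slope. The sketch below assumes this textbook formula with an explicit `prevalence` parameter; the calculator's own slope computation may fix or handle prevalence differently:

```python
def cost_weighted_slope(cost_fp, cost_fn, prevalence=0.5):
    """Slope of the iso-cost line in ROC space (standard result):
        m = (C_FP / C_FN) * ((1 - p) / p)
    With the default p = 0.5 this reduces to the cost ratio alone.
    `prevalence` is an assumed extra parameter, not a documented
    input of this calculator.
    """
    return (cost_fp / cost_fn) * ((1 - prevalence) / prevalence)
```

A small slope (costly false negatives, or high prevalence) pushes the optimal operating point toward the sensitive end of the curve; a large slope favors specificity.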

What is the difference between sensitivity and specificity?

Sensitivity (True Positive Rate) measures the proportion of actual positives correctly identified by the model. Specificity (True Negative Rate) measures the proportion of actual negatives correctly identified. Together, they characterize the operating point on the ROC curve. High sensitivity is critical when missing a positive is costly; high specificity matters when false alarms carry high cost.
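In confusion-matrix terms, each rate is computed within its own class. A minimal sketch, using a hypothetical `sens_spec` helper:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts.

    Sensitivity (TPR) = TP / (TP + FN), computed over actual positives.
    Specificity (TNR) = TN / (TN + FP), computed over actual negatives.
    """
    return tp / (tp + fn), tn / (tn + fp)
```

For example, 80 true positives with 20 false negatives and 90 true negatives with 10 false positives gives sensitivity 0.8 and specificity 0.9.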

How do I enter data for the ROC curve calculator?

Enter your sensitivity value (between 0 and 1) and specificity value (between 0 and 1) representing a single operating point of your classifier. You can also adjust the confidence level and the relative costs of false positives and false negatives. The calculator will compute the AUC bounds and display the ROC curve chart and coordinate table automatically.

What does the confidence interval around AUC mean?

The AUC confidence interval gives a range within which the true AUC likely falls, given the uncertainty from a single operating point. A 95% confidence interval means that if you repeated your measurement many times, 95% of the resulting intervals would contain the true AUC. Wider intervals indicate more uncertainty; more data points reduce this uncertainty.

More Statistics Tools