Matthews Correlation Coefficient Calculator

Enter your True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) from a binary classification model to calculate the Matthews Correlation Coefficient (MCC). You also get Sensitivity, Specificity, Precision, and Accuracy — a full picture of your classifier's performance in one place.

True Positives (TP): cases correctly predicted as positive

True Negatives (TN): cases correctly predicted as negative

False Positives (FP): cases incorrectly predicted as positive (Type I error)

False Negatives (FN): cases incorrectly predicted as negative (Type II error)

Results

Matthews Correlation Coefficient (MCC)

Sensitivity (Recall)

Specificity

Precision (PPV)

Accuracy

Total Observations


Frequently Asked Questions

What is the Matthews Correlation Coefficient (MCC)?

The Matthews Correlation Coefficient is a measure of the quality of binary classifications. Introduced by biochemist Brian W. Matthews in 1975, it accounts for true and false positives and negatives simultaneously, returning a value between -1 and +1. A value of +1 indicates a perfect prediction, 0 means no better than random chance, and -1 indicates total disagreement between prediction and observation.

How is the Matthews Correlation Coefficient calculated?

The MCC formula is: MCC = [(TP × TN) − (FP × FN)] / √[(TP + FP)(TP + FN)(TN + FP)(TN + FN)]. You need four values from a confusion matrix: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). If the denominator is zero, MCC is defined as 0 to avoid division errors.
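The formula translates directly into a few lines of Python. This is a minimal sketch; the function name and the example counts are illustrative, not part of the calculator:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from a 2x2 confusion matrix."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # By convention, MCC is defined as 0 when the denominator is zero.
    return numerator / denominator if denominator else 0.0

print(mcc(90, 85, 15, 10))  # ~0.75, a fairly strong classifier
```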

Why is MCC considered better than accuracy for imbalanced datasets?

Accuracy can be misleading when class sizes are very different — for example, a classifier that always predicts 'negative' on a dataset with 95% negative samples would achieve 95% accuracy but be useless. MCC accounts for all four quadrants of the confusion matrix, making it a more reliable metric when class distributions are unbalanced.
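The 95%-negative example above can be checked numerically. An always-negative classifier on 100 samples (95 negative, 5 positive) produces TP = 0, FP = 0, TN = 95, FN = 5; the sketch below applies the zero-denominator convention covered later in this FAQ:

```python
import math

# Always-negative classifier on 100 samples (95 negative, 5 positive):
tp, tn, fp, fn = 0, 95, 0, 5

accuracy = (tp + tn) / (tp + tn + fp + fn)

numerator = tp * tn - fp * fn
denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
mcc_score = numerator / denominator if denominator else 0.0

print(accuracy)   # 0.95 -- looks impressive
print(mcc_score)  # 0.0 -- reveals zero predictive skill
```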

What is a good MCC value?

MCC ranges from -1 to +1. Values closer to +1 indicate stronger predictive performance, values near 0 suggest the classifier is no better than random guessing, and negative values indicate the predictions are systematically wrong. In practice, an MCC above 0.5 is generally considered good, and above 0.7 is considered strong for many real-world applications.

What is the difference between Sensitivity and Specificity?

Sensitivity (also called Recall) measures the proportion of actual positives correctly identified: TP / (TP + FN). Specificity measures the proportion of actual negatives correctly identified: TN / (TN + FP). A good classifier aims to maximize both, though there is often a trade-off between the two depending on the application.
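Both definitions are one-liners in code. A minimal sketch with illustrative counts (100 actual positives, 100 actual negatives):

```python
def sensitivity(tp, fn):
    """Recall: proportion of actual positives correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of actual negatives correctly identified."""
    return tn / (tn + fp)

# Example: 80 TP, 20 FN, 90 TN, 10 FP
print(sensitivity(80, 20))  # 0.8
print(specificity(90, 10))  # 0.9
```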

What does Precision mean in binary classification?

Precision (also called Positive Predictive Value) measures how many of the predicted positives are actually positive: TP / (TP + FP). High precision means the model generates few false alarms. It is especially important in contexts where false positives are costly, such as spam detection or medical diagnostics.
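As a sketch, with the caveat that returning 0 when no positives were predicted is an implementation choice here, not part of the standard definition:

```python
def precision(tp, fp):
    """Positive Predictive Value: proportion of predicted positives
    that are actually positive."""
    # Guard against an empty predicted-positive set (a choice, not a standard).
    return tp / (tp + fp) if (tp + fp) else 0.0

print(precision(45, 5))  # 0.9 -- few false alarms
```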

Can MCC be used for multiclass classification?

The standard MCC formula is designed for binary (two-class) classification. However, extensions of the metric exist for multiclass problems, sometimes referred to as the multiclass MCC or the R_K statistic. This calculator covers the binary classification case using a standard 2×2 confusion matrix.

What happens if the denominator in the MCC formula is zero?

If any of the sums in the denominator — (TP+FP), (TP+FN), (TN+FP), or (TN+FN) — equals zero, the denominator becomes zero. By convention, MCC is set to 0 in this case, representing no predictive information. This typically happens when one of the confusion matrix quadrants is entirely empty.
