When we work with classification algorithms in machine learning like Logistic Regression, K-Nearest Neighbors, Support Vector Classifiers, etc., we don't use evaluation metrics like Mean Absolute Error (MAE), Mean Squared Error (MSE) or Root Mean Squared Error (RMSE).
Instead, we generate a confusion matrix and, based on the confusion matrix, a classification report.
In this blog, we aim to understand what a confusion matrix is, how to calculate Accuracy, Precision, Recall and F1-Score from it, and how to choose the relevant metric based on the characteristics of the data.
To understand the confusion matrix and classification metrics, let's use the Breast Cancer Wisconsin Dataset.
This dataset consists of 569 rows, and each row provides information on various features of a tumor along with its diagnosis: whether it is malignant (cancerous) or benign (non-cancerous).
Now let's build a classification model for this data to classify the tumors based on their features.
We will apply Logistic Regression to train a model on this dataset.
Code:
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report
import seaborn as sns
import matplotlib.pyplot as plt
# Load the dataset
column_names = [
"id", "diagnosis", "radius_mean", "texture_mean", "perimeter_mean", "area_mean", "smoothness_mean",
"compactness_mean", "concavity_mean", "concave_points_mean", "symmetry_mean", "fractal_dimension_mean",
"radius_se", "texture_se", "perimeter_se", "area_se", "smoothness_se", "compactness_se", "concavity_se",
"concave_points_se", "symmetry_se", "fractal_dimension_se", "radius_worst", "texture_worst",
"perimeter_worst", "area_worst", "smoothness_worst", "compactness_worst", "concavity_worst",
"concave_points_worst", "symmetry_worst", "fractal_dimension_worst"
]
df = pd.read_csv("C:/wdbc.data", header=None, names=column_names)
# Drop ID column
df = df.drop(columns=["id"])
# Encode target: M=1 (malignant), B=0 (benign)
df["diagnosis"] = df["diagnosis"].map({"M": 1, "B": 0})
# Split features and target
X = df.drop(columns=["diagnosis"])
y = df["diagnosis"]
# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)
# Scale the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Train logistic regression
model = LogisticRegression(max_iter=10000)
model.fit(X_train, y_train)
# Predict
y_pred = model.predict(X_test)
# Confusion Matrix and Classification Report
conf_matrix = confusion_matrix(y_test, y_pred, labels=[1, 0]) # 1 = Malignant, 0 = Benign
report = classification_report(y_test, y_pred, labels=[1, 0], target_names=["Malignant", "Benign"])
# Display results
print("Confusion Matrix:\n", conf_matrix)
print("\nClassification Report:\n", report)
# Plot Confusion Matrix
sns.heatmap(conf_matrix, annot=True, fmt="d", cmap="Purples", xticklabels=["Malignant", "Benign"], yticklabels=["Malignant", "Benign"])
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix")
plt.tight_layout()
plt.show()
Here, after applying logistic regression to the data, we generated a confusion matrix and a classification report to evaluate the model's performance.
First, let's understand the confusion matrix.
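Since we passed labels=[1, 0], the matrix is printed with the Malignant class first. Laid out with the actual classes as rows and the predicted classes as columns, the counts below look like this:

                   Predicted Malignant   Predicted Benign
Actual Malignant           60                    4
Actual Benign               1                  106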

From the above confusion matrix:
'60' represents the correctly predicted Malignant tumors, which we refer to as "True Positives".
'4' represents the tumors incorrectly predicted as Benign that are actually Malignant, which we refer to as "False Negatives".
'1' represents the tumors incorrectly predicted as Malignant that are actually Benign, which we refer to as "False Positives".
'106' represents the correctly predicted Benign tumors, which we refer to as "True Negatives".
Now let's see what we can do with these values.
For that, we consider the classification report.
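Rebuilt from the numbers discussed below, the report scikit-learn prints looks roughly like this (the exact decimals may differ slightly):

              precision    recall  f1-score   support

   Malignant       0.98      0.94      0.96        64
      Benign       0.96      0.99      0.98       107

    accuracy                           0.97       171
   macro avg       0.97      0.97      0.97       171
weighted avg       0.97      0.97      0.97       171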

From the above classification report, we can say that:
For Malignant:
– Precision is 0.98, which means that when the model predicts a tumor as Malignant, it is correct 98% of the time.
– Recall is 0.94, which means the model correctly identified 94% of all Malignant tumors.
– F1-score is 0.96, which balances both precision and recall.
For Benign:
– Precision is 0.96, which means that when the model predicts a tumor as Benign, it is correct 96% of the time.
– Recall is 0.99, which means the model correctly identified 99% of all Benign tumors.
– F1-score is 0.98.
From the report we can also see that the accuracy of the model is 97%.
We also have the Macro Average and the Weighted Average; let's see how these are calculated.
Macro Average
The Macro Average computes the average of each metric (precision, recall and f1-score) across both classes, giving equal weight to each class regardless of how many samples each class contains.
We use the macro average when we want to know the performance of the model across all classes while ignoring class imbalance.
For this data:
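Using the rounded per-class values from the report above (the printed values may differ slightly in the last decimal):
Macro precision = (0.98 + 0.96) / 2 = 0.97
Macro recall = (0.94 + 0.99) / 2 ≈ 0.97
Macro f1-score = (0.96 + 0.98) / 2 = 0.97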

Weighted Average
The Weighted Average also computes the average of each metric, but gives more weight to the class with more samples.
In the above code, we used test_size = 0.3,
which means we set aside 30% of the data for testing, i.e., 171 of the 569 samples form the test set.
The confusion matrix and classification report are based on this test set.
Out of the 171 samples in the test set, we have 64 Malignant tumors and 107 Benign tumors.
Now let's see how the weighted average is calculated for each metric.
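Using those class supports (64 Malignant, 107 Benign, 171 total) and the rounded per-class values:
Weighted precision = (64 × 0.98 + 107 × 0.96) / 171 ≈ 0.97
Weighted recall = (64 × 0.94 + 107 × 0.99) / 171 ≈ 0.97
Weighted f1-score = (64 × 0.96 + 107 × 0.98) / 171 ≈ 0.97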

The weighted average gives us a more realistic performance measure when we have a class-imbalanced dataset.
We now have an idea of what each term in the classification report means and how to calculate the macro and weighted averages.
Now let's see how the confusion matrix is used to generate the classification report.
The classification report contains different metrics like accuracy, precision, etc., and these metrics are calculated from the values in the confusion matrix.
From the confusion matrix we have:
True Positives (TP) = 60
False Negatives (FN) = 4
False Positives (FP) = 1
True Negatives (TN) = 106
Now let's calculate the classification metrics using these values.
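Plugging them into the standard formulas:
Accuracy = (TP + TN) / (TP + TN + FP + FN) = (60 + 106) / 171 ≈ 0.97
Precision (Malignant) = TP / (TP + FP) = 60 / (60 + 1) ≈ 0.98
Recall (Malignant) = TP / (TP + FN) = 60 / (60 + 4) ≈ 0.94
F1-score (Malignant) = 2 × (Precision × Recall) / (Precision + Recall) ≈ 0.96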

This is how we calculate the classification metrics from a confusion matrix.
But why do we have four different classification metrics instead of a single metric like accuracy? It's because the different metrics reveal different strengths and weaknesses of the classifier depending on the context of the data.
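If you want to cross-check these values, scikit-learn also exposes the individual metric functions. A minimal sketch, assuming y_test and y_pred from the earlier code are still in scope:
Code:
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# Overall accuracy on the test set
print("Accuracy:", accuracy_score(y_test, y_pred))
# Precision, recall and f1-score for the Malignant class (label 1)
print("Precision:", precision_score(y_test, y_pred, pos_label=1))
print("Recall:", recall_score(y_test, y_pred, pos_label=1))
print("F1-score:", f1_score(y_test, y_pred, pos_label=1))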
Now let’s come again to the Wisconsin Breast Most cancers Dataset which we used right here.
After we utilized a logistic regression mannequin to this information, we received an accuracy of 97% which is excessive, which can make us assume that the mannequin is environment friendly.
However let’s contemplate one other metric known as ‘recall’ which is 0.94 for this mannequin, which implies out of all of the malignant tumors now we have within the take a look at set the mannequin was capable of determine 94% of them appropriately.
Right here the mannequin missed 6% of malignant instances.
In real-world eventualities, primarily healthcare purposes like most cancers detection, if we miss a constructive case, it would delay the prognosis and remedy.
By this we will perceive that even when now we have an accuracy of 97%, we have to look deeper primarily based on context of knowledge by contemplating completely different metrics.
So, what we will do now, ought to we intention for a recall worth of 1.0 which implies all of the malignant tumors are recognized appropriately, but when we push recall to 1.0 then the precision drops as a result of the mannequin might classify extra benign tumors as malignant.
When the mannequin classifies extra benign tumors as malignant, there can be pointless anxiousness, and it might require further assessments or therapies.
Right here we must always intention to maximise ‘recall’ by retaining the ‘precision’ moderately excessive.
We will do that by altering the thresholds set by classifiers to categorise the samples.
Many of the classifiers set the edge to 0.5, and if we alter it 0.3, we’re saying that even whether it is 30% assured, classify it as malignant.
Now let’s use a customized threshold of 0.3.
Code:
# Train logistic regression
model = LogisticRegression(max_iter=10000)
model.fit(X_train, y_train)
# Predict probabilities for the positive (malignant) class
y_probs = model.predict_proba(X_test)[:, 1]
# Apply custom threshold
threshold = 0.3
y_pred_custom = (y_probs >= threshold).astype(int)
# Classification Report
report = classification_report(y_test, y_pred_custom, target_names=["Benign", "Malignant"])
# Confusion Matrix
conf_matrix = confusion_matrix(y_test, y_pred_custom, labels=[1, 0])
# Plot Confusion Matrix
plt.figure(figsize=(6, 4))
sns.heatmap(
conf_matrix,
annot=True,
fmt="d",
cmap="Purples",
xticklabels=["Malignant", "Benign"],
yticklabels=["Malignant", "Benign"]
)
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix (Threshold = 0.3)")
plt.tight_layout()
plt.show()
Here we applied a custom threshold of 0.3 and generated a confusion matrix and a classification report.

Classification Report:

Here, the accuracy increased to 98%, the recall for Malignant increased to 97%, and the precision remained the same.
We said earlier that precision might drop if we try to maximize recall, but here it stays the same; whether it does depends on the data (balanced or not), the preprocessing steps, and how the threshold is tuned.
For medical datasets like this one, maximizing recall is usually preferred over accuracy or precision.
For datasets like spam detection or fraud detection, we care more about precision; just as in the method above, we try to improve precision by tuning the threshold accordingly while balancing the trade-off between precision and recall.
We use the f1-score when the data is imbalanced and we need both precision and recall, i.e., when neither false positives nor false negatives can be ignored.
Dataset Source
Wisconsin Breast Cancer Dataset
Wolberg, W., Mangasarian, O., Street, N., & Street, W. (1993). Breast Cancer Wisconsin (Diagnostic) [Dataset]. UCI Machine Learning Repository. https://doi.org/10.24432/C5DW2B.
This dataset is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license and is free to use for commercial or educational purposes as long as proper credit is given to the original source.
Here we discussed what a confusion matrix is and how it is used to calculate the different classification metrics like accuracy, precision, recall and f1-score.
We also explored when to prioritize which classification metric, using the Wisconsin Breast Cancer Dataset as an example, where we preferred maximizing recall.
I hope you found this blog helpful in understanding the confusion matrix and classification metrics more clearly.
Thanks for reading.