
Recall in Machine Learning

To judge whether a machine learning model is genuinely reliable, accuracy alone is not enough; you also need tools like the confusion matrix, recall, and precision. These metrics guard against misleading results, especially on imbalanced data, and give a clearer picture of how trustworthy the model's predictions are, which matters whenever decisions are made based on its output.

The confusion matrix is particularly useful for examining the trade-offs between different kinds of errors. Ideally, you would give your model inputs and get back error-free predictions, but in practice mistakes happen, and the confusion matrix summarizes them for both binary and multi-class classification. In binary classification, the model chooses between two options (yes or no, true or false, right or left). An erroneous prediction is either a false positive or a false negative: if the model predicts 'yes' while the real outcome is 'no', that is a false positive; the opposite case is a false negative.
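
As a minimal sketch of how these counts are obtained in practice, the snippet below builds a confusion matrix for a handful of hypothetical binary predictions using scikit-learn; the labels and predictions are invented for illustration.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth labels and model predictions (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, ravel() unpacks the 2x2 matrix as TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}  FP={fp}  FN={fn}  TN={tn}")
```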

Using Recall in Binary Classification

In binary classification, where only two classes are considered and the problem at hand is imbalanced, recall is calculated using this formula:

Recall = Number of True Positives / (Number of True Positives + Number of False Negatives)

Recall ranges from 0.0 (no recall) to 1.0 (perfect recall). Let's illustrate this with a practical example: a dataset with a 1:1000 minority-to-majority ratio, containing 1,000,000 majority class examples and therefore 1,000 minority class examples.

Within this dataset, suppose the model correctly identifies 950 of the minority (positive) examples and misses the remaining 50, producing 50 false negatives. The recall for this model is therefore:

Recall = 950 / (950 + 50) → Recall = 950 / 1000 → Recall = 0.95

This model achieves a nearly perfect recall score.
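
As a small sketch of the arithmetic, the function below computes recall from raw counts and reproduces the 0.95 figure from this hypothetical example.

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Recall = TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# Counts from the worked example: 950 minority examples caught, 50 missed
print(recall(true_positives=950, false_negatives=50))  # 0.95
```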

Using Recall in Multi-Class Classification

Recall is not confined to binary classification; it also applies to multi-class problems. In that case, recall is calculated as:

Recall = Sum of True Positives across all classes / (Sum of True Positives + Sum of False Negatives across all classes)

Let's use a dataset similar to the previous example, this time with 1,000,000 majority class examples and two positive (minority) classes. Suppose the model correctly predicts 850 examples of class 1 (missing 150) and 900 examples of class 2 (missing 100).

The recall calculation then is:

Recall = (850 + 900) / ((850 + 900) + (150 + 100)) → Recall = 1750 / (1750 + 250) → Recall = 1750 / 2000 → Recall = 0.875
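
The same arithmetic can be written as a short sketch that sums true positives and false negatives over both positive classes (this is the micro-averaged recall); the per-class counts come straight from the example above.

```python
# Per-class counts from the worked example: class label -> (true positives, false negatives)
per_class = {
    "class_1": (850, 150),
    "class_2": (900, 100),
}

total_tp = sum(tp for tp, _ in per_class.values())
total_fn = sum(fn for _, fn in per_class.values())

recall = total_tp / (total_tp + total_fn)
print(recall)  # 0.875
```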

In general, pushing recall higher tends to lower precision. If your objective is to reduce false negatives in an imbalanced classification model, recall is the metric to optimize, but you must keep the trade-off with precision in mind.

We also need to discuss precision, which measures how many of the examples the model labeled as positive are actually positive (true positives, TP). If there are no false positives (FP), precision is 100%; the more false positives there are, the lower the precision score.

Precision = True Positives / (True Positives + False Positives)
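
As a hedged sketch, the snippet below computes precision and recall from a pair of label vectors using scikit-learn's precision_score and recall_score; the labels are made up for illustration.

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical labels (1 = positive, 0 = negative)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]

print(precision_score(y_true, y_pred))  # TP / (TP + FP) -> 0.75
print(recall_score(y_true, y_pred))     # TP / (TP + FN) -> 0.75
```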

Recall vs. Precision

In imbalanced classification problems, recall and precision are both more appropriate metrics than accuracy alone. However, you may need to emphasize one over the other depending on your needs: you generally cannot maximize both at the same time, since improving one comes at the expense of the other. When a single balanced metric is needed, the F1 score combines the two.

F1 = 2 * (precision * recall) / (precision + recall)
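
As a final sketch, the F1 score can be computed directly from precision and recall; the two input values below are invented to show how the harmonic mean pulls the score toward the weaker metric.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: 2 * (P * R) / (P + R)."""
    return 2 * (precision * recall) / (precision + recall)

# High recall but modest precision yields an F1 between the two, closer to the lower value
print(f1_score(precision=0.60, recall=0.95))  # ~0.735
```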
