
Harnessing Neural Networks for Handling Out-of-Distribution Data

Optimal performance in many AI applications hinges on detecting, whether statistically or adversarially, test samples that differ substantially from the training data distribution. Deep neural networks (DNNs) deliver accurate results across a range of tasks, including speech recognition, object detection, and image classification. Nonetheless, gauging prediction uncertainty remains a difficult problem. Well-calibrated predictive confidence is extremely valuable because it can be applied across a broad spectrum of AI tasks.

Risks and Challenges with Neural Networks

While neural networks with a softmax classifier tend to produce highly confident predictions, deploying them carries real risks, especially in low-error-tolerance domains such as robotics or medicine, where mistakes can have disastrous outcomes. An effective AI system should minimize these risks by recognizing OOD instances, flagging inputs that lie beyond what it has learned, and deferring to human intervention.

Neural network models can rely heavily on spurious cues and annotation artifacts specific to their training data, and OOD samples are unlikely to carry the same spurious patterns as in-distribution samples. This reliance limits a model's ability to generalize, since no training set can capture every nuance of the true distribution.

Understanding Out-of-Distribution (OOD)

The concept of “distribution” carries somewhat different connotations for language and vision tasks. For instance, when classifying photographs of various cat breeds, images of cats are in-distribution data, whereas photographs of dogs, humans, or other unrelated objects are OOD. Notably, the data distribution in real-world tasks often shifts over time, which makes tracking a growing distribution expensive. Detecting OOD inputs is critical to keep AI systems from making erroneous predictions.

Pioneering OOD Detection Techniques

Ensemble Learning

In ensemble learning, multiple models each produce a prediction for every data point. These individual predictions are then combined to improve overall performance. Popular ways of merging decisions include simple averaging, weighted averaging, and maximum voting, which bases the final prediction on the majority consensus of the models' predictions.
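
Below is a minimal NumPy sketch of the simple-averaging variant. The per-model probability arrays (and the threshold of 0.5) are illustrative placeholders: each ensemble member contributes a softmax output, and the averaged maximum probability doubles as a confidence score that can flag OOD inputs.

```python
import numpy as np

def ensemble_confidence(prob_list):
    # prob_list: one (n_samples, n_classes) probability array per ensemble member,
    # e.g. softmax outputs of models trained with different random seeds.
    mean_probs = np.mean(np.stack(prob_list, axis=0), axis=0)  # simple averaging
    predictions = mean_probs.argmax(axis=1)                    # class with most probability mass
    confidence = mean_probs.max(axis=1)                        # ensemble confidence score
    return predictions, confidence

# Hypothetical usage with three models scoring the same batch:
# preds, conf = ensemble_confidence([model_a_probs, model_b_probs, model_c_probs])
# flagged_ood = conf < 0.5   # threshold chosen on a validation set
```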

Binary Classification Model

This method involves evaluating the trained model on a held-out dataset. Correctly answered examples are labeled positive and incorrectly answered ones negative. A binary classification model can then be trained to predict whether incoming samples fall into the positive or negative category. While this technique is best suited to success and error prediction, it can be adapted to out-of-distribution detection by including OOD instances when training the calibrator.
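
A minimal sketch of such a calibrator, assuming scikit-learn and using logistic regression as the binary classifier; the feature set and function names are illustrative, not a prescribed recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_calibrator(held_out_features, model_probs, labels):
    # Positive = the base model answered the held-out example correctly,
    # negative = it answered incorrectly (or the example is a labelled OOD sample).
    preds = model_probs.argmax(axis=1)
    correct = (preds == labels).astype(int)
    calibrator = LogisticRegression(max_iter=1000)
    calibrator.fit(held_out_features, correct)   # features might be max softmax, entropy, etc.
    return calibrator

# At inference time, calibrator.predict_proba(new_features)[:, 1] serves as a
# confidence score for incoming samples.
```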

MaxProb and Temperature Scaling

The outputs of a neural network classifier, referred to as logits, are passed through a softmax function to obtain class probabilities. The highest softmax probability then serves as the prediction confidence (MaxProb).
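
A minimal NumPy sketch of MaxProb: compute the softmax from the raw logits and keep the largest probability per sample as the confidence score.

```python
import numpy as np

def maxprob_confidence(logits):
    # Softmax over the logits, then keep the largest class probability per sample.
    z = logits - logits.max(axis=1, keepdims=True)            # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)
```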

Temperature scaling is a basic yet effective method for detecting out-of-distribution samples. It rescales the softmax confidence through Platt scaling with a single scalar parameter T > 0. Importantly, temperature scaling does not affect the model's accuracy, because the argmax of the softmax is not altered by the parameter T.
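
A sketch of temperature-scaled confidence, again assuming NumPy; the default T = 2.0 is an arbitrary placeholder, and in practice T is tuned on a held-out validation set.

```python
import numpy as np

def temperature_scaled_confidence(logits, T=2.0):
    # Dividing the logits by T > 0 leaves the argmax (and thus accuracy) unchanged,
    # but softens the probabilities, giving less overconfident scores.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)                      # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# T = 1 recovers plain MaxProb; T > 1 flattens the probability distribution.
```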
