November 9, 2021 · 5 min read
Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #3 📏 Measurement

Machine Learning systems are particularly sensitive to measurement bias. Calibrate your AI / ML models to avoid that risk.

In this post, we focus on one of the most important biases: measurement 📏

Data is the result of measurements made either by a human or a machine, and noise is inherent to every measurement. Usually, this random noise can be removed by aggregating many measurement points.
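
As a minimal sketch (the true value, noise scale, and sample size below are invented for illustration), zero-mean random noise averages out when measurement points are aggregated:

```python
import numpy as np

# Hypothetical example: a quantity with true value 10.0 is measured
# many times with purely random (zero-mean) noise.
rng = np.random.default_rng(0)
true_value = 10.0
measurements = true_value + rng.normal(loc=0.0, scale=0.5, size=1000)

# Averaging the measurement points cancels the random noise:
# the mean converges to the true value as the sample grows.
print(f"single measurement: {measurements[0]:.2f}")
print(f"mean of 1000 measurements: {measurements.mean():.2f}")  # ~10.00
```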

Unfortunately, this technique does not really work in real ML projects, because the noise is often not random with respect to the event we want to predict. Put differently, measurement bias arises when the measurement noise is correlated with the target variable.
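
By contrast, here is a minimal sketch (with an invented setup) of noise that depends on the target: it does not average out within each class, so no amount of aggregation removes the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: the true feature x drives a binary target y.
x_true = rng.normal(size=n)
y = (x_true > 0).astype(int)  # the event we want to predict

# Measurement noise that depends on the target itself:
# positive cases are systematically under-measured.
noise = rng.normal(scale=0.3, size=n) - 0.8 * y
x_measured = x_true + noise

# The noise no longer averages to zero within each class, so
# aggregation cannot remove it: it shifts the decision boundary.
print("mean noise, y=0:", noise[y == 0].mean())  # ~0.0
print("mean noise, y=1:", noise[y == 1].mean())  # ~-0.8 (systematic bias)
```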

Here are some examples:

❌ In image recognition, the training data may be collected by a different type of camera than the one used for production.

❌ In #NLP, data labelling may be influenced by workers’ regional context, producing inconsistent annotations and therefore measurement bias.

Fortunately, physics, and especially metrology, gives us a method to detect measurement bias: calibration. It is the act of comparing measurement values against standards of known accuracy.
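
As a rough illustration of the idea (the reference values and readings below are made up), calibration amounts to measuring known standards and fitting a correction:

```python
import numpy as np

# Hypothetical calibration check: standards with known values are
# measured by the instrument (or data pipeline) under test.
reference = np.array([1.0, 2.0, 5.0, 10.0, 20.0])  # known accurate values
measured = np.array([1.1, 2.2, 5.4, 10.9, 21.6])   # what our process reports

# Fit a linear correction: measured ≈ gain * reference + offset.
gain, offset = np.polyfit(reference, measured, deg=1)
print(f"gain={gain:.3f}, offset={offset:.3f}")

# Apply the inverse correction to calibrate future measurements.
def calibrate(raw):
    return (raw - offset) / gain

print(calibrate(measured))  # close to the reference values
```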

There are several ways to apply calibration in Machine Learning:

✅ Always compare the outputs of different data collection processes. To do that, use monitoring tools to assess changes in data distributions (see the sketch after this list).

✅ Provide best practices and clear guidelines for your data collection process.
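
As one possible way to compare collection processes (a generic statistical check, not a specific Giskard feature; the feature values and the shift below are invented), a two-sample Kolmogorov-Smirnov test can flag drift between two data sources:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical example: the same feature collected by two processes
# (e.g. training camera vs. production camera). The second process
# introduces a small systematic shift.
feature_train = rng.normal(loc=0.0, scale=1.0, size=5000)
feature_prod = rng.normal(loc=0.15, scale=1.0, size=5000)

# A two-sample Kolmogorov-Smirnov test compares the two distributions;
# a small p-value is a calibration warning worth investigating.
stat, p_value = ks_2samp(feature_train, feature_prod)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
```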

At Giskard, we help AI professionals detect measurement biases by enriching the modeling process with new reference points.
