January 7, 2022
5 min read
Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #6 🐝 Emergent bias

Emergent biases result from the use of AI / ML across unanticipated contexts. They introduce risk when the context shifts.

In this post, we focus on emergent bias, one of the most commonplace biases in AI.

According to Friedman & Nissenbaum (1996), emergent biases result from the use of AI across unanticipated contexts. They do not exist in the technology per se; instead, they emerge from the interaction between the technology and its users.

The more interactive and smarter the technology becomes, the more common emergent biases are. This is why AI is particularly prone to emergent biases.

Practically, it means that the training data do not align with the contexts the algorithm encounters in the real world. Here are some shifting contexts that may lead to emergent biases:

❌ Cultural shift

In a merchandise sales application, one cause of emergent bias is seasonality: shopping behavior changes with the season. For example, sales may be higher during the winter holiday season than in summer. In this case, an AI model predicting sales will produce wrong results if it is trained in winter and applied in summer.
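As a toy illustration of this seasonal shift (the daily sales figures below are entirely made up), a naive model fitted on winter data can look accurate in winter and fail badly when the context changes to summer:

```python
import statistics

# Hypothetical daily sales: winter (training context) vs. summer (deployment context).
winter_sales = [120, 135, 128, 140, 150, 145, 132]
summer_sales = [60, 72, 65, 70, 58, 66, 74]

# A naive "model": always predict the mean of the training season.
model_prediction = statistics.mean(winter_sales)

def mae(actual, predicted):
    """Mean absolute error of a constant prediction."""
    return statistics.mean(abs(a - predicted) for a in actual)

winter_error = mae(winter_sales, model_prediction)
summer_error = mae(summer_sales, model_prediction)

print(f"winter MAE: {winter_error:.1f}")  # small: same context as training
print(f"summer MAE: {summer_error:.1f}")  # large: the context shifted
```

The model itself contains no bug; the bias emerges only when the deployment context no longer matches the training context.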

❌ Market shift

AI models are often used in finance and trading to take positions in markets automatically. But these models break when markets change radically, because they assume that past correlations are indicative of the future. An illustrative example is Zillow’s house price prediction algorithm: “Zillow algos failed to take into account the recent slowdown in home price appreciation — even as price gains cooled, Zillow kept buying more homes and paying more for them”, as the FT reported.

❌ Moral shift

A well-known example is Tay, the artificial intelligence Twitter chatbot developed by Microsoft. Within less than a day, Tay became a racist and sexist neo-Nazi bot, forcing Microsoft to take it down. Here the biases do not come from bad programming; they emerge from the interaction with people. When human users troll and act like white supremacists, Tay quickly adapts to mimic them.

Here are some remedies for emergent biases:

✅ Reactive solutions

These solutions consist of retraining the model in reaction to a triggering mechanism, such as a decrease in prediction accuracy or a change in the statistics of the data-generating process. A shortcoming of reactive approaches is that performance may decay until the change is detected.
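A minimal sketch of such a triggering mechanism, assuming a stream of per-prediction outcomes and a hypothetical `retrain()` callback (the threshold, window size, and simulated outcomes are illustrative, not a recommendation):

```python
from collections import deque

ACCURACY_THRESHOLD = 0.8  # retrain when windowed accuracy drops below this
WINDOW = 5                # number of recent predictions to monitor

def reactive_monitor(stream, retrain):
    """Watch a stream of booleans (was the prediction correct?) and call
    retrain() whenever accuracy over the last WINDOW predictions decays."""
    recent = deque(maxlen=WINDOW)
    retrains = 0
    for correct in stream:
        recent.append(correct)
        if len(recent) == WINDOW:
            accuracy = sum(recent) / WINDOW
            if accuracy < ACCURACY_THRESHOLD:
                retrain()
                retrains += 1
                recent.clear()  # start monitoring the refreshed model
    return retrains

# Simulated outcomes: the model works, then the context drifts, then recovers.
outcomes = [True] * 10 + [False] * 5 + [True] * 10
n = reactive_monitor(iter(outcomes), retrain=lambda: None)
print(f"retrained {n} time(s)")
```

Note the shortcoming mentioned above is visible in the simulation: several wrong predictions accumulate before the window statistic crosses the threshold and triggers retraining.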

✅ Tracking solutions

These solutions consist of continually updating the model by retraining it without a triggering mechanism. Incremental learning, as in Elwell & Polikar (2011), may be a good solution: maintain an ensemble of classifiers in which a new classifier, trained on the most recent batch of examples, replaces the oldest classifier in the ensemble.
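A much-simplified sketch of this batch-incremental idea (not the full Learn++.NSE algorithm, which also weights classifiers by their recent accuracy; here the toy base learner just memorizes the majority label of its batch, standing in for any real learner):

```python
from collections import Counter, deque

class BatchClassifier:
    """Toy base learner: predicts the majority label of its training batch."""
    def fit(self, labels):
        self.majority = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self):
        return self.majority

class IncrementalEnsemble:
    """Each new batch trains a fresh classifier that replaces the oldest
    member, so the ensemble tracks the current data-generating process."""
    def __init__(self, max_size=3):
        self.members = deque(maxlen=max_size)  # oldest is dropped automatically

    def update(self, batch_labels):
        self.members.append(BatchClassifier().fit(batch_labels))

    def predict(self):
        # Majority vote across the ensemble members.
        votes = Counter(m.predict() for m in self.members)
        return votes.most_common(1)[0][0]

ensemble = IncrementalEnsemble(max_size=3)
# The dominant label drifts from "a" to "b" over successive batches.
for batch in (["a", "a", "b"], ["a", "a", "a"], ["b", "b", "a"], ["b", "b", "b"]):
    ensemble.update(batch)
print(ensemble.predict())
```

Because the classifier trained on the first batch has been evicted, the ensemble's vote reflects the recent concept ("b") rather than the old one.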

At Giskard, we help AI practitioners detect shifting contexts in deployed models before they lead to emergent biases.

Bibliography

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3).

Elwell, R., & Polikar, R. (2011). Incremental learning of concept drift in nonstationary environments. IEEE Transactions on Neural Networks, 22(10).
