December 14, 2021

Where do biases in ML come from? #5 🗼 Structural bias

Social, political, economic, and post-colonial asymmetries introduce risk to AI / ML development

Jean-Marie John-Mathews, Ph.D.

In this post, we focus on structural biases 🕵️‍♂️

Structural biases are among the hardest issues in AI. They come from systemic and historical power relations between humans. As many sociologists have pointed out, these structural asymmetries can be encapsulated by institutions such as states, organizations, and economic markets. AI development, as a human construction, is no exception: it is deeply rooted in a social world that is far from perfect. AI systems can not only encapsulate these structural asymmetries but also maintain them.

Here are some examples of structural power relations interacting with AI development:

❌ Social asymmetries

According to Acemoglu (2021), automation technology has been the primary driver of U.S. income inequality over the past 40 years. It tends to create a two-speed society in which privileged social classes benefit the most from technology spillovers.

❌ Political asymmetries

AI development benefits from huge public subsidies from states all over the world. As an illustration, Vladimir Putin declared that the nation that leads in AI “will be the ruler of the world”. Chinese AI development reflects and maintains China’s political stance on citizens’ privacy. As the social scientist Langdon Winner argued, technical artifacts have political qualities.

❌ Economic asymmetries

AI is mostly developed by private companies in competitive markets, and the economic power of big tech companies plays an important role in some AI incidents. The recent Facebook scandal is a telling example of a harmful AI system deliberately optimized to maximize profits. These economic power relations will also weigh heavily on the future development of AI.

❌ Post-colonial asymmetries

Some researchers have shown that AI development can reflect and maintain asymmetric relationships between countries and peoples. For example, the sociologist Antonio Casilli described how the data-labeling process in AI is often outsourced to low-income countries. This tedious task is often badly paid and performed by workers living in former colonies, thus maintaining structural power relations between countries.

At Giskard, we do not want to fall into the trap of technological solutionism: structural biases cannot be overcome with technical methods alone. We are conscious that AI development is deeply rooted in a social world, and we believe education and democratic action are key to building technology that drives a better world, for and by humans.
