🔍 What does research tell us about the future of AI Quality?

Testing AI systems is an active research area, and AI is often described as non-testable. Summing up the academic literature, here are 3 reasons why:

https://lnkd.in/dB77-BvV

1️⃣ AI follows a data-driven programming paradigm

According to Paleyes (2021), unlike regular software products, where changes happen only in the code, AI systems change along 3 axes: the code, the model, and the data. The model’s behavior evolves as it is frequently provided with new data.

https://lnkd.in/d9SHXWzW
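
To make the data axis concrete, here is a minimal sketch (toy synthetic data, scikit-learn assumed; not code from the post): the training code stays frozen, yet the model's behavior changes once a fresh batch of data arrives.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def train(X, y):
    # The "code" axis is held fixed: same algorithm, same hyperparameters.
    return LogisticRegression().fit(X, y)

# First data batch: positive examples cluster around (+1, +1).
X_pos = rng.normal(loc=[1.0, 1.0], size=(100, 2))
X_neg = rng.normal(loc=[-1.0, -1.0], size=(100, 2))
y = np.array([1] * 100 + [0] * 100)
model_v1 = train(np.vstack([X_pos, X_neg]), y)

# A later batch has drifted: positives now cluster around (+3, +3).
X_pos_drift = rng.normal(loc=[3.0, 3.0], size=(100, 2))
model_v2 = train(np.vstack([X_pos_drift, X_neg]), y)

# Same input, same code, different data -> different behavior:
# the prediction for this point flips after the data drift.
point = np.array([[0.5, 0.5]])
print(model_v1.predict(point))  # expected: class 1
print(model_v2.predict(point))  # expected: class 0
```

A test suite for such a system therefore has to track the data and the trained model alongside the code, which is exactly why classical code-only regression testing falls short.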

2️⃣ AI is not easily breakable into small unit components

Some AI properties (e.g., accuracy) only emerge as a combination of different components such as the training data, the learning program, and the learning library. It is hard to break the AI system into smaller components that can be tested in isolation.

Zhang et al. (2021)
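
Here is a minimal sketch of that point (synthetic data, scikit-learn assumed; names are illustrative): a unit-style check on a single component can pass in isolation, yet the property we actually care about, accuracy, only exists for the assembled system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A unit-style check on one component passes in isolation:
# the scaler really does center the features.
scaler = StandardScaler().fit(X_train)
assert abs(scaler.transform(X_train).mean()) < 1e-6

# ...but accuracy is an emergent property of the whole assembly:
# training data + learning program + learning library, combined.
pipeline = make_pipeline(StandardScaler(), DecisionTreeClassifier(max_depth=3))
pipeline.fit(X_train, y_train)
print("end-to-end accuracy:", pipeline.score(X_test, y_test))
```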

3️⃣ AI errors are systemic and self-amplifying

AI is characterized by many feedback loops and interactions between components. The output of one model can be ingested into the training data of another. As a result, AI errors can be difficult to identify, measure, and correct.
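
To illustrate, here is a deliberately simplified, hypothetical feedback loop (synthetic scores, not an example from the post): items a system scores highly receive more exposure, the exposure feeds back into the scores, and tiny initial differences get amplified round after round.

```python
import numpy as np

rng = np.random.default_rng(0)
# Five near-identical items: any score differences start as pure noise.
scores = rng.normal(scale=0.01, size=5)
initial_gap = scores.max() - scores.min()

def exposure(s):
    # Higher-scored items get a larger share of exposure (softmax).
    e = np.exp(s - s.max())
    return e / e.sum()

for _ in range(30):
    # Exposure generates engagement, which is fed back into the scores:
    # the loop amplifies whatever small differences were there at the start.
    scores = scores + 0.5 * exposure(scores)

print(f"score gap: {initial_gap:.4f} -> {scores.max() - scores.min():.4f}")
```

The initial gap is pure noise, yet after a few rounds the loop has turned it into a large, systematic difference; no single component "made" the error, which is what makes such failures hard to identify, measure, and correct.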

At Giskard AI, we think testing AI systems is a solvable challenge. Want to know more?

✋ Contact us at hello@giskard.ai

Cheers,

Jean-Marie John-Mathews

Original post published on LinkedIn on October 8, 2021