🤔 How did the idea of Giskard AI emerge? 5/N

It is also about the need to reduce risks. ⛑

The last ten years have seen explosive growth of AI everywhere. We rely on AI for critical parts of our lives: managing our finances, social interactions, and health, even driving our cars.

But no technological innovation, even AI, comes without a dark side. 🌑

Two years ago, Partnership on AI, a team of independent researchers and citizens, started documenting incidents caused by faulty AI models.
https://incidentdatabase.ai/

This AI Incident Database now contains over 1200 reports. It is collaborative, searchable, and open-source. It encompasses multiple types of incidents: ethical, technical, environmental, etc. 🪲

You will not be surprised to learn that most reports concern AI models made by the GAFAM. They are the most advanced in deploying AI and the most exposed to the public eye.

If these companies, with their large teams of ML engineers, are still exposed to such risks, what about the rest of us?

In my data science consulting career, I have helped many clients deploy AI to production. I have to confess that I have never felt 100% (or even 90%) sure that incidents would never happen. 😓

What’s your opinion? Is it just me? Do you see real risks related to AI in your practice? Which types of risk are you most sensitive to?

Why not let me know your thoughts in the comments & reshare this post if you like it? ❤️

Cheers,

Alex

Original post published on LinkedIn on September 29, 2021