September 29, 2021
1 min read

How did the idea of Giskard emerge? #5 📉 Reducing risks

Technological innovation such as AI / ML comes with risks. Giskard aims to reduce them.

Alex Combessie

[Image: screenshot of the AI Incident Database]

The idea of Giskard is also about the need to reduce risks. ⛑

The last ten years have seen explosive growth of AI everywhere. We rely on AI for critical parts of our lives: managing our finances, social interactions, and health, and even driving our cars.

But no technological innovation, AI included, comes without a dark side. 🌑

Two years ago, Partnership on AI, a team of independent researchers and citizens, started documenting incidents caused by faulty AI models.

This AI Incident Database now contains over 1,200 reports. It is collaborative, searchable, and open-source, and it covers many types of incidents: ethical, technical, environmental, and more. 🪲

You will not be surprised to learn that most reports concern AI models made by the GAFAM companies (Google, Apple, Facebook, Amazon, Microsoft). They have the most advanced AI deployments and are the most exposed to the public eye.

If these companies, with their large teams of ML engineers, are still exposed to such risks, what about the rest of us?


