
All Knowledge



AI Safety at DEFCON 31: Red Teaming for Large Language Models (LLMs)

DEFCON, one of the world's premier hacker conventions, featured a unique focus at its AI Village this year: red teaming of Large Language Models (LLMs). Instead of conventional hacking, participants were challenged to use words to uncover AI vulnerabilities. The Giskard team was fortunate to attend, witnessing firsthand the event's emphasis on understanding and addressing potential AI risks.

Blanca Rivera Campos

FOSDEM 2023: Presentation on CI/CD for ML and How to test ML models?

In this talk, we explain why testing ML models is an important and difficult problem. Then we show, using concrete examples, how Giskard helps ML Engineers deploy their AI systems into production safely by (1) designing fairness & robustness tests and (2) integrating them in a CI/CD pipeline.

Alex Combessie
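The FOSDEM talk's idea of fairness tests in a CI/CD pipeline can be sketched as a plain pytest-style check. This is an illustrative example only, not Giskard's actual API: the metric, data, and threshold below are all hypothetical.

```python
# Illustrative sketch (not Giskard's API): a fairness test that a CI/CD
# pipeline could run on every model change, failing the build on regression.

def disparate_impact(predictions, groups, privileged):
    """Ratio of positive-outcome rates: unprivileged group vs privileged group."""
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    unprivileged = next(g for g in set(groups) if g != privileged)
    return positive_rate(unprivileged) / positive_rate(privileged)

def test_fairness():
    # Toy model outputs (1 = positive outcome) and protected-group labels.
    preds  = [1, 0, 1, 1, 1, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    ratio = disparate_impact(preds, groups, privileged="A")
    # "Four-fifths" rule of thumb: the ratio should stay above 0.8.
    assert ratio >= 0.8
```

Run under pytest in CI, such a test turns a fairness requirement into a gate that blocks deployment, which is the pattern the talk describes.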

Where do biases in ML come from? #7 📚 Presentation

We explain presentation bias, a negative effect present in almost all ML systems with a User Interface (UI).

Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #6 🐝 Emergent bias

Emergent biases result from the use of AI / ML across unanticipated contexts. They introduce risk when the context shifts.

Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #5 🗼 Structural bias

Social, political, economic, and post-colonial asymmetries introduce risk to AI / ML development

Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #4 📊 Selection

Selection bias happens when your data is not representative of the situation to analyze, introducing risk to AI / ML systems

Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #3 📏 Measurement

Machine Learning systems are particularly sensitive to measurement bias. Calibrate your AI / ML models to avoid that risk.

Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #2 ❌ Exclusion

What happens when your AI / ML model is missing important variables? The risks of endogenous and exogenous exclusion bias.

Jean-Marie John-Mathews, Ph.D.

Where do biases in ML come from? #1 👉 Introduction

Research Literature review: A Survey on Bias and Fairness in Machine Learning

Jean-Marie John-Mathews, Ph.D.

8 reasons why you need Quality Testing for AI

Understand why Quality Assurance for AI is the need of the hour. Gain competitive advantage from your technological investments in ML systems.

Alex Combessie

What does research tell us about the future of AI Quality? 💡

We look into the latest research to understand the future of AI / ML testing.

Jean-Marie John-Mathews, Ph.D.

How did the idea of Giskard emerge? #8 👁‍🗨 Monitoring

Monitoring is just a tool: necessary but not sufficient. You need people committed to AI maintenance, processes & tools in case things break down.

Alex Combessie

How did the idea of Giskard emerge? #7 👮‍♀️ Regulation

Biases in AI / ML algorithms are avoidable. Regulation will push companies to invest in mitigation strategies.

Alex Combessie

How did the idea of Giskard emerge? #6 👬 A Founders' story

Find out more about the Giskard founders' story.

Alex Combessie

How did the idea of Giskard emerge? #5 📉 Reducing risks

Technological innovation such as AI / ML comes with risks. Giskard aims to reduce them.

Alex Combessie

How did the idea of Giskard emerge? #4 ✅ Standards

Giskard supports quality standards for AI / ML models. Now is the time to adopt them!

Alex Combessie

How did the idea of Giskard emerge? #3 📰 AI in the media

AI used in recommender systems poses serious issues for the media industry and our society.

Alex Combessie

How did the idea of Giskard emerge? #2 🐑 User Interfaces

It is difficult to create interfaces for AI models. Even AIs made by tech giants have bugs. With Giskard AI, we want to make it easy to create interfaces for humans to inspect AI models. 🕵️ Do you think interfaces are valuable? If so, what kinds of interfaces do you like?

Alex Combessie

How did the idea of Giskard emerge? #1 🤓 The ML Test Score

The ML Test Score includes verification tests across 4 categories: Features and Data, Model Development, Infrastructure, and Monitoring.

Alex Combessie
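The ML Test Score rubric (Breck et al.) scores each of its four sections and, as described in the paper, takes the minimum across sections as the overall score, so one weak area caps the whole system's readiness. A minimal sketch of that scoring rule, with hypothetical point values:

```python
# Minimal sketch of the ML Test Score aggregation rule: the overall score
# is the minimum of the four section scores, so a single weak section
# limits the system's overall production-readiness score.

SECTIONS = [
    "Features and Data",
    "Model Development",
    "Infrastructure",
    "Monitoring",
]

def ml_test_score(section_points: dict) -> float:
    """Overall readiness score = minimum score across the four sections."""
    return min(section_points[s] for s in SECTIONS)

# Hypothetical point totals per section (1 point per manual test,
# more for automated ones, per the rubric).
points = {
    "Features and Data": 3,
    "Model Development": 5,
    "Infrastructure": 2,
    "Monitoring": 4,
}
print(ml_test_score(points))  # → 2
```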