
The testing framework for AI models

Eliminate risks of bias, performance issues & security holes in AI models. In under 8 lines of code.

From tabular models to LLMs
Listed by Gartner for AI Trust, Risk and Security
# Get started
pip install giskard[llm]

Trusted by leading AI teams

Why?

AI pipelines are broken

MLOps tools don’t cover the full range of AI risks: robustness, fairness, efficiency, security, etc.
AI/ML teams spend weeks manually creating test cases, writing reports, and enduring endless review meetings.
AI Testing practices are siloed and inconsistent across projects & teams.
Non-compliance with the EU AI Act can cost up to 3% of your global revenue.

Enter Giskard:
AI Testing at scale

Automatically detect performance, bias & security issues in AI models.
Stop wasting time on manual testing and writing custom evaluation reports.
Unify AI Testing practices: use standard methodologies for optimal model deployment.
Ensure compliance with the EU AI Act and avoid fines of up to 3% of your global revenue.
Giskard Library

Open-source & easy to integrate

In a few lines of code, identify vulnerabilities that may affect the performance, fairness & security of your model. 

Directly in your Python notebook or IDE.

import giskard
from langchain.chains import RetrievalQA  # the chain under test comes from LangChain

qa_chain = RetrievalQA.from_llm(...)

# Wrap the chain so Giskard knows how to call and describe it
model = giskard.Model(
    qa_chain,
    model_type="text_generation",
    name="My QA bot",
    description="An AI assistant that...",
    feature_names=["question"],
)

# Automatically scan for performance, bias & security issues
giskard.scan(model)
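
Once the scan has run, its findings can be saved and re-run later. A minimal sketch continuing from the snippet above, following the library's scan-to-test-suite workflow (the suite name is illustrative):

# Keep the scan report instead of discarding it
scan_results = giskard.scan(model)

# Turn the detected issues into a test suite you can re-run, e.g. in CI
test_suite = scan_results.generate_test_suite("My QA bot test suite")
test_suite.run()
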
Giskard Hub

Collaborative AI Quality, Security & Compliance

Enterprise platform to test, debug & explain your AI models collaboratively.

Who is it for?

Data scientists
ML Engineers
AI Governance officers
You work on business-critical AI applications.
You spend a lot of time evaluating AI models.
You want to work with the best Open-source tools.
You’re preparing your company for compliance with the EU AI Act and other AI regulations.
You have high standards of performance, security & safety in AI models.

"Giskard really speeds up input gatherings and collaboration between data scientists and business stakeholders!"

Head of Data
Emeric Trossat

"Giskard has become a strong partner in our purpose for ethical AI. It delivers the right tools for releasing fair and trustworthy models."

Head of Data Science
Arnault Gombert

"Giskard enables to integrate Altaroad business experts' knowledge into our ML models and test them."

Jean Milpied

"Giskard allows us to easily identify biases in our models and gives us actionable ways to deliver robust models to our customers."

Chief Science Officer
Maximilien Baudry

Join the community

Welcome to an inclusive community focused on AI Quality, Security & Compliance! Join us to share best practices, create new tests, and shape the future of AI standards together.

Discord

All those interested in AI Quality, Security & Compliance are welcome!

All resources

Knowledge articles, tutorials and latest news on AI Quality, Security & Compliance

See all

LLM Red Teaming: Detect safety & security breaches in your LLM apps

Introducing our LLM Red Teaming service, designed to enhance the safety and security of your LLM applications. Discover how our team of ML Researchers uses red teaming techniques to identify and address LLM vulnerabilities. Our new service focuses on mitigating risks like misinformation and data leaks by developing comprehensive threat models.

View post

Data Drift Monitoring with Giskard

Learn how to monitor and manage data drift in machine learning models to maintain accuracy and reliability. This article gives a concise overview of the types of data drift, detection techniques (one is sketched below), and strategies for maintaining model performance amidst changing data, offering data scientists practical insights into setting up, monitoring, and adjusting models, and emphasising the importance of ongoing model evaluation and adaptation.

View post
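
For a flavour of what drift detection looks like in code, here is a minimal, generic sketch using a two-sample Kolmogorov-Smirnov test from SciPy. It illustrates the statistical idea only; it is not Giskard's built-in drift testing, and the sample data and 0.05 threshold are assumptions:

import numpy as np
from scipy.stats import ks_2samp

# Hypothetical reference (training-time) and production samples for one numeric feature
reference = np.random.normal(loc=0.0, scale=1.0, size=5000)
production = np.random.normal(loc=0.3, scale=1.0, size=5000)  # slightly shifted

# A small p-value suggests the production distribution has drifted from the reference
statistic, p_value = ks_2samp(reference, production)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.3g})")
else:
    print("No significant drift detected")
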

EU AI ACT: 8 Takeaways from the Council's Final Approval

The Council of the EU has voted unanimously to approve the final version of the European AI Act, a significant step in the EU's effort to legislate the world's first AI law. The Act establishes a regulatory framework for the safe use and development of AI, categorizing AI systems according to their associated risk. In the coming months, the text will enter the last stage of the legislative process, where the European Parliament will hold a final vote on the AI Act.

View post

Ready. Set. Test!
Get started today

Get started