Testing & Monitoring framework for ML models

Eliminate the risks of bias, performance issues, and errors in your ML models, in four lines of code. From tabular models to LLMs.
Try it in Colab
Documentation
# Get started
pip install "giskard>=2.0.0b" -U
Listed by Gartner: AI Trust, Risk and Security

Detect hidden vulnerabilities in your ML model

Performance bias

Identify discrepancies in accuracy, precision, recall, or other evaluation metrics on specific data slices.

Unrobustness

Detect when your model is sensitive to small perturbations in the input data.

Overconfidence

Detect when your model makes incorrect predictions with high confidence.

Data leakage

Detect inflated performance metrics and inaccuracy caused by data unintentionally leaking into your model.

Unethical behavior

Identify changes in your model's behavior when sensitive input attributes (gender, ethnicity...) are switched.

Stochasticity

Detect inherent randomness in your model and avoid unexpected variations in your results.

Automatically scan your model to find hidden vulnerabilities

In a few lines of code, identify vulnerabilities that may affect the performance of your model, such as data leakage, non-robustness, ethical biases, and overconfidence. Directly in your notebook.

import giskard

model, df = giskard.demo.titanic()
dataset = giskard.Dataset(df, target="Survived")
model = giskard.Model(model, model_type="classification")
giskard.scan(model, dataset)
scan = giskard.scan(model, dataset)
test_suite = scan.generate_test_suite()
test_suite.run()

Design & run custom ML tests

If the scan finds issues with your model, you can automatically generate a test suite, add your own custom tests, and execute them with our native Python API.
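As a sketch of what a custom test can check, here is a plain-Python test body that passes or fails on overall accuracy. The function name and threshold are illustrative assumptions, not part of the Giskard API; in a real suite you would wrap logic like this so it runs against a giskard.Model and giskard.Dataset.

```python
# Illustrative custom test body (plain Python, not Giskard's own code).
# In a real suite this logic would be wrapped to run against a
# giskard.Model and giskard.Dataset; here the inputs are plain lists.

def accuracy_above_threshold(y_true, y_pred, min_accuracy=0.8):
    """Pass (return True) when overall accuracy meets the threshold."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true) >= min_accuracy
```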

Centralize tests, compare models & share results with your team

Upload the generated test suite to the Giskard server, create reusable test suites, and get ready-made dashboards you can share with the rest of your team. Compare different model versions over time.

pip install "giskard[server]>=2.0.0b" -U
giskard server start

Collaborative and shareable testing

Leverage the power of our open-source community by easily uploading test fixtures, including AI detectors for identifying issues like hate speech, toxicity, and more.

Access data transformation tools for tasks like rewriting and introducing typos, allowing you to simulate a wide range of real-world scenarios for comprehensive testing.

Benefit from the collective knowledge and resources of our community while enhancing your AI model testing and evaluation processes.
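To make the idea of a typo-style transformation concrete, here is a minimal sketch in plain Python. It is an illustrative stand-in, not Giskard's own implementation: it randomly drops characters from a text so you can observe how a model behaves on slightly corrupted input.

```python
import random

# Illustrative typo-style data transformation (not Giskard's own
# implementation): drop each character with probability `rate`,
# seeded so runs are repeatable.

def introduce_typos(text, rate=0.1, seed=0):
    """Return `text` with each character dropped with probability `rate`."""
    rng = random.Random(seed)
    return "".join(c for c in text if rng.random() >= rate)
```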


Explore our full ML test suite

Save time with our catalog of
reusable test components

Stop wasting time creating new testing components for every new ML use case. Use our ready-made tests, create new ones, and easily add them to your test suite.

Apply data slicing functions for identifying issues like hate speech, toxicity, and more. Access data transformation tools for tasks like rewriting and introducing typos, allowing you to simulate a wide range of real-world scenarios for comprehensive testing.
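As a sketch of what a data slicing function does, here is a plain-Python example: it selects the rows a test should focus on. The DataFrame shape, the "text" column, and the word-count criterion are illustrative assumptions, not Giskard's API.

```python
import pandas as pd

# Illustrative data slicing function (assumptions: a pandas DataFrame
# with a "text" column). It returns only the rows a test should run
# on, here short texts, which are often harder to classify.

def slice_short_texts(df, max_words=3):
    """Return only the rows whose "text" has at most `max_words` words."""
    mask = df["text"].str.split().str.len() <= max_words
    return df[mask]
```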

Trusted by Modern ML Teams


Giskard really speeds up input gathering and collaboration between data scientists and business stakeholders!

Emeric Trossat
Head of Data, Webedia

Giskard has become a strong partner in our pursuit of ethical AI. It delivers the right tools for releasing fair and trustworthy models.

Arnault Gombert
Head of Data Science, Citibeats

Giskard enables us to integrate Altaroad business experts' knowledge into our ML models and test them.

Jean Milpied
Data Science Manager, Altaroad

Giskard allows us to easily identify biases in our models and gives us actionable ways to deliver robust models to our customers.

Maximilien Baudry
Chief Science Officer, Unifai

Join the community

This is an inclusive place where anyone interested in ML Quality is welcome! Leverage best practices from the community, contribute new tests, build the future of AI safety standards.

Ready. Set. Test!
Get started today

Get started