Open-source solution for AI Alignment

Scan AI models to detect risks of bias, performance issues, and errors. In 4 lines of code. From tabular models to LLMs.
Get started
# Get started
pip install giskard==2.0.0b2
Listed by Gartner: AI Trust, Risk and Security

Detect critical vulnerabilities in your AI model

Performance bias

Identify discrepancies in accuracy, precision, recall, or other evaluation metrics on specific data slices.

Unrobustness

Detect when your model is sensitive to small perturbations in the input data.

Overconfidence

Detect incorrect predictions that your model makes with high confidence.

Data leakage

Detect inflated performance metrics caused by external data unintentionally leaking into your model during training or preprocessing.

Unethical behavior

Identify changes in your model's behavior when sensitive input features (gender, ethnicity...) are switched.

Stochasticity

Detect inherent randomness in your model and avoid variations in your results.
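
Each of these checks runs as part of a single scan. If you want to probe just one category first, the scan can be restricted to specific detector groups. A minimal sketch, assuming the scan accepts an only argument and that the group names below exist (my_model and my_dataset stand for your wrapped model and dataset; check the documentation for the exact identifiers):

import giskard

# Restrict the scan to the robustness and performance detectors only
# (the only= argument and group names are assumptions to verify)
report = giskard.scan(my_model, my_dataset, only=["robustness", "performance"])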

Automatically scan your model to find vulnerabilities

In a few lines of code, identify vulnerabilities that may affect the performance of your AI models, such as data leakage, unrobustness, ethical biases, and overconfidence. Directly in your notebook.
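
The scan works on a wrapped model and dataset. A minimal sketch of the wrapping step, assuming giskard.Model and giskard.Dataset accept a fitted scikit-learn estimator and a pandas DataFrame directly (the iris classifier is purely illustrative):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import giskard

# Train a small classifier to scan (illustrative)
data = load_iris(as_frame=True).frame
clf = LogisticRegression(max_iter=1000).fit(data.drop(columns="target"), data["target"])

# Wrap the model and its evaluation data so giskard can inspect them
my_model = giskard.Model(model=clf, model_type="classification")
my_dataset = giskard.Dataset(df=data, target="target")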

import giskard

# Scan the model for the vulnerabilities listed above
scan_results = giskard.scan(my_model, my_dataset)

# Generate a test suite from the scan findings and run it
test_suite = scan_results.generate_test_suite()
test_suite.run()

Generate and run your test suite

If the scan finds issues with your model, Giskard can automatically generate a test suite, or you can create your own custom tests; running the suite gives you a list of both passed and failed tests.
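
As a sketch of what a custom test can look like, assuming giskard's test decorator, TestResult class, and Suite.add_test method, and assuming the wrapped model's predict result exposes predicted labels as .prediction:

from giskard import test, TestResult

# Hypothetical custom test: pass if overall accuracy stays above a threshold
@test(name="Minimum accuracy")
def test_min_accuracy(model, dataset, threshold=0.8):
    predicted = model.predict(dataset).prediction
    accuracy = (predicted == dataset.df["target"]).mean()
    return TestResult(passed=accuracy >= threshold, metric=accuracy)

# Add it to the generated suite and run everything together
test_suite.add_test(test_min_accuracy(model=my_model, dataset=my_dataset))
test_suite.run()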

Centralize testing in a collaborative Hub

Upload the generated test suite to the Giskard server, create reusable test suites, and get ready-made dashboards you can share with the rest of your team. Compare different model versions over time.

pip install 'giskard[server]==2.0.0b2'

giskard server start
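
Once the server is running, the suite can be pushed to it from your notebook. A sketch, assuming the GiskardClient connection API (the URL, API key, parameter names, and project key are illustrative; check the documentation):

from giskard import GiskardClient

# Connect to the locally running Giskard server
client = GiskardClient(url="http://localhost:19000", key="YOUR_API_KEY")

# Upload the generated test suite to a project shared with your team
test_suite.upload(client, "my_project")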

Collaborative and shareable testing

Leverage the power of our open-source community by easily uploading test fixtures, including AI detectors for identifying issues like hate speech, toxicity, and more.

Access data transformation tools for tasks like rewriting and introducing typos, allowing you to simulate a wide range of real-world scenarios for comprehensive testing.

Benefit from the collective knowledge and resources of our community while enhancing your AI model testing and evaluation processes.
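
As a sketch of what such fixtures look like, assuming giskard's slicing_function and transformation_function decorators and the Dataset.slice and Dataset.transform methods (the text column and threshold are illustrative):

import pandas as pd
from giskard import slicing_function, transformation_function

# Slicing function: keep only rows matching a condition (here, short texts)
@slicing_function(name="Short inputs")
def short_inputs(row: pd.Series) -> bool:
    return len(row["text"]) < 100

# Transformation function: perturb rows to simulate real-world noise
@transformation_function(name="Uppercase text")
def uppercase(row: pd.Series) -> pd.Series:
    row["text"] = row["text"].upper()
    return row

# Apply them to a wrapped dataset before scanning or testing
short_slice = my_dataset.slice(short_inputs)
noisy_data = my_dataset.transform(uppercase)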


Explore our full AI test suite

Reuse tests from our ready-made catalog

Stop wasting time creating new testing artifacts for every new use case. Use our ready-made tests, create new ones, and easily add them to your test suite.

Apply data slicing functions for identifying issues like hate speech, toxicity, and more. Access data transformation tools for tasks like rewriting and introducing typos, allowing you to simulate a wide range of real-world scenarios for comprehensive testing.
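
A sketch of composing a suite from catalog tests, assuming a giskard.testing module with test_accuracy and test_f1 and a chainable Suite.add_test (exact test names and parameters may differ; see the catalog):

from giskard import Suite, testing

# Assemble a quality gate from ready-made catalog tests
suite = (
    Suite(name="Quality gate")
    .add_test(testing.test_accuracy(model=my_model, dataset=my_dataset, threshold=0.85))
    .add_test(testing.test_f1(model=my_model, dataset=my_dataset, threshold=0.80))
)
suite.run()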

Trusted by Modern ML Teams


Giskard really speeds up input gathering and collaboration between data scientists and business stakeholders!

Emeric Trossat
Head of Data
Webedia

Giskard has become a strong partner in our pursuit of ethical AI. It delivers the right tools for releasing fair and trustworthy models.

Arnault Gombert
Head of Data Science
Citibeats

Giskard enables us to integrate Altaroad's business experts' knowledge into our ML models and to test them.

Jean Milpied
Data Science Manager
Altaroad

Giskard allows us to easily identify biases in our models and gives us actionable ways to deliver robust models to our customers.

Maximilien Baudry
Chief Science Officer
Unifai

Join the community

This is an inclusive place where anyone interested in AI Quality is welcome! Leverage best practices from the community, contribute new tests, and build the future of AI safety standards.

Ready. Set. Test!
Get started today
