Open-source solution for AI Alignment


Detect critical vulnerabilities in your AI model
Performance bias
Identify discrepancies in accuracy, precision, recall, or other evaluation metrics on specific data slices.
Unrobustness
Detect when your model is sensitive to small perturbations in the input data.
Overconfidence
Detect incorrect predictions that your model makes with high confidence.
Data leakage
Detect inflated performance metrics caused by unintentional use of external data in your model.
Unethical behavior
Identify changes in your model's behavior when sensitive input attributes (gender, ethnicity, ...) are switched.
Stochasticity
Detect inherent randomness in your model and prevent unexpected variation in your results.


Automatically scan your model to find vulnerabilities
In a few lines of code, identify vulnerabilities that may affect the performance of your AI models, such as data leakage, unrobustness, ethical bias, and overconfidence, directly in your notebook.

import giskard

scan_results = giskard.scan(my_model, my_dataset)
test_suite = scan_results.generate_test_suite("My test suite")
test_suite.run()
Generate and run your test suite
If the scan finds issues with your model, automatically generate a test suite from them, or create your own custom tests. Each run gives you a list of passed and failed tests.
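Conceptually, each test in a suite computes a metric and compares it to a threshold. Giskard provides its own decorators and test classes for this; the sketch below is framework-agnostic, and every name in it (`accuracy`, `run_accuracy_test`) is illustrative rather than part of Giskard's API.

```python
# Framework-agnostic sketch of what a single suite test does:
# compute a metric, then report pass/fail against a threshold.
# All names here are illustrative, not Giskard's actual API.

def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def run_accuracy_test(predictions, labels, threshold=0.8):
    """Return (passed, metric), the way a test-suite entry reports."""
    metric = accuracy(predictions, labels)
    return metric >= threshold, metric

passed, metric = run_accuracy_test([1, 0, 1, 1], [1, 0, 0, 1], threshold=0.7)
print(passed, metric)  # True 0.75
```

A real suite aggregates many such checks (performance on slices, robustness to perturbations, fairness comparisons) and reports them together.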
Centralize testing in a collaborative Hub
Upload the generated test suite to the Giskard server, create reusable test suites, and get ready-made dashboards you can share with the rest of your team. Compare different model versions over time.

giskard server start
Collaborative and shareable testing
Leverage the power of our open-source community by easily uploading test fixtures, including AI detectors for identifying issues like hate speech, toxicity, and more.
Access data transformation tools for tasks like rewriting and introducing typos, allowing you to simulate a wide range of real-world scenarios for comprehensive testing.
Benefit from the collective knowledge and resources of our community while enhancing your AI model testing and evaluation processes.

Reuse tests from our ready-made catalog

Stop wasting time creating new testing artifacts for every new use case. Use our ready-made tests, create new ones, and easily add them to your test suite.
Apply data slicing functions to isolate segments such as hate speech or toxic content, and data transformation tools for tasks like rewriting and introducing typos, letting you simulate a wide range of real-world scenarios for comprehensive testing.
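The two artifact types above can be sketched in plain Python. Giskard itself ships decorators and a catalog of ready-made slicing and transformation functions; the names below (`slice_short_texts`, `add_typo`) are our own illustrations of the pattern, not Giskard's API.

```python
import random

# Slicing function: selects a subset of the dataset to test separately.
def slice_short_texts(rows, max_words=5):
    """Keep only rows whose text has at most max_words words."""
    return [r for r in rows if len(r["text"].split()) <= max_words]

# Transformation function: perturbs an input to simulate real-world noise.
def add_typo(row, rng=random.Random(0)):
    """Swap two adjacent characters in the text, introducing a typo."""
    text = row["text"]
    if len(text) < 2:
        return row
    i = rng.randrange(len(text) - 1)
    typoed = text[:i] + text[i + 1] + text[i] + text[i + 2:]
    return {**row, "text": typoed}

rows = [
    {"text": "great product"},
    {"text": "the delivery was far too slow to be acceptable"},
]
short = slice_short_texts(rows)            # only the first row survives
perturbed = [add_typo(r) for r in short]   # same rows, with typos injected
```

A test suite would then check that the model's performance on the slice, and its predictions on the perturbed inputs, stay within acceptable bounds.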

All resources
Thought leadership articles about AI Quality: Risk Management, Robustness, Efficiency, Reliability & Ethics
Ready. Set. Test!
Get started today
Get started