The ML Testing Hub

Giskard Hub is a collaborative platform for curating domain-specific tests, comparing and debugging models, and collecting feedback on your ML systems. Ensure AI safety and deploy faster.
Get started

ML Testing and Debugging solution


Continuously validate your ML models: create and run test suites, and debug your models collaboratively so you can deploy them safely.

Key features

Collaboration

Create a human feedback loop with reusable test components and visual debugging dashboards.

Secure & Fast

On-premise, with user authentication and encryption, so your data stays with you.

Compliance with AI regulations

Avoid hefty fines for non-compliance by using our AI Quality Management System.

Save time with our catalog of reusable test components

Stop wasting time creating new testing components for every new ML use case. Use our ready-made tests, create new ones, and easily add them to your test suite.

Apply data slicing functions to identify issues like hate speech, toxicity, and more. Access data transformation tools for tasks like rewriting and introducing typos, allowing you to simulate a wide range of real-world scenarios for comprehensive testing.
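Conceptually (this is a minimal sketch, not the actual Giskard API), a slicing function filters a dataset down to rows matching a condition, while a transformation function perturbs inputs, for example by introducing typos:

```python
import random

def flagged_slice(rows, keywords=("hate", "stupid")):
    """Slicing function: keep only rows whose text contains flagged keywords."""
    return [r for r in rows if any(k in r["text"].lower() for k in keywords)]

def typo_transform(text, rate=0.1, seed=0):
    """Transformation function: randomly swap adjacent characters
    to simulate real-world typos, reproducibly via a fixed seed."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

rows = [{"text": "I hate this product"}, {"text": "Works great"}]
flagged = flagged_slice(rows)   # only the first row matches
noisy = typo_transform("the quick brown fox")
```

Running a test suite on the `flagged` slice surfaces failures that an aggregate metric would hide, and running it on `typo_transform`-ed inputs checks robustness to noisy text.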

Supported Data and Model types

Data

  • Tabular data
  • Text data
  • Images (coming soon)
  • Audio (coming soon)
  • Time series (coming soon)

ML model task

  • Classification
  • Regression
  • Text generation
  • Image generation (coming soon)
  • Time series forecasting (coming soon)

Join the community

This is an inclusive place where anyone interested in ML quality is welcome! Leverage best practices from the community, contribute new tests, and help build the future of AI safety standards.