November 9, 2023
4 min read

Our LLM Testing solution is launching on Product Hunt 🚀

We have just launched Giskard v2, extending the testing capabilities of our library and Hub to Large Language Models. Support our launch on Product Hunt and explore our new integrations with Hugging Face, Weights & Biases, MLflow, and DagsHub. A big thank you to our community for helping us reach over 1,900 stars on GitHub.

Giskard’s LLM Testing solution is launching on Product Hunt
Blanca Rivera Campos

Hi there,

The Giskard team hopes you're having a good week! This month, we're pleased to announce the release of Giskard v2!

Our new release extends our testing capabilities to Large Language Models (LLMs), packed with features and integrations designed to automate vulnerability detection, ease compliance, and foster collaborative efforts in AI quality assurance.

And this new release comes with a big launch on Product Hunt!

 🚀 Follow the launch live

If you already have an account and want to support our work, you can of course upvote us 😻

👥 Community news

1900+ Stars on our GitHub repository! 🌟

Special thanks to our amazing community for their support! We've reached an incredible milestone of 1.9k stars on our GitHub repository, and it wouldn't have been possible without you.

We also want to extend our gratitude to the ML thought leaders and content creators who helped make it all possible.

Check out our repository

🔍 Evaluate your LLM application

You can now automatically test your LLMs for real-world vulnerabilities: the library includes specialized tests for distinct applications such as chatbots and RAG. We've also introduced support for testing custom LLM APIs, opening the door to a broader spectrum of models beyond LangChain.

LLM Scan feature

Our team has been working to further improve the LLM scan, which now detects even more categories of potential issues:

✅ Hallucinations & misinformation
✅ Harmful content
✅ Prompt injections
✅ Sensitive information disclosure
✅ Robustness issues
✅ Stereotypes & discrimination

📒 Steps to run it in your notebook

After installing the required libraries, load your model (more info here):

Then, you can scan your model to detect vulnerabilities in a single line of code!

Try it in this notebook

🔧 Test & debug your LLMs at scale

We've enhanced our platform, which is now called Giskard Hub.

To facilitate ML testing at enterprise-scale, we’ve added some new features:

  • Extended capabilities to LLMs.
  • Debug your models thanks to interactive model insights: Get automated insights to fix gaps in your testing, making your test suites more comprehensive.
  • Compare ML models across multiple metrics.
Giskard Hub Debugger

🤗 New integrations

Giskard + Hugging Face

🤗 You can now test & debug your ML models in the Giskard Hub using Hugging Face Spaces.

Try it on HF Spaces

🐝 Weights & Biases: Giskard's automated vulnerability detection, combined with W&B's tracing tools, is an ideal pairing for building and debugging ML apps, from tabular models to LLMs.

Find out more

🏃 MLFlow: Automatically evaluate your ML model with MLflow's evaluation API by installing Giskard as a plugin.

Find out more

🐶 DagsHub: With its multifaceted platform and free hosted MLflow server, DagsHub enhances how you explore and debug Giskard's vulnerability reports.

Find out more

🔥 We are now part of Intel Ignite

Giskard has joined Intel Ignite, Intel's European deep tech accelerator, a program renowned for accelerating the growth of deep tech startups. It's an opportunity to grow with expert mentorship, connect with top industry players, and access Intel's global network and technological resources.

A huge thank you to Intel for this opportunity to scale our impact!

Find out more

🍿 Video tutorials

In this new tutorial we'll show you how to test your LLM using our open-source Python library and its LLM scan.

Make sure to keep an eye on our YouTube channel as we'll be adding even more video tutorials, with guidance on using Giskard, testing your ML models, and making them robust, reliable & ethical.

🗞️ What's the latest news?

Towards AI Regulation: How Countries are shaping the future of AI

Explore global AI regulation strategies and how nations balance AI's potential with its risks. From the EU AI Act to worldwide perspectives, discover the landscape of AI governance.

Read more »

🗺️ What's next?

Giskard v2 has been two years in the making, the work of a group of passionate ML engineers, ethicists, and researchers, and we are excited to show it to the world!

Follow the launch live 🚀

We're also working on expanding our testing capabilities to become the standard of LLM quality assurance, from automated model testing to debugging and monitoring.

Stay tuned for the latest updates!

Thank you so much, and see you soon! ❤️

Integrate | Scan | Test | Automate

Giskard: Testing & evaluation framework for LLMs and AI models

Automatic LLM testing
Protect against AI risks
Evaluate RAG applications
Ensure compliance