April 12, 2023
5 min read
Blanca Rivera Campos

🔥 The safest way to use ChatGPT... and other LLMs

With Giskard’s SafeGPT you can say goodbye to errors, biases, and privacy issues in LLMs. Its features include an easy-to-use browser extension and a monitoring dashboard (for ChatGPT users), and a ready-made and extensible quality assurance platform for debugging any LLM (for LLM developers).

Hi there,

The Giskard team hopes you're having a good week! This month, we have the pleasure of introducing SafeGPT.

This new solution is the culmination of weeks of hard work by the Giskard team. As a company that advocates for responsible AI, we have actively listened to our users and the community to analyze the risks associated with Large Language Models (LLMs) like ChatGPT.

While we are all excited about the LLM revolution, we acknowledge the safety risks involved. At Giskard, we are committed to helping mitigate these risks and ensure that AI serves the economic performance of companies while respecting the rights of users and citizens.

It is also crucial to have independent third-party evaluations to assess the safety of generative language models. These evaluations, conducted by entities separate from the developers of LLMs, provide important checks and balances to ensure responsible regulation of the system.

😍 We are proud to offer this new tool to our valued users. You can now get early access to our solution to safely deploy Large Language Models like ChatGPT.

Join the waitlist 🌟

In this newsletter, you will get a sneak peek of SafeGPT, which will offer:

  • for ChatGPT users: an easy-to-use browser extension and a monitoring dashboard to prevent errors, biases, and privacy issues,
  • for LLM developers: a ready-made and extensible quality assurance platform for debugging any LLM.

✅ Say goodbye to errors, biases & privacy issues in LLMs

With our innovative new solution, we're introducing the safest way to use LLMs like ChatGPT. We have developed a browser extension that allows you to identify wrong answers, reliability issues, and ethical biases for any LLM.

To enable continuous monitoring of your LLM results, we've also created a comprehensive dashboard that allows you to easily track your LLM system's performance at a glance. Our dashboard includes alerting & root-cause analysis capabilities, and you can filter by queries, topics and users.

Our browser extension checks for errors and biases in LLMs
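
To make the monitoring idea more concrete, here is a minimal sketch in Python of the kind of interaction record such a dashboard could aggregate and filter. The field names and the `interactions_with_flag` helper are illustrative assumptions for this example, not SafeGPT's actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative assumption: a minimal log record that a monitoring dashboard
# could aggregate. This is not SafeGPT's actual data model.
@dataclass
class LLMInteraction:
    user: str
    topic: str
    query: str
    answer: str
    flags: list[str] = field(default_factory=list)  # e.g. ["hallucination", "privacy"]
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def interactions_with_flag(interactions: list[LLMInteraction], flag: str) -> list[LLMInteraction]:
    """Filter logged interactions by a flag, e.g. to drive an alert or a root-cause view."""
    return [i for i in interactions if flag in i.flags]
```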

🧠 Quality Assurance platform for LLMs

In addition, our Giskard platform allows you to debug any Large Language Model. You can easily create and execute custom tests, and compare the output quality of different LLM providers.

With Giskard, you'll be able to run tests from our catalog, covering reliability, bias, performance, and fairness, or upload your own tests. Furthermore, you can easily diagnose errors and debug your models using our visual interface.

Giskard's Quality Assurance platform for LLMs
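
To illustrate what such a custom test could look like, here is a minimal sketch in plain Python. It is a hypothetical example, not Giskard's actual testing API; the `call_llm` callable and the expected-answer check are assumptions made for the illustration.

```python
from typing import Callable

# Hypothetical custom reliability test, not Giskard's actual API: ask a model a
# set of factual questions and check how often the expected answer appears.
def reliability_test(model: Callable[[str], str],
                     cases: list[tuple[str, str]],
                     threshold: float = 0.9) -> bool:
    hits = sum(expected.lower() in model(question).lower()
               for question, expected in cases)
    accuracy = hits / len(cases)
    print(f"accuracy = {accuracy:.2f} (threshold = {threshold})")
    return accuracy >= threshold

# Usage sketch: `call_llm` would wrap whichever LLM provider you want to evaluate.
# cases = [("What is the capital of France?", "Paris")]
# passed = reliability_test(call_llm, cases)
```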

🤔 What features will be included in SafeGPT?

We protect against major risks, including:

📉 Hallucinations: Factual errors made by LLMs lead to serious distrust in the model, and can have severe financial implications.

🔐 Privacy issues: LLMs can leak private and sensitive data that you want to protect.

⚖  Ethical biases: LLMs can generate toxic or biased content, harming society & posing reputational risks.

🔍 Robustness issues: Answers can vary depending on LLM providers. SafeGPT compares the answers to check their robustness.
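
As a rough illustration of that cross-provider comparison, here is a minimal sketch assuming two provider callables and a simple token-overlap similarity; a real check could rely on semantic similarity instead. This is an assumption-laden example, not SafeGPT's actual implementation.

```python
from typing import Callable

def answer_similarity(a: str, b: str) -> float:
    """Crude token-overlap (Jaccard) similarity between two answers."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

def robustness_check(prompt: str,
                     provider_a: Callable[[str], str],
                     provider_b: Callable[[str], str],
                     min_similarity: float = 0.5) -> bool:
    """Flag a prompt as non-robust when two providers disagree too much on the answer."""
    return answer_similarity(provider_a(prompt), provider_b(prompt)) >= min_similarity
```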

And what else? 👀

SafeGPT will be:

  • Compatible with any LLM, including ChatGPT
  • Based on real, up-to-date data
  • Built on state-of-the-art research
  • Ready to scale
  • Secure & trusted
  • Backed by fast support

🤩 Don't miss out and get early access to SafeGPT

How can I get access to SafeGPT?

👉 Just join the waitlist!

Share it with your friends and colleagues to move up the list!

Find out more about SafeGPT

🗺 More to come

After joining the waitlist, you will be notified shortly with all the details.

We'll be glad to have you onboard 🤗


Thank you so much, and see you soon!

The Giskard team 🐢

