News
April 12, 2023
4 min read

🔥 The safest way to use ChatGPT... and other LLMs

With Giskard’s SafeGPT, you can say goodbye to errors, biases, and privacy issues in LLMs. Its features include an easy-to-use browser extension and a monitoring dashboard (for ChatGPT users), and a ready-made, extensible quality-assurance platform for debugging any LLM (for LLM developers).

Blanca Rivera Campos

Hi there,

The Giskard team hopes you're having a good week! This month, we have the pleasure of introducing SafeGPT.

This new solution is the culmination of weeks of hard work by the Giskard team. As a company that advocates for responsible AI, we have actively listened to our users and the community to analyze the risks associated with Large Language Models (LLMs) like ChatGPT.

While we are all excited about the LLM revolution, we acknowledge the safety risks involved. At Giskard, we are committed to helping mitigate these risks and ensure that AI serves the economic performance of companies while respecting the rights of users and citizens.

It is also crucial to have independent third-party evaluations to assess the safety of generative language models. These evaluations, conducted by separate entities from the developers of LLMs, provide important checks and balances to ensure responsible regulation of the system.

😍 We are proud to offer this new tool to our valued users. You can now get early access to our solution for safely deploying Large Language Models like ChatGPT.

Join the waitlist 🌟

In this newsletter, you will get a sneak peek of SafeGPT, which will offer:

  • for ChatGPT users: an easy-to-use browser extension and a monitoring dashboard to prevent errors, biases, and privacy issues,
  • for LLM developers: a ready-made and extensible quality assurance platform for debugging any LLM.

✅ Say goodbye to errors, biases & privacy issues in LLMs

With our innovative new solution, we're introducing the safest way to use LLMs like ChatGPT. We have developed a browser extension that allows you to identify wrong answers, reliability issues, and ethical biases for any LLM.

To enable continuous monitoring of your LLM results, we've also created a comprehensive dashboard that allows you to easily track your LLM system's performance at a glance. Our dashboard includes alerting & root-cause analysis capabilities, and you can filter by queries, topics and users.

Our browser extension checks for errors and biases in LLMs

🧠 Quality Assurance platform for LLMs

In addition, our Giskard platform allows you to debug any Large Language Model. You can easily create and execute custom tests, and compare the output quality of different LLM providers.

With Giskard, you'll be able to run tests from our catalog, covering reliability, bias, performance, and fairness, or upload your own tests. You can also easily diagnose errors and debug your models using our visual interface.

Giskard's Quality Assurance platform for LLMs

🤔 What features will be included in SafeGPT?

We protect against major risks, including:

📉 Hallucinations: Factual errors made by LLMs lead to serious distrust in the model, and can have severe financial implications.

🔐 Privacy issues: LLMs can leak private and sensitive data that you want to protect.

⚖  Ethical biases: LLMs can generate toxic or biased content, harming society & posing reputational risks.

🔍 Robustness issues: Answers can vary depending on LLM providers. SafeGPT compares the answers to check their robustness.
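As a toy illustration of the cross-provider comparison idea (not SafeGPT's actual method — the provider functions are stubs and the agreement metric is a deliberately crude string similarity), a robustness check could look like:

```python
# Hypothetical sketch of a cross-provider robustness check: ask the same
# question to two providers and flag disagreement. Providers are stubbed.
from difflib import SequenceMatcher

def provider_a(prompt: str) -> str:
    return "Paris is the capital of France."

def provider_b(prompt: str) -> str:
    return "The capital of France is Paris."

def answers_agree(a: str, b: str, threshold: float = 0.5) -> bool:
    """Crude agreement score: string similarity above a threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

prompt = "What is the capital of France?"
print(answers_agree(provider_a(prompt), provider_b(prompt)))
```

In practice a semantic comparison (e.g. embedding similarity) would replace the raw string match, but the principle is the same: divergent answers to the same query are a robustness warning sign.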

And what else? 👀

SafeGPT will be:

  • Compatible with any LLM, including ChatGPT
  • Built on real, up-to-date data
  • Based on state-of-the-art research
  • Ready to scale
  • Secure & trusted
  • Backed by fast support

🤩 Don't miss out and get early access to SafeGPT

How can I get access to SafeGPT?

👉 Just join the waitlist!

Share it with your friends and colleagues to move up the list!

Find out more about SafeGPT

🗺 More to come

After joining the waitlist, you will get notified shortly with all the details.

We'll be glad to have you onboard 🤗


Thank you so much, and see you soon!

The Giskard team 🐢
