March 16, 2023
5 min read
Blanca Rivera Campos

Giskard 1.4 is out! What's new in this version? ⭐

Giskard's new Slice feature lets you identify the business areas in which your AI models underperform, making it easier to debug performance biases and spot spurious correlations. We have also added an export/import feature for sharing your projects, along with other minor improvements.
Giskard's turtle slicing some veggies!

We have released a new version 😻

Hi there,

This month, we have the pleasure of announcing that version 1.4 has been officially launched, hooray!

Check out the new version here.

🔪 Slice feature

This new feature enables you to identify business areas in which your AI models underperform. Identifying these areas is critical because undetected weak spots can lead to catastrophic AI performance issues.

Create your own slice by simply writing a Python function that selects the rows you want to inspect. Once your slice is ready, you can easily ask your business expert for feedback.
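As a rough sketch of what such a slicing function could look like (assuming, for illustration, that a slice is simply a Python function that filters a pandas DataFrame; the column names and thresholds below are hypothetical and not part of Giskard's actual API):

    import pandas as pd

    def senior_high_income_slice(df: pd.DataFrame) -> pd.DataFrame:
        # Keep only the business segment we want to inspect:
        # here, customers over 60 with a high declared income (illustrative columns).
        return df[(df["age"] > 60) & (df["income"] > 50_000)]

Defining the segment as plain Python keeps slicing flexible: any row-level condition you can express on your dataset can become a slice.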

This feature will enable data scientists to debug performance biases or spurious correlations inside their model.

Slice feature - Giskard 1.4

📤 Export and import your projects

Now you can easily export and import your projects (metadata, datasets, models, slices). This feature may come in handy for debugging and for sharing your project without having to share your instance.

Export/import your projects - Giskard 1.4

🐞 Numerous bug fixes and minor improvements

For example:

  • Simplified user management: access is now open by default, without requiring a login.
  • Fixed ML Worker issues that caused it to disconnect after a while.

🗺 More to come

We’re already working on future releases 🔧

We have also started working on how to test generative language models... you will hear from us soon 👀

Thank you so much, and see you soon!
