Blog
March 2, 2023 · 5 min read
Alex Combessie

Giskard mentioned as a significant vendor in Gartner's Market Guide for AI Trust, Risk and Security Management

AI poses new trust, risk and security management requirements that conventional controls do not address. This Market Guide defines new capabilities that data and analytics leaders must have to ensure model reliability, trustworthiness and security, and presents representative vendors who implement these functions.
Gartner Research

We are very proud to announce that Giskard, with our Quality Assurance platform for AI models, has been recognized as a representative vendor in Gartner's "Market Guide for AI Trust, Risk and Security Management 2023" (Gartner subscription required). Giskard is one of 14 vendors cited in the report for Explainability/Model Monitoring.

In addition to this, we launched our public profile on Gartner Peer Insights. This platform allows customers to leave reviews and feedback on their experiences with our product, and we are thrilled to showcase the positive comments we have received thus far.

📘 Market Guide for AI Trust, Risk and Security Management

According to Gartner, "Regulatory and ethical requirements drive organizations to responsible use of artificial intelligence (AI). [...] Enhanced controls with sufficient depth and granularity help protect privacy, promote fairness and reduce model bias."

They add an interesting prediction: "By 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% improvement in terms of adoption, business goals and user acceptance."

Giskard provides an open-source platform for explainability and ML model testing. It automatically generates tests of ML models using domain knowledge collected through collaborative user feedback. It seeks to ensure ML model fairness, robustness and performance.

We are extremely proud to have been recognized as a significant vendor in Gartner's Market Guide, particularly given that it comes just one year after the launch of our software product. This recognition is a testament to our mission of providing companies with an enterprise-ready platform to test, audit & ensure the quality of all AI models.

It is an honor to be included among the leading providers in this market research report, and we are grateful for the trust and confidence that Gartner's analysts and our customers have placed in us.

👉 Click here to download Gartner's Market Guide for AI Trust, Risk and Security Management 2023.

🌟 Gartner Peer Insights

In addition to our recognition in Gartner's Market Guide, we are excited to announce the launch of our public profile on Gartner Peer Insights.

Gartner Peer Insights is a platform for enterprise software users to share their opinions and experiences with different technology vendors and their products. It provides unbiased, real-world perspectives from verified end-users, enabling technology decision-makers to make more informed buying decisions. Gartner Peer Insights uses a rigorous methodology to ensure the authenticity and quality of reviews, and its content is freely available to the public.

We believe that transparency and customer satisfaction are of the utmost importance, and our profile on Gartner Peer Insights is just one example of our commitment to both. Here are two quotes from our customers' reviews:

"Giskard has proven to be an indispensable partner in our mission in Ethical AI. Its tools are designed specifically to help organizations develop fair and trustworthy models, which is critical to building confidence and trust in AI. Giskard's intuitive interface and actionable solutions make it easy for teams to implement ethical practices into their workflows. Overall, I am highly impressed with the impact Giskard has had on our journey towards ethical AI, and I highly recommend it to anyone looking to make a difference in this field."

"Giskard is a great tool for integrating the expertise of our business knowledge experts into our machine learning models. In 30 minutes, I was able to collect 70 pieces of feedback from business departments on my model. These inputs truly helped us improve the performance and reliability of our machine learning models. Overall, I highly recommend Giskard to anyone interested in getting the best from their machine learning models."

👉 Click here to read all the reviews and comments on Gartner Peer Insights.

