News
January 11, 2024
8 min read

Giskard's retrospective of 2023 and a glimpse into what's next for 2024!

Our 2023 retrospective covers people, company, customers, and product news, and offers a glimpse into what's next for 2024. Our team keeps growing, with new offices in Paris, new customers, and new product features. Our GitHub repo has nearly reached 2,500 stars, and we were Product of the Day on Product Hunt. All this and more in our 2023 review.

Giskard 2023 retrospective
Alex Combessie

📹 Full video

You will find below the full transcript of the video:

Hello, friends, and Happy New Year. On behalf of the Giskard team, I wish you all the very best for 2024, with success in your work projects and happiness in your personal life. And without further ado, let me give you a quick tour of the new Giskard office. We are right in the center of Paris, and we got a new office for 40 people as the team is growing.

If you want to come visit us, it's very convenient since it's near Gare du Nord. You can see a few of the team members there. And without much further ado, I've prepared a presentation of what we've achieved in 2023 and a glimpse into what's next for 2024. See you in a bit.

Let's jump right into our 2023 in review.

What if we could have Laws of AI?

The inspiration for the name Giskard comes from Isaac Asimov, one of my favorite science fiction authors. In his books, a robot called Giskard invents the Zeroth Law of Robotics, the one that comes before all other laws. It states that a robot may not harm humanity or, by inaction, allow humanity to come to harm.

Today, AI is the robot that science fiction authors like Isaac Asimov imagined in the 1970s, and this mission of making sure that AI is beneficial to humanity profoundly defines why we've created Giskard.

And this dream of Laws of AI actually came true in 2023: the EU Council and Parliament reached the first agreement on rules for AI.

You can read more details about the agreement itself, but in short, it will make it mandatory for risky AI use cases in specific industries to put in place conformity assessments, quality management systems, transparency and documentation. But this dream of laws regulating AI could actually become a data scientist's nightmare, your nightmare. Think about the huge cost of complying with this regulation.

And in a way, you can certainly ask yourselves: why am I wasting so much time on documentation and compliance instead of innovating, creating actual value, creating something new for the world? But the real risks of AI, especially generative AI, are already here today. I took a sample of real incidents, from AI-generated images on Adobe that fuel fake news, to Large Language Models (LLMs) that can steal copyrighted work.

I took this example from AVID, a great organization that we are collaborating with, which documents these vulnerabilities and publishes good research about them. But this is not just research. We've seen that with these chatbots, there are security issues that allow attackers to steal data. A famous attack from a few months ago was simply brute-force asking ChatGPT to repeat a word many times; at a certain point it would actually spill some of its secrets, like personally identifiable data.

And companies that are deploying these systems are seeing the real issues. The AI chatbot of a Chevrolet car dealership made the headlines because it was completely derailed by malicious customers who asked it to make them a deal to buy a car for $1. We are talking about real money and real brand damage, and this is happening today.

In this new world of Gen AI, Giskard is your safety belt. To take the car analogy: you're driving a car on a new road to a fantastic destination, and there are risks. What if you had a safety belt you could trust to protect you from those risks along the road?

I've divided this review of 2023 into several parts. I'll talk about the people, the company, the customers, the products, and then a glimpse of what's next in 2024.

🫰🏻 People

We crossed the milestone of 20 employees not that long ago:

Giskard's Team

And we have a few new joiners, which means we will already be 25 in March. As I said in the intro, we got our new offices in Paris to fit this great team, and we will continue to grow.

We are actually going to double, to 40 and possibly 50 people by the end of the year. So if you are interested, go to giskard.ai/jobs or just search 'Giskard Jobs' on any search engine, and you'll find that we're hiring great talent across the board, from sales and marketing to research, data science and software engineering.

🏢 Company

Last year we had the amazing opportunity to be financed by the European Commission's EIC Accelerator, a very competitive, deep-tech-focused investment arm of the EU. The main goal of this €3 million grant is to expand our product into a full compliance system that makes it easy for companies to comply with the EU AI Act.

3M€ from European Commission's EIC & Bpifrance

So there is really a path there to make sure we can have responsible and regulated AI while continuing to innovate. We also received the i-Lab prize from France, an additional €500K, which brings us to the €3 million. This great investment, which we secured last summer, is enabling us to grow as a team and to serve more and more customers. We also joined an accelerator program from Intel called Intel Ignite, also quite competitive, and we're very honored to be part of this group of amazing companies solving very difficult problems, from hardware like nuclear fusion to new generations of DevOps and AI tooling.

Last year, as we released our testing features for Large Language Models, we were very lucky to get amazing French and international press coverage, notably an article on TechCrunch that made it possible for more and more data scientists and AI leaders to discover Giskard.

Giskard on TechCrunch

💚 Customers

We are now seeing new coverage of Giskard every month, from both a business and a technical perspective, and we're very thankful for that. Thanks to this, we were able to reach the huge milestone of nearly 2,500 GitHub stars on our open-source repo, all in 2023. We actually started 2023 with barely 400 stars and have since quintupled, which is just so humbling.

And thanks to this, we have a growing number of amazing, leading organizations using Giskard, both in its open-source version and in the paid enterprise version. I've put together a list (just a sample, there are many more), but I do want to give a quick shout-out to iAdvize, with whom we are testing a very important chat system for retailers. We are also collaborating with L'Oréal on how to test computer vision systems. As you can see, our tool can serve a lot of AI engineering teams across multiple industries, from finance to retail to specialized tech startups.

Companies using Giskard

💻 Product

The products we've built now form an end-to-end system for AI quality management, with an open-source core (the main testing library), enterprise deployment, collaboration and support through the AI Quality Hub, and LLMon, our monitoring tool for LLMs. With this coherent suite of three products, think of it as the three-point safety belt for AI, we are here to make sure you have the tools to ensure the quality of all of your AI models and to make it easy to comply with the AI regulations and standards that are arriving.

Giskard suite, the three-point safety belt for AI

And we do it with two core values: collaboration and open source. Last year we added support for text-generation LLMs to the ML Testing Library, the AI Quality Hub and LLMon. And this year we are going to bring computer vision testing, thanks to the work with L'Oréal. If there are new types of AI models that you'd like us to test, we are very open.
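For readers who haven't tried the open-source library yet, here is a minimal sketch of what an automatic scan looks like on a simple tabular classifier. The toy data and wrapping arguments are illustrative assumptions, and exact argument names can differ between library versions, so please check the documentation:

```python
# Minimal sketch of an automatic Giskard scan on a toy tabular classifier.
# The dataset is made up for illustration; argument names may vary by version.
import giskard
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age":    [22, 35, 58, 44, 29, 61, 37, 50],
    "income": [20, 45, 80, 60, 30, 90, 55, 70],   # toy values, in k€
    "churn":  [1, 0, 0, 1, 1, 0, 0, 1],
})
clf = LogisticRegression().fit(df[["age", "income"]], df["churn"])

# Wrap the dataset and model so Giskard can inspect them.
wrapped_data = giskard.Dataset(df, target="churn")
wrapped_model = giskard.Model(
    model=lambda data: clf.predict_proba(data[["age", "income"]]),
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=["age", "income"],
)

# Run the automatic scan and turn the findings into a reusable test suite.
scan_report = giskard.scan(wrapped_model, wrapped_data)
test_suite = scan_report.generate_test_suite("Churn model checks")
test_suite.run()
```

The idea is that the scan surfaces vulnerabilities automatically, and the generated test suite can then be re-run as the model evolves.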

The goal is to have this quality system for all AI models. And thanks to the great work of the R&D team at Giskard, we made it to Product of the Day on Product Hunt last November. Huge milestone. Thanks to the community: over 800 people around the world voted, making it possible for our tool to be found by more and more people.

Product of the Day on Product Hunt

As a reference point, our audience in 2022 was mostly in France; now people from all around the world, from the U.S. to India, are discovering our tool. And this is only the beginning.

🐢 What's next? 2024

Here is a sneak peek of our roadmap for 2024.

Making sure that our current tools are continuously maintained and that the test catalog keeps being enriched is very important. Every month we are adding new types of tests, from data quality checks to more LLM tests. We are also working on the stability and scalability of our tools, with the ability to connect multiple machine learning workers if you're a big team with a large testing workload, and we're shipping new releases every week with more stability and bug fixes.

Giskard tools evolving

If you see anything that's not working, let us know on our Discord community or on GitHub and we'll fix it very quickly; that's our priority. But we're also expanding into new areas that we were not tackling before. Giskard was accepted into the Positive AI Consortium, alongside leaders from BCG to Orange, Malakoff and more, who are thinking about how to drive quality in the AI field.

It's a collaboration across private companies that are aiming to do this at scale, with values of ethical AI and responsible AI development. And thanks to this collaboration within the consortium, we realized that quality assurance governance is extremely important and needed by practitioners: data scientists and AI project managers have to provide technical proof that everything has been done properly, and producing and documenting that proof is extremely time-consuming. It's time away from delivering value. So we are attacking this new field of AI governance head-on, building governance features inside our platform with direct feedback from the members of the Positive AI Consortium. You will have an all-in-one hub with your tests, your explainability, and all of the governance documentation.

And this will work across the technologies you use to develop AI models: whether you are using Python, Hugging Face, OpenAI or Dataiku, any tool can be tested, documented and governed in the same way, to really unify QA and governance processes. Expect the first betas in Q1 or Q2 of this year and a full product in general availability at the end of the year. If you are interested in being part of the design partnership and beta program, let me know.

Next, we are officially launching our LLM Red-Teaming-as-a-Service practice, because we've been in touch with many companies in France that are realizing that going from POC to production on an LLM-based application is much harder than anticipated: the number of breaking points in LLMs is much higher than in traditional models.

LLM Red teaming-as-a-Service

We are helping these companies with tools and services to create testing methods so they can deploy their applications with peace of mind. In the next few weeks we will be launching a dedicated page on Red-Teaming-as-a-Service, so if you are interested in getting our help to deploy your critical LLM application, you will have a way to get started.

Next, as I touched upon previously, testing computer vision models has been on our radar from the very beginning of Giskard, and thanks to our collaboration with L'Oréal, which started a few months ago, we are now very close to having several testing methods. You can expect these testing methods to be added to the open-source library in the next few weeks.

And of course, this will open the door to testing generative multimodal models, which is a very important part of our vision, because we think multimodal is the future and it needs to be tested. Whether you feed text, images or tabular data to your model, you need a coherent testing framework that covers all of these data types and model types.

We are investing more and more in integrations. We already shipped a first batch of integrations with Hugging Face and Weights & Biases last year. Expect an announcement on a Databricks MLflow integration, and we are actively collaborating with teams at AWS, Google Cloud, Intel, NVIDIA, Mistral and more to create integrations with these great AI development tools, so that it's easy to connect Giskard and test models developed on these platforms. We are open to more, so again, ping me in the comments if you'd like to build an integration with us.

Giskard Integrations - Open to more!
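Even before a native Databricks MLflow integration lands, one plausible pattern (a sketch under assumptions, not the announced integration) is to export a scan report and attach it to an MLflow run as an artifact. `mlflow.start_run` and `mlflow.log_artifact` are standard MLflow; the `to_html` export is how recent Giskard versions expose the report, so treat the exact method name as an assumption.

```python
# Sketch: attach a Giskard scan report to an MLflow run as an artifact.
# Not the announced Databricks integration -- just plain MLflow artifact logging.
import giskard
import mlflow

# Assumes `wrapped_model` and `wrapped_data` are wrapped as in the earlier sketch.
scan_report = giskard.scan(wrapped_model, wrapped_data)

with mlflow.start_run(run_name="giskard-scan"):
    scan_report.to_html("giskard_scan_report.html")   # export the findings (method name assumed)
    mlflow.log_artifact("giskard_scan_report.html")   # store the report alongside the run
```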

Pretty soon, we will make Giskard's first entry into the US market. We got the amazing opportunity to work with Andrew Ng, one of the teachers I respect the most in the AI industry. Ten years ago I started in the AI field thanks to his course on Coursera; he later created a platform for teaching and learning called DeepLearning.AI, and he agreed to record a one-hour course on machine learning testing with us.

So we'll be traveling to the U.S. to do the recording; this will be the first time we go as a group to the U.S. for such an opportunity. That's from February 26 to March 8. For now, we are planning a week in San Francisco and a week in New York, so if you have ideas of people to meet, let us know. We are there specifically to network and find partners, customers, researchers, people we can help and work with in the U.S. Thank you so, so much.

Looking forward to amazing opportunities, amazing projects and building together in 2024. Thank you.


