"Captain's Log, Stardate 2022.4. After warping into the Open Source galaxy, the GISKARD has been on the lookout for stars on its GitHub repo. Now we are entering orbit around a planet with a technologically advanced civilization that is interested in testing ML models: the Innovators."
It has been yet another amazing & eventful month for the Giskard community. We started onboarding the first users of our product, whose insightful feedback is helping us improve. Our Quality Assurance platform for AI is now being used on real ML models across a variety of use cases: fraud detection, chemical property prediction, document topic classification, etc.
Our crew is also growing: we onboarded two new teammates, Princy Pappachan Iakov (data scientist) and Guillaume Quispe (software engineer). They bring unique experience in Computer Vision, Natural Language Processing, Reinforcement Learning, and Adversarial Testing, and are helping us accelerate our R&D efforts, in particular our upcoming Automated ML Testing feature.
But before we explain how our user onboarding is going, let’s answer an important question.
🙋 Who cares about AI Quality?
Over the past 6 months, we have interviewed a hundred AI practitioners across industries. The hard truth is that today, a majority of people do not really care about AI quality.
44% of companies have not yet adopted AI (McKinsey, 2021). Those who have started adopting AI are currently dealing with more pressing issues: building data pipelines, deploying their first ML models to production, etc.
But what we found out during our research is that there is a minority of people who care deeply about AI Quality.
These people have over a dozen ML models already in production, serving business-critical use cases. To them, quality is essential: if an ML model does not work properly, whether in performance or bias, the risks are clear. To mitigate these risks, some have started to develop custom internal frameworks to validate, test, and monitor ML models.
They are the Innovators.
🌠 The Great Shift
The world is changing. To quote the great Innovator and Chief AI Scientist Yann LeCun:
In the end, I only trust thorough testing.
You can try to explain to me why a jet engine is reliable.
But in the end, trust will come from looking at crash statistics.
Regulators are also pushing for quality:
Providers of high-risk AI systems shall put a quality management system in place to ensure compliance.
Article 17, European AI Act
High-risk AI systems encompass not only critical industrial and healthcare applications but also essential private and public services: credit scoring, recruitment, fraud detection, etc.
Companies that do not comply with this regulation risk a fine of up to 6% of annual global revenue. This is even higher than the GDPR data protection regulation, which caps fines at 4%.
While the EU AI Act will be enacted in 2023-2024, we are already seeing its consequences today. This week, the new EU law regulating social networks came out. It is directly inspired by the AI Act. This is expected, since AI algorithms used by social media companies can be considered high-risk to our societies.
What about Standards organizations? They are working on it too.
We are already in contact with the LNE in France and NIST in the US. We want to make it simple and smooth to apply these quality standards to your AI workflow.
👉 How we launched the first community for AI Quality Innovators
🥇 Quality before quantity
Giskard is all about quality, so naturally, we decided to focus the early days of our community on developing high-quality relationships with a small group of AI Innovators.
To get started, we reached out to ML experts within our professional network. Then we asked each of these experts to introduce us to others in their ecosystem, creating a snowball effect.
Right now, we are working in close collaboration with a dozen Data Scientists and ML Engineers. We call them our Design Partners! 🤝
💖 Become a Giskard Design Partner
As our product is young, working in close collaboration with our first users is very important to identify what to improve, and how we can deliver value. It needs to be a Win-Win scenario!
We framed the benefits & guidelines of this collaboration as follows:
When we say close collaboration, we really mean it. We speak to our design partners on a weekly basis. Every time they hit a blocker or have an idea for a new feature, we jump on a call or on Discord to help and scope the necessary development, whether it is a bug fix or a new feature.
By working with a small number of advanced users, who resonate with our value proposition, we are able to move very quickly and prioritize our roadmap.
If you are interested in joining our Design Partner program, drop us a line at email@example.com. We have the bandwidth to onboard more people and are always happy to help! The final entry date is June 2022.
👐 Contributing to Giskard
What is really cool & powerful about Open-Source communities is that we are also reaching people from outside our network. They discover our product organically, try it, and immediately want to contribute!
Soon, we will publish a document explaining how you can contribute. There are many ways to help the community, and all are equally valuable: adding new features and integrations, reviewing code, creating tutorials, writing a blog post about Giskard, etc.
First, we want to thank every single one of the first 🌟 105 data scientists & engineers 🌟 who starred our repo on GitHub. You are the real stars, our first believers!
Finally, we would like to thank our earliest and most advanced design partners:
- Pierre Girardeau and Jean-Baptiste Juin at Cross Data, for being the first to install Giskard as a managed service on your servers.
- Viphone Rathikoun at L’Oréal, for being such a champion advocating for Giskard to facilitate collaboration between AI and Business teams.
- Nicolas Nguyen Khoa Man and Emeric Trossat at Webedia, for being so enthusiastic about advanced features like automatic testing and data augmentation.
- Kevin Vu at Wakam, for pushing the idea of actually selling insurance against AI risks.
- Cyril Le Mat at Cornerstone, for sharing your vision to make ML models production-ready.
📍 What's next?
Our team is busy working on our next big feature: Automated ML Testing! This will help ML Engineers build automatic test suites to ensure ML model performance, robustness & ethics.
Here is a teaser of what it currently looks like:
This feature will help bridge the gap between the sandbox world of ML development and the real world of production. Bonus: you will be able to integrate it directly into your CI/CD pipelines!
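To give a concrete flavor of the idea, here is a minimal sketch of what an automated ML performance test in a CI/CD pipeline could look like. This is an illustration using plain scikit-learn, not Giskard's actual API; the dataset, threshold, and function name are placeholders:

```python
# Illustrative sketch (not Giskard's API): a performance test for an ML
# model that a CI/CD pipeline could run before every deployment.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def test_model_performance(min_accuracy=0.9):
    # A toy dataset standing in for real production data.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )

    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    # If performance regresses below the threshold, the test fails
    # and the pipeline blocks the deployment.
    assert accuracy >= min_accuracy, f"Accuracy {accuracy:.3f} below threshold"
    return accuracy
```

A test suite like this can be collected by a runner such as pytest and wired into a CI step, so that every commit re-validates the model; robustness and ethics checks would follow the same assert-based pattern.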
It is currently available in Beta. Some of our design partners are already testing it. We are a few weeks away from public release, as we refine our suite of pre-defined tests and our automated testing engine.
We have also been listening to feedback from our design partners on how to improve our existing module, AI Inspect. New features will be rolling out with the next release.
Drop us a line at firstname.lastname@example.org if you would like to get a demo!
Lastly, we are working on a brand new website to better engage our community, with use case examples, technical content, and product highlights.
Thanks again to all 368 of you for subscribing to this newsletter. If you have any questions or ideas on how we can improve, we are very interested in your feedback.
Share it around if you like it!