Tutorials

Data Drift Monitoring with Giskard

Learn how to effectively monitor and manage data drift in machine learning models to maintain accuracy and reliability. This article gives a concise overview of the types of data drift, detection techniques, and strategies for preserving model performance amidst changing data, and offers data scientists practical guidance on setting up, monitoring, and adjusting models to address drift, emphasising the importance of ongoing model evaluation and adaptation. A minimal drift-detection sketch follows this entry.

Sagar Thacker
View post
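One widely used detection technique for this kind of monitoring is the Population Stability Index (PSI), which bins a reference (training-time) sample and measures how far a current (production) sample has shifted. The sketch below is a generic NumPy illustration, not Giskard's own drift API; the 0.2 threshold is a common rule of thumb assumed here for the example.

```python
import numpy as np

def population_stability_index(reference, current, n_bins=10):
    """Compare two numerical samples by binning the reference
    distribution and measuring how the current sample shifts."""
    # Bin edges come from the reference (training-time) data.
    edges = np.histogram_bin_edges(reference, bins=n_bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Convert to proportions, with a small epsilon to avoid division by zero.
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_pct = cur_counts / max(cur_counts.sum(), 1) + eps

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Simulated example: production data whose mean has shifted.
rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
prod_feature = rng.normal(loc=0.5, scale=1.0, size=5_000)

psi = population_stability_index(train_feature, prod_feature)
print(f"PSI = {psi:.3f}")  # values above ~0.2 are commonly treated as drift
print("Drift detected" if psi > 0.2 else "No significant drift")
```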
Build and evaluate a Customer Service Chatbot. Image generated by DALL-E
Tutorials

How to find the best Open-Source LLM for your Customer Service Chatbot

Explore how to use open-source Large Language Models (LLMs) to build AI customer service chatbots. We guide you through creating chatbots with the LangChain and HuggingFace libraries, and show you how to evaluate their performance and safety using Giskard's testing framework. A small generation sketch follows this entry.

Ashna Ahmad
View post
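To give a feel for the starting point, the sketch below generates a customer-service style reply with the HuggingFace transformers text-generation pipeline. The checkpoint name and prompt are illustrative placeholders chosen for this example; the tutorial itself layers LangChain on top and evaluates the resulting chatbot with Giskard.

```python
from transformers import pipeline

# Any small open-source chat/instruct checkpoint can be substituted; this name is illustrative.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompt = (
    "You are a polite customer service agent for an online shoe store.\n"
    "Customer: My order #1234 arrived damaged. What can I do?\n"
    "Agent:"
)

# Greedy decoding keeps the example deterministic; the output includes the prompt by default.
result = generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]
print(result[len(prompt):].strip())
```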
Tutorials

Mastering ML Model Evaluation with Giskard: From Validation to CI/CD Integration

Learn how to integrate vulnerability scanning, model validation, and CI/CD pipeline optimization to ensure the reliability and security of your AI models. Discover best practices, workflow simplification, and techniques to monitor and maintain model integrity. From basic setup to more advanced uses, this article offers invaluable insights to enhance your model development and deployment process. A sketch of a scan-based CI gate follows this entry.

Sagar Thacker
View post
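The kind of gate the article wires into a CI/CD pipeline can be sketched roughly as below, following Giskard 2.x's documented entry points (giskard.Model, giskard.Dataset, giskard.scan, generate_test_suite) as best remembered. The exact argument names, the `.passed` attribute, and the toy scikit-learn model are assumptions to verify against the installed version.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

import giskard

# Toy classifier standing in for the model under test.
data = load_breast_cancer(as_frame=True)
df = data.frame.rename(columns={"target": "label"})
features = [c for c in df.columns if c != "label"]
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(df[features], df["label"])

# Wrap the model and data for Giskard (argument names per the 2.x docs; verify locally).
g_model = giskard.Model(
    model=lambda d: clf.predict_proba(d[features]),
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=features,
)
g_dataset = giskard.Dataset(df=df, target="label")

# Run the vulnerability scan and turn its findings into a reusable test suite.
report = giskard.scan(g_model, g_dataset)
suite_result = report.generate_test_suite("CI regression suite").run()

# In CI, fail the build when the suite does not pass.
if not suite_result.passed:
    raise SystemExit("Giskard test suite failed - blocking the pipeline")
```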
Tutorials

How to address Machine Learning Bias in a pre-trained HuggingFace text classification model?

Machine learning models, despite their potential, often face issues like biases and performance inconsistencies. As these models find real-world applications, ensuring their robustness becomes paramount. This tutorial explores these challenges, using the Ecommerce Text Classification dataset as a case study, and highlights key measures and tools, such as Giskard, to boost model performance. A small bias-probing sketch follows this entry.

Mostafa Ibrahim
View post
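A minimal version of the kind of bias probe the tutorial motivates: run a pre-trained HuggingFace text classification pipeline on sentence pairs that differ only in a gendered word and flag label changes. The checkpoint and the example sentences are illustrative choices, not the exact ones used in the tutorial.

```python
from transformers import pipeline

# A generic pre-trained checkpoint, used purely to illustrate the probing idea.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Minimal bias probe: the prediction should not change when only the gendered word does.
pairs = [
    ("He returned the blender because it stopped working.",
     "She returned the blender because it stopped working."),
    ("The salesman gave excellent advice about the laptop.",
     "The saleswoman gave excellent advice about the laptop."),
]
for male_text, female_text in pairs:
    male_pred, female_pred = classifier([male_text, female_text])
    flag = "OK" if male_pred["label"] == female_pred["label"] else "POTENTIAL BIAS"
    print(f"{flag}: {male_pred['label']} vs {female_pred['label']}")
```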
Eliminating bias in Machine Learning predictions
Tutorials

Guide to Model Evaluation: Eliminating Bias in Machine Learning Predictions

Explore our tutorial on model fairness to detect hidden biases in machine learning models. Understand the flaws of traditional evaluation metrics with the help of the Giskard library. Our guide, packed with examples and a step-by-step process, shows you how to tackle data sampling bias and master feature engineering for fairness. Learn to create domain-specific tests and debug your ML models, ensuring they are fair and reliable. A minimal per-group check follows this entry.

Josiah Adesola - Technical writer
View post
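To illustrate why a single global metric can mislead, the sketch below slices a model's accuracy by a protected attribute instead of reporting only the overall score. The synthetic data, column names, and the 5-point tolerance are all invented for the example.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data with a protected attribute ("group"), purely for illustration.
rng = np.random.default_rng(0)
n = 4_000
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "tenure": rng.integers(0, 20, n),
    "group": rng.choice(["A", "B"], n, p=[0.8, 0.2]),
})
# The label depends slightly on the group, mimicking sampling/label bias.
df["label"] = ((df["income"] + (df["group"] == "A") * 5 + rng.normal(0, 10, n)) > 55).astype(int)

X = pd.get_dummies(df[["income", "tenure", "group"]])
X_train, X_test, y_train, y_test, _, group_test = train_test_split(
    X, df["label"], df["group"], test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
preds = pd.Series(model.predict(X_test), index=y_test.index)

# A global score can hide large gaps between groups; slice the metric instead.
print(f"overall accuracy: {accuracy_score(y_test, preds):.3f}")
per_group = {
    g: accuracy_score(y_test[group_test == g], preds[group_test == g])
    for g in ("A", "B")
}
print("per-group accuracy:", per_group)
gap = abs(per_group["A"] - per_group["B"])
print("PASS" if gap <= 0.05 else f"FAIL: accuracy gap of {gap:.2f} between groups")
```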
SHAP values - based on https://github.com/shap/shap
Tutorials

Opening the Black Box: Using SHAP values to explain and enhance Machine Learning models

SHAP stands for "SHapley Additive exPlanations", a unified approach to explaining the output of any machine learning model. By delivering cohesive explanations, it provides invaluable insight into how predictions are made and opens up many practical applications. In this tutorial we explore how to use SHAP values to explain and improve ML models, delving into specific use cases as we go. A short SHAP sketch follows this entry.

Mykyta Alekseiev - ML Engineering Intern
View post
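A minimal version of the workflow the tutorial expands on: fit a model, compute SHAP values with the shap package, and rank features by their mean absolute contribution. The dataset and model here are stand-ins chosen for brevity.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a simple model to explain (stand-in for any real model).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Compute SHAP values; shap.Explainer dispatches to a tree explainer here.
explainer = shap.Explainer(model, X.iloc[:100])
shap_values = explainer(X.iloc[:100])

# Some explainers return one column of values per class; keep the last (positive) class.
values = shap_values.values
if values.ndim == 3:
    values = values[..., -1]

# Features with the largest mean |SHAP value| drive the predictions most.
mean_abs = np.abs(values).mean(axis=0)
for name, value in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name:30s} {value:.3f}")

# shap.plots.beeswarm(shap_values)  # graphical summary when running interactively
```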
Testing Classification Models for Fraud Detection with Giskard
Tutorials

Testing Machine Learning Classification models for fraud detection

This article explains how the Giskard open-source ML framework can be used to test ML models, applied here to fraud detection. It explores the components of Giskard: the Python library, its user-friendly interface, its installation process, and a practical implementation for banknote authentication. The article provides a step-by-step guide with code snippets and leverages the banknote authentication dataset to develop an accurate ML model. A minimal modelling sketch follows this entry.

Happiness Omale - Technical writer
View post
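The modelling half of the article can be previewed in a few lines of scikit-learn on the UCI banknote authentication dataset. The download URL and column names below follow the UCI archive's copy of the dataset and should be treated as assumptions to check; Giskard's testing layer is what the article adds around a model like this.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# UCI banknote authentication data: four wavelet-based image features plus a class label.
URL = (
    "https://archive.ics.uci.edu/ml/machine-learning-databases/"
    "00267/data_banknote_authentication.txt"
)
columns = ["variance", "skewness", "curtosis", "entropy", "is_fake"]
df = pd.read_csv(URL, header=None, names=columns)

X_train, X_test, y_train, y_test = train_test_split(
    df[columns[:-1]], df["is_fake"], test_size=0.25, random_state=7, stratify=df["is_fake"]
)

model = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```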
Robot reading a newspaper generated by open-source generative AI model ControlNet and Stable Diffusion
Tutorials

How to evaluate and load a PyTorch model with Giskard?

This tutorial teaches you how to upload a PyTorch model (built from scratch or pre-trained) to Giskard and identify potential errors and biases. A small wrapping sketch follows this entry.

Favour Kelvin
View post
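As a rough outline of the first step, the sketch below defines a small from-scratch PyTorch tabular classifier and exposes it as a DataFrame-in, probabilities-out prediction function, which is the general shape a classification wrapper expects. The architecture and feature names are invented for illustration; the actual Giskard wrapping and upload calls are what the tutorial covers.

```python
import numpy as np
import pandas as pd
import torch
from torch import nn

# A small tabular classifier built from scratch, purely illustrative.
class TabularNet(nn.Module):
    def __init__(self, n_features: int, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

feature_names = ["age", "balance", "tenure"]  # placeholder feature names
model = TabularNet(n_features=len(feature_names))
model.eval()

def predict_proba(df: pd.DataFrame) -> np.ndarray:
    """Prediction function over a DataFrame, returning class probabilities."""
    with torch.no_grad():
        logits = model(torch.tensor(df[feature_names].values, dtype=torch.float32))
        return torch.softmax(logits, dim=1).numpy()

# Sanity check on a tiny batch of fake rows.
sample = pd.DataFrame({"age": [25, 60], "balance": [1200.0, 300.0], "tenure": [2, 15]})
print(predict_proba(sample))
```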
Picture illustrating gender bias generated by DALL-E2
Tutorials

How to test the fairness of ML models? The 80% rule to measure the disparate impact

This article provides a step-by-step guide to detecting ethical bias in AI models, using a customer churn model built with the LightGBM ML library as an example. We show how to calculate the disparate impact metric with respect to gender and age, and demonstrate how to implement this metric as a fairness test within Giskard's open-source ML testing framework. A short computation of the metric follows this entry.

Rabah Abdul Khalek
View post
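The disparate impact metric at the centre of the article is simply the ratio of favourable-outcome rates between an unprivileged and a privileged group, with the four-fifths (80%) rule flagging values below 0.8. The sketch below computes it with plain pandas on toy churn predictions invented for the example; the article shows how to package the same check as a Giskard fairness test.

```python
import pandas as pd

def disparate_impact(predictions: pd.Series, protected: pd.Series,
                     unprivileged, privileged, positive_label=1) -> float:
    """Ratio of favourable-outcome rates: P(pred=+ | unprivileged) / P(pred=+ | privileged)."""
    rate_unpriv = (predictions[protected == unprivileged] == positive_label).mean()
    rate_priv = (predictions[protected == privileged] == positive_label).mean()
    return rate_unpriv / rate_priv

# Toy churn predictions with a gender attribute; 1 marks the favourable outcome.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "predicted_no_churn": [1, 0, 0, 1, 1, 1, 1, 0, 1, 1],
})

di = disparate_impact(df["predicted_no_churn"], df["gender"],
                      unprivileged="F", privileged="M")
print(f"disparate impact = {di:.2f}")
print("PASS (>= 0.8)" if di >= 0.8 else "FAIL: potential adverse impact (< 0.8)")
```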
Happy green robot generated by open-source generative AI model Stable Diffusion
Tutorials

How to deploy a robust HuggingFace model for sentiment analysis into production?

This tutorial teaches you how to build, test and deploy a HuggingFace AI model for sentiment analysis while ensuring its robustness in production. A minimal pipeline sketch follows this entry.

Princy Pappachan
View post
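The starting point takes only a few lines with the transformers sentiment-analysis pipeline; the sketch below also adds a tiny robustness spot check (predictions should not flip when the text is uppercased). The pinned checkpoint is the task's usual default, named explicitly here as an assumption.

```python
from transformers import pipeline

# Pin the checkpoint explicitly for reproducible deployments.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

review = "The delivery was fast and the product works perfectly."
original = sentiment(review)[0]
shouty = sentiment(review.upper())[0]  # a robust model should not flip on casing changes

print(f"original : {original['label']} ({original['score']:.2f})")
print(f"uppercase: {shouty['label']} ({shouty['score']:.2f})")
```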
Metamorphic testing
Tutorials

How to test ML models? #4 🎚 Metamorphic testing

Metamorphic testing can be adapted to Machine Learning. This tutorial covers the theory, examples, and code needed to implement it; a minimal sketch of the idea follows this entry.

Jean-Marie John-Mathews, Ph.D.
View post
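The core idea is that when the correct output for a transformed input is unknown, you can still test a relation between predictions before and after the transformation. The sketch below checks a directional metamorphic relation (raising income should not lower a toy model's approval probability) on synthetic data; the perturbation size and 5% tolerance are chosen purely for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy "loan approval" model on synthetic data, purely for illustration.
rng = np.random.default_rng(1)
n = 2_000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "debt": rng.normal(10_000, 5_000, n),
})
y = (X["income"] - 1.5 * X["debt"] + rng.normal(0, 8_000, n) > 30_000).astype(int)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Metamorphic relation: raising income while holding everything else fixed
# should never lower the predicted probability of approval.
perturbed = X.copy()
perturbed["income"] *= 1.10

p_before = model.predict_proba(X)[:, 1]
p_after = model.predict_proba(perturbed)[:, 1]
violation_rate = float(np.mean(p_after < p_before - 1e-9))

print(f"violation rate: {violation_rate:.1%}")
print("PASS" if violation_rate < 0.05 else "FAIL: predictions break the metamorphic relation")
```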
Numerical data drift
Tutorials

How to test ML models? #3 📈 Numerical data drift

Testing the drift of numerical feature distributions is essential in AI. Here are the key metrics you can use to detect it; a short KS-test sketch follows this entry.

Jean-Marie John-Mathews, Ph.D.
View post
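A standard metric for numerical drift is the two-sample Kolmogorov-Smirnov test, which compares the empirical distributions of a reference and a current sample of a feature. The sketch below uses SciPy on simulated data, with the conventional 0.05 significance level assumed as the drift threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
reference = rng.normal(loc=100.0, scale=10.0, size=3_000)  # e.g. training-time feature
current = rng.normal(loc=104.0, scale=10.0, size=3_000)    # e.g. production feature

# Two-sample KS test: the statistic is the largest gap between the two
# empirical CDFs; a small p-value means the distributions likely differ.
statistic, p_value = stats.ks_2samp(reference, current)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.2e}")
print("Drift detected" if p_value < 0.05 else "No significant drift")
```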
Cars drifting
Tutorials

How to test ML models? #2 🧱 Categorical data drift

Testing the drift of categorical feature distributions is essential in AI/ML and requires specific metrics; a short chi-square sketch follows this entry.

Jean-Marie John-Mathews, Ph.D.
View post
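For categorical features, a standard choice is the chi-square test on category frequency tables of the reference and current samples. The sketch below uses SciPy; the category names, counts, and the 0.05 threshold are invented for the example.

```python
import numpy as np
from scipy import stats

# Category counts for a feature (e.g. payment method) at training time vs in production.
categories = ["card", "cash", "transfer", "wallet"]
reference_counts = np.array([500, 300, 150, 50])
current_counts = np.array([420, 260, 180, 140])  # "wallet" usage has grown

# Expected production counts if the training-time proportions still held.
expected = reference_counts / reference_counts.sum() * current_counts.sum()
statistic, p_value = stats.chisquare(f_obs=current_counts, f_exp=expected)

print(f"chi-square = {statistic:.1f}, p-value = {p_value:.2e}")
print("Drift detected" if p_value < 0.05 else "No significant drift")
```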
Zoom in on the problem
Tutorials

How to test ML models? #1 👉 Introduction

What you need to know before getting started with ML Testing, in 3 points.

Jean-Marie John-Mathews, Ph.D.
View post