Rabah Abdul Khalek

[Cover image: illustration of gender bias, generated by DALL-E 2]

How to test the fairness of ML models? The 80% rule for measuring disparate impact

This article provides a step-by-step guide to detecting ethical bias in AI models, using a customer churn model built with the LightGBM library as an example. We show how to calculate the disparate impact metric with respect to gender and age, and demonstrate how to implement this metric as a fairness test within Giskard's open-source ML testing framework.
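Before diving into the full walkthrough, here is a minimal sketch of the disparate impact calculation the article is built around. Disparate impact is the ratio of positive-outcome rates between the unprivileged and privileged groups; under the 80% rule, a ratio below 0.8 flags potential bias. The column names and toy data below are hypothetical, not taken from the article's churn dataset:

```python
import pandas as pd

def disparate_impact(df, protected_col, privileged_value, outcome_col, positive_value=1):
    """Ratio of positive-outcome rates: unprivileged group / privileged group.

    Under the 80% rule, a ratio below 0.8 signals potential disparate impact.
    """
    privileged = df[df[protected_col] == privileged_value]
    unprivileged = df[df[protected_col] != privileged_value]
    rate_priv = (privileged[outcome_col] == positive_value).mean()
    rate_unpriv = (unprivileged[outcome_col] == positive_value).mean()
    return rate_unpriv / rate_priv

# Toy churn-style predictions (hypothetical data, for illustration only)
df = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "predicted_churn": [1, 1, 1, 0, 1, 0, 0, 0],
})

di = disparate_impact(df, "gender", "M", "predicted_churn")
print(f"Disparate impact: {di:.2f}")  # 1/4 vs 3/4 positive rate -> 0.33, below 0.8
```

In the article, this ratio is wrapped as a Giskard fairness test so it can be run automatically against protected attributes like gender and age.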
