Paper accepted at DSAA’21
“Reducing Unintended Bias of ML Models on Tabular and Textual Data” has been accepted at DSAA 2021. Pre-print and full versions will be available soon.
M. Couceiro and C. Palamidessi will give the course “Addressing algorithmic fairness through metrics and explanations” at the First Inria-DFKI European Summer School on AI (IDAI 2021).
Material:
- Introduction
- Part 1 – Notions of fairness (C. Palamidessi)
- Part 2 – Addressing unfairness through unawareness (M. Couceiro)
- Fair and explainable models I (M. Couceiro and L. Galarraga)
- Fair and explainable models II (M. Couceiro and L. Galarraga)
A demonstration of FixOut on selected datasets is available (thanks to F. Bernier and P. Ringot). Please visit this link.
G. Alves gave a talk at PDIA’21 (Perspectives et Défis de l’IA). https://afia.asso.fr/pdia21/
M. Couceiro will give a talk at the IST seminar series on Mathematics, Physics & Machine Learning (MPML). https://mpml.tecnico.ulisboa.pt/seminars?id=5976
The first FixOut tutorial is now available: Tutorial 1 shows how to use FixOut on tabular data with LIME explanations. See the start guide.
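The FixOut API itself is covered in the tutorial; as a rough illustration of the underlying step, the sketch below applies the lime package directly to a tabular scikit-learn dataset (the dataset and classifier are arbitrary choices for illustration, not taken from the tutorial):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an arbitrary classifier on a tabular dataset.
data = load_breast_cancer()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build a LIME explainer over the training data.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain one instance: LIME fits a local surrogate model around it
# and returns per-feature contribution weights.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=5)
print(exp.as_list())  # [(feature description, weight), ...]
```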
Several metrics have been proposed in the literature to assess an ML model’s fairness. Here we recall some of the most used ones. Disparate Impact (DI) is rooted in the desire for different sensitive demographic groups to experience similar rates of positive decision outcomes:

DI = Pr(Ŷ = 1 | S = 0) / Pr(Ŷ = 1 | S = 1)

where, given the ML model, Ŷ represents the predicted class and S is the sensitive attribute (S = 0 denoting the unprivileged group).
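To make DI concrete, here is a minimal Python sketch (the function name and toy data are illustrative, not part of FixOut) that computes it from binary predictions and a binary sensitive attribute:

```python
import numpy as np

def disparate_impact(y_pred, s):
    """DI = positive-outcome rate of the unprivileged group (s == 0)
    divided by that of the privileged group (s == 1)."""
    rate_unpriv = np.mean(y_pred[s == 0] == 1)
    rate_priv = np.mean(y_pred[s == 1] == 1)
    return rate_unpriv / rate_priv

# Toy example: the unprivileged group receives positive outcomes
# at 0.25 vs 0.75 for the privileged group.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_impact(y_pred, s))  # 0.25 / 0.75 ≈ 0.33
```

A common rule of thumb (the “four-fifths rule”) treats DI below 0.8 as a sign of adverse impact against the unprivileged group.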