Paper accepted at DSAA’21

“Reducing Unintended Bias of ML Models on Tabular and Textual Data” has been accepted at DSAA 2021. Pre-print and full versions will be available soon.

Course at IDAI 2021 Summer School

M. Couceiro and C. Palamidessi will give the course Addressing algorithmic fairness through metrics and explanations at the First Inria-DFKI European Summer School on AI (IDAI 2021).

Material:
- Introduction
- Part 1 – Notions of fairness (C. Palamidessi)
- Part 2 – Addressing unfairness through unawareness (M. Couceiro)

New demo available

A demonstration of FixOut on selected datasets is available (thanks to F. Bernier and P. Ringot). Please visit this link.

Invited talk at MPML

M. Couceiro will give a talk at the IST seminar series on Mathematics, Physics & Machine Learning (MPML).

Fairness Metrics

Several metrics have been proposed in the literature to assess the fairness of ML models. Here we recall some of the most widely used ones. Disparate Impact (DI) is rooted in the desire for different sensitive demographic groups to experience similar rates of positive decision outcomes: DI = P(Ŷ = 1 | S = 0) / P(Ŷ = 1 | S = 1), where Ŷ represents the class predicted by the ML model and S the sensitive attribute.
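As a minimal illustrative sketch (not part of FixOut itself), Disparate Impact for binary predictions and a binary sensitive attribute can be computed directly from the two groups' positive-outcome rates; the function name and toy data below are our own:

```python
import numpy as np

def disparate_impact(y_pred, sensitive):
    """Ratio of positive-prediction rates between the unprivileged
    (sensitive == 0) and privileged (sensitive == 1) groups.
    A value close to 1 indicates parity between the groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    p_unpriv = y_pred[sensitive == 0].mean()  # P(Y_hat = 1 | S = 0)
    p_priv = y_pred[sensitive == 1].mean()    # P(Y_hat = 1 | S = 1)
    return p_unpriv / p_priv

# Toy example: 8 individuals, 4 per group.
# Unprivileged group gets positive outcomes at rate 0.5,
# privileged group at rate 1.0, so DI = 0.5.
y_pred = [1, 0, 1, 0, 1, 1, 1, 1]
s      = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact(y_pred, s))  # 0.5
```

A common rule of thumb (the "80% rule") flags DI below 0.8 as evidence of adverse impact against the unprivileged group.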