FixOut

FaIrness through eXplanations and feature dropOut

Algorithmic decisions are increasingly present in many aspects of our lives, e.g., loan granting, terrorism detection, prediction of criminal recidivism, and similar social and economic applications. Many of these decisions are made without human supervision, through decision-making processes that are not transparent. This raises concerns about the potential bias of these processes against certain groups of society. Such unfair outcomes not only affect human rights, but also undermine public trust in Machine Learning (ML).

FixOut addresses fairness issues of ML models based on decision outcomes, and shows how the simple idea of “feature dropout” followed by an “ensemble approach” can improve model fairness.
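A minimal sketch of the feature-dropout-plus-ensemble idea, assuming scikit-learn-style estimators and numpy arrays; the function name, the choice of simple probability averaging, and the sensitive-feature indices below are illustrative assumptions, not FixOut's actual API:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def dropout_ensemble(base_model, X, y, sensitive_idx):
    """Train one copy of the model per dropped sensitive feature, plus one copy
    with all sensitive features dropped, and average their predicted
    probabilities (illustrative sketch, not FixOut's actual implementation)."""
    n_features = X.shape[1]
    # One reduced feature set per sensitive feature, plus one without any of them.
    feature_sets = [[j for j in range(n_features) if j != i] for i in sensitive_idx]
    feature_sets.append([j for j in range(n_features) if j not in sensitive_idx])

    members = []
    for cols in feature_sets:
        m = clone(base_model).fit(X[:, cols], y)
        members.append((cols, m))

    def predict_proba(X_new):
        # Simple averaging ensemble over the feature-dropout members.
        probs = [m.predict_proba(X_new[:, cols]) for cols, m in members]
        return np.mean(probs, axis=0)

    return predict_proba

# Usage (hypothetical data, columns 2 and 5 assumed sensitive):
# ensemble_predict = dropout_ensemble(LogisticRegression(max_iter=1000), X, y, sensitive_idx=[2, 5])
```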

Originally, FixOut was conceived to tackle the process fairness of ML models based on their decision outcomes. To that end, it uses an explanation method to assess a model's reliance on salient or sensitive features; this assessment is integrated into a human-centered workflow that, given a classifier M, outputs a classifier M' that improves on M in process fairness as well as in other fairness metrics, without compromising M's predictive performance.
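As a sketch of the assessment step, one might use LIME as the explainer (the paragraph above does not name a specific explanation method, so this choice and the aggregation over a random sample of instances are assumptions of this example, not a description of FixOut's exact procedure):

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def sensitive_features_in_top_k(model, X, feature_names, sensitive_names, k=10, n_samples=100):
    """Aggregate absolute per-instance LIME weights over a sample of rows and
    report which sensitive features land in the global top-k
    (illustrative sketch; FixOut's global ranking may differ)."""
    explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                     discretize_continuous=False)
    totals = np.zeros(len(feature_names))
    rng = np.random.default_rng(0)
    for i in rng.choice(len(X), size=min(n_samples, len(X)), replace=False):
        exp = explainer.explain_instance(X[i], model.predict_proba,
                                         num_features=len(feature_names))
        for feat_idx, weight in exp.as_map()[1]:  # contributions toward class 1
            totals[feat_idx] += abs(weight)
    top_k = np.argsort(totals)[::-1][:k]
    # Sensitive features the model apparently relies on most.
    return [feature_names[j] for j in top_k if feature_names[j] in sensitive_names]
```

If this list is non-empty, the model relies on sensitive features, which is the signal that triggers the feature-dropout and ensembling step sketched earlier.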


Contact: {miguel.couceiro, guilherme.alves-da-silva} at loria.fr


FixOut is now part of a startup project. Check out the website here.