FaIrness through eXplanations and feature dropOut

Algorithmic decisions are increasingly present in many aspects of our lives, e.g., loan granting, terrorism detection, prediction of criminal recidivism, and similar social and economic applications. Many of these algorithmic decisions are made without human supervision and through decision-making processes that are not transparent. This raises concerns about the potential bias of these processes against certain groups in society. Such unfair outcomes not only affect human rights, but also undermine public trust in Machine Learning (ML).

FixOut addresses fairness issues of ML models based on decision outcomes, and shows how the simple idea of “feature dropout” followed by an “ensemble approach” can improve model fairness.

Originally, FixOut was conceived to tackle the process fairness of ML models based on decision outcomes. To that end, it uses an explanation method to assess a model's reliance on salient or sensitive features, integrated into a human-centered workflow: given a classifier M, a dataset D, a set F of sensitive features, and an explanation method of choice, FixOut outputs a competitive classifier M' that improves process fairness as well as other fairness metrics.
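The workflow above can be illustrated with a minimal sketch of the "feature dropout" plus "ensemble" idea. This is not the actual FixOut implementation: the function name, the choice of scikit-learn classifiers, and the probability-averaging aggregation rule are all assumptions made for illustration.

```python
# Hypothetical sketch of feature dropout + ensembling (not FixOut's real code).
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def dropout_ensemble(model, X, y, sensitive_idx):
    """Train one copy of `model` per sensitive feature (that feature dropped),
    plus one copy with all sensitive features dropped; average predictions."""
    members = []
    for idx in sensitive_idx:
        keep = [j for j in range(X.shape[1]) if j != idx]
        members.append((keep, clone(model).fit(X[:, keep], y)))
    keep_all = [j for j in range(X.shape[1]) if j not in sensitive_idx]
    members.append((keep_all, clone(model).fit(X[:, keep_all], y)))

    def predict_proba(X_new):
        # Simple ensemble: average the members' predicted probabilities.
        return np.mean(
            [m.predict_proba(X_new[:, keep]) for keep, m in members], axis=0
        )

    return predict_proba

# Toy usage with synthetic data: features 2 and 3 play the role of F.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
predict = dropout_ensemble(LogisticRegression(), X, y, sensitive_idx=[2, 3])
probs = predict(X[:5])
```

In FixOut itself, the explanation method (e.g., LIME or SHAP) first decides whether the model relies too heavily on the features in F; the dropout-and-ensemble step is only applied when it does.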

Contact: {miguel.couceiro, guilherme.alves-da-silva} at