Despite the growing attention of machine learning researchers and practitioners to fairness, there is no common framework for analyzing and comparing the capabilities of proposed models in deep representation learning. In this paper, we evaluate different fairness methods trained with deep neural networks on a common synthetic dataset and a real-world dataset to gain better insight into how these methods work. In particular, we train about 3000 different models in various setups, including imbalanced and correlated data configurations, to probe the limits of current models and to better understand in which settings they fail. Our results show that model bias increases as datasets become more imbalanced or dataset attributes become more correlated, that the degree of dominance of correlated sensitive features affects bias, and that sensitive information remains in the latent representation even when bias-mitigation algorithms are applied. Overall, we present a dataset, propose several challenging evaluation setups, rigorously evaluate recent promising bias-mitigation algorithms in a common framework, and publicly release this benchmark, hoping the research community will adopt it as a common entry point for fair deep learning.
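To make the two experimental knobs described above concrete, here is a minimal sketch, not the paper's released benchmark code: it generates a toy dataset whose class imbalance and correlation between a binary sensitive attribute and the target are controllable, and reports one common fairness metric (the demographic parity gap) for a plain classifier. The names make_synthetic, demographic_parity_gap, pos_rate, and corr are illustrative assumptions, not identifiers from the paper.

```python
# Hedged sketch only: illustrates imbalance/correlation dials and a demographic
# parity gap, not the authors' benchmark or their bias-mitigation methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_synthetic(n=10_000, pos_rate=0.5, corr=0.0):
    """Binary target y with positive rate `pos_rate`, and a binary sensitive
    attribute s that agrees with y with probability (1 + corr) / 2."""
    y = (rng.random(n) < pos_rate).astype(int)
    flip = rng.random(n) < (1.0 - corr) / 2.0      # corr=0 -> s ~independent of y; corr=1 -> s == y
    s = np.where(flip, 1 - y, y)
    # Features carry signal about both y and s, so a naive model can pick up s.
    x = np.column_stack([
        y + 0.8 * rng.standard_normal(n),
        s + 0.8 * rng.standard_normal(n),
    ])
    return x, y, s

def demographic_parity_gap(y_hat, s):
    """|P(y_hat = 1 | s = 1) - P(y_hat = 1 | s = 0)|."""
    return abs(y_hat[s == 1].mean() - y_hat[s == 0].mean())

for corr in (0.0, 0.5, 0.9):                       # more correlation -> larger expected gap
    x, y, s = make_synthetic(corr=corr)
    y_hat = LogisticRegression().fit(x, y).predict(x)
    print(f"corr={corr:.1f}  DP gap={demographic_parity_gap(y_hat, s):.3f}")
```

In this toy setup the gap grows as corr increases, mirroring the abstract's observation that bias increases with stronger correlation between sensitive attributes and the target.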
Author Information
Charan Reddy (Mila)
Master's student at Mila, Montreal. Bachelor's in CS from IIT Kharagpur. Working on projects in Continual Learning, Meta-learning, Optimization, Fairness, and Differential Privacy. Keep in touch to follow the research we are doing.
Deepak Sharma (McGill University)
Soroush Mehri (Microsoft Research Montreal)
Adriana Romero Soriano (Facebook AI Research)
Samira Shabanian (Microsoft Research)
Sina Honari (EPFL)
More from the Same Authors
- 2021 Spotlight: Instance-Conditioned GAN
  Arantxa Casanova · Marlene Careil · Jakob Verbeek · Michal Drozdzal · Adriana Romero Soriano
- 2021: SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation
  Robin Chan · Krzysztof Lis · Svenja Uhlemeyer · Hermann Blum · Sina Honari · Roland Siegwart · Pascal Fua · Mathieu Salzmann · Matthias Rottmann
- 2021 Poster: Instance-Conditioned GAN
  Arantxa Casanova · Marlene Careil · Jakob Verbeek · Michal Drozdzal · Adriana Romero Soriano
- 2021 Poster: Active 3D Shape Reconstruction from Vision and Touch
  Edward Smith · David Meger · Luis Pineda · Roberto Calandra · Jitendra Malik · Adriana Romero Soriano · Michal Drozdzal
- 2021 Poster: Parameter Prediction for Unseen Deep Architectures
  Boris Knyazev · Michal Drozdzal · Graham Taylor · Adriana Romero Soriano
- 2019 Workshop: Science meets Engineering of Deep Learning
  Levent Sagun · Caglar Gulcehre · Adriana Romero Soriano · Negar Rostamzadeh · Nando de Freitas
- 2019: Welcoming remarks and introduction
  Levent Sagun · Caglar Gulcehre · Adriana Romero Soriano · Negar Rostamzadeh · Nando de Freitas
- 2019 Poster: On Adversarial Mixup Resynthesis
  Christopher Beckham · Sina Honari · Alex Lamb · Vikas Verma · Farnoosh Ghadiri · R Devon Hjelm · Yoshua Bengio · Chris Pal
- 2018 Poster: Unsupervised Depth Estimation, 3D Face Rotation and Replacement
  Joel Ruben Antony Moniz · Christopher Beckham · Simon Rajotte · Sina Honari · Chris Pal