Recent research suggests that predictions made by machine-learning models can amplify biases present in the training data. Mitigating such bias amplification requires a deep understanding of the mechanics in modern machine learning that give rise to that amplification. We perform the first systematic, controlled study into when and how bias amplification occurs. To enable this study, we design a simple image-classification problem in which we can tightly control (synthetic) biases. Our study of this problem reveals that the strength of bias amplification is correlated with measures such as model accuracy, model capacity, and amount of training data. We also find that bias amplification can vary greatly during training. Finally, we find that bias amplification may depend on the difficulty of the classification task relative to the difficulty of recognizing group membership: bias amplification appears to occur primarily when it is easier to recognize group membership than class membership. Our results suggest best practices for training machine-learning models that we hope will help pave the way for the development of better mitigation strategies.
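The notion of bias amplification in the abstract can be illustrated with a toy calculation: compare how often a class co-occurs with a group in the training labels against how often it co-occurs with that group in the model's predictions. The sketch below is illustrative only; the function names (`bias_score`, `bias_amplification`) and the specific metric are our own simplification, not necessarily the measure used in the paper.

```python
from collections import Counter

def bias_score(classes, groups, cls, grp):
    """Estimate P(group = grp | class = cls) from paired labels:
    the fraction of examples labeled `cls` that belong to `grp`."""
    pairs = Counter(zip(classes, groups))
    total = sum(n for (c, _), n in pairs.items() if c == cls)
    return pairs[(cls, grp)] / total if total else 0.0

def bias_amplification(train_classes, train_groups,
                       pred_classes, pred_groups, cls, grp):
    """Positive value: the predictions skew the class toward the group
    more strongly than the training data already did (amplification)."""
    return (bias_score(pred_classes, pred_groups, cls, grp)
            - bias_score(train_classes, train_groups, cls, grp))

# Toy example (hypothetical data): in training, class "cook" co-occurs
# with group "A" in 2/3 of its examples; in the model's predictions it
# co-occurs with "A" in 3/4, so the bias is amplified by 1/12.
amp = bias_amplification(["cook"] * 3, ["A", "A", "B"],
                         ["cook"] * 4, ["A", "A", "A", "B"],
                         "cook", "A")
print(round(amp, 4))  # → 0.0833
```

A controlled study such as the one described would sweep factors like model capacity or dataset size on synthetic data and track how a measure of this kind changes.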
Author Information
Melissa Hall (Facebook AI Research (FAIR))
Developing methods for measuring and understanding algorithmic fairness in computer vision systems. My research has focused on bias amplification in image classifiers, zero-shot models' disparities in performance between gender groups, and robust bias measurements in language models.
Laurens van der Maaten (Facebook AI Research)
Laura Gustafson (Facebook AI Research)
Maxwell Jones (Carnegie Mellon University)
Undergraduate at CMU with interests in computer vision, deep learning, and machine learning.
Aaron Adcock (Facebook)
More from the Same Authors
- 2020 Session: Orals & Spotlights Track 01: Representation/Relational
  Laurens van der Maaten · Fei Sha
- 2019 Poster: PHYRE: A New Benchmark for Physical Reasoning
  Anton Bakhtin · Laurens van der Maaten · Justin Johnson · Laura Gustafson · Ross Girshick
- 2010 Workshop: Challenges of Data Visualization
  Barbara Hammer · Laurens van der Maaten · Fei Sha · Alexander Smola
- 2010 Poster: On Herding and the Perceptron Cycling Theorem
  Andrew E Gelfand · Yutian Chen · Laurens van der Maaten · Max Welling
- 2010 Poster: Latent Variable Models for Predicting File Dependencies in Large-Scale Software Development
  Diane Hu · Laurens van der Maaten · Youngmin Cho · Lawrence Saul · Sorin Lerner
- 2008 Demonstration: Visualizing NIPS Cooperations using Multiple Maps t-SNE
  Laurens van der Maaten · Geoffrey E Hinton