Noise-tolerant fair classification
Fairness-aware learning involves designing algorithms that do not discriminate with respect to some sensitive feature (e.g., race or gender). Existing work on the problem assumes that the sensitive feature available in one's training sample is perfectly reliable. This assumption may be violated in many real-world cases: for example, respondents to a survey may choose to conceal or obfuscate their group identity out of fear of potential discrimination. This raises the question of whether one can still learn fair classifiers given noisy sensitive features. In this paper, we answer the question in the affirmative: we show that if one measures fairness using the mean-difference score, and sensitive features are subject to noise from the mutually contaminated learning model, then, owing to a simple identity, we only need to change the desired fairness tolerance. The requisite tolerance can be estimated by leveraging existing noise-rate estimators from the label-noise literature. Finally, we show that our procedure is empirically effective on two case studies involving sensitive feature censoring.
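To make the "simple identity" above concrete, here is a sketch in assumed notation (the symbols MD, A, alpha, beta, and tau are our own labels, not taken from this page). Write the mean-difference score of a classifier f with respect to a binary sensitive feature A as

\[
\mathrm{MD}_{A}(f) \;=\; \mathbb{E}\left[f(X) \mid A = 1\right] \;-\; \mathbb{E}\left[f(X) \mid A = 0\right].
\]

If the observed feature \(\widehat{A}\) is corrupted under the mutually contaminated model with noise rates \(\alpha, \beta\) satisfying \(\alpha + \beta < 1\), the score computed on the noisy groups is a deterministic rescaling of the clean one:

\[
\mathrm{MD}_{\widehat{A}}(f) \;=\; (1 - \alpha - \beta)\,\mathrm{MD}_{A}(f),
\qquad\text{so}\qquad
\left|\mathrm{MD}_{A}(f)\right| \le \tau
\;\iff\;
\left|\mathrm{MD}_{\widehat{A}}(f)\right| \le (1 - \alpha - \beta)\,\tau .
\]

In other words, one can run an off-the-shelf fair learner directly on the noisy data after shrinking the tolerance by the factor \(1 - \alpha - \beta\), with \(\alpha\) and \(\beta\) plugged in from a noise-rate estimator of the kind developed in the label-noise literature.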
Author Information
Alex Lamy (Columbia University)
Ziyuan Zhong (Columbia University)
Aditya Menon (Google)
Nakul Verma (Columbia University)