The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work on improving algorithmic fairness, most of it assumes identical training and test distributions. In many real-world applications, however, this assumption is violated: fair models trained in one environment are often deployed in another, where their fairness has been observed to collapse. In this paper, we study how to transfer model fairness under distribution shifts, a widespread issue in practice. We conduct a fine-grained analysis of how a fair model is affected by different types of distribution shifts and find that domain shifts are more challenging than subpopulation shifts. Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness. Guided by it, we propose a practical algorithm with fair consistency regularization as the key component. We construct a synthetic benchmark covering diverse types of distribution shifts to verify the theoretical findings experimentally. Experiments on synthetic and real datasets, including image and tabular data, demonstrate that our approach effectively transfers fairness and accuracy under various types of distribution shifts.
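To make the key component concrete, below is a minimal, hypothetical sketch of a fair consistency regularizer. It is not the authors' exact implementation; the function names and the squared-difference penalty are illustrative assumptions. The idea it demonstrates: on unlabeled target-domain data, penalize disagreement between a model's predictions on two views of each example, but average the penalty per sensitive group before combining, so that a large majority group cannot dominate the consistency signal.

```python
def consistency(p, q):
    """Squared disagreement between two predicted probabilities (illustrative choice)."""
    return (p - q) ** 2

def fair_consistency_loss(preds_view1, preds_view2, groups):
    """Average the consistency penalty within each sensitive group,
    then average across groups so every group is weighted equally."""
    per_group = {}
    for p, q, g in zip(preds_view1, preds_view2, groups):
        per_group.setdefault(g, []).append(consistency(p, q))
    group_means = [sum(v) / len(v) for v in per_group.values()]
    return sum(group_means) / len(group_means)

# Example: group "b" disagrees more across the two views; group-balanced
# averaging weights each group equally regardless of its size.
p1 = [0.9, 0.8, 0.6, 0.4]
p2 = [0.9, 0.8, 0.1, 0.9]
g  = ["a", "a", "b", "b"]
print(fair_consistency_loss(p1, p2, g))  # → 0.125
```

In a full self-training pipeline this term would be added to the supervised loss on labeled source data, with the two views produced by, e.g., weak and strong data augmentation.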
Author Information
Bang An (University of Maryland, College Park)
Zora Che
Mucong Ding (Department of Computer Science, University of Maryland, College Park)
Furong Huang (University of Maryland)
More from the Same Authors
-
2021 : Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL »
Yanchao Sun · Ruijie Zheng · Yongyuan Liang · Furong Huang -
2021 : Efficiently Improving the Robustness of RL Agents against Strongest Adversaries »
Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang -
2021 : A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs »
Mucong Ding · Kezhi Kong · Jiuhai Chen · John Kirchenbauer · Micah Goldblum · David P Wipf · Furong Huang · Tom Goldstein -
2022 : SMART: Self-supervised Multi-task pretrAining with contRol Transformers »
Yanchao Sun · shuang ma · Ratnesh Madaan · Rogerio Bonatti · Furong Huang · Ashish Kapoor -
2022 : Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning »
Souradip Chakraborty · Amrit Bedi · Alec Koppel · Furong Huang · Pratap Tokekar · Dinesh Manocha -
2022 : GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint »
Paiheng Xu · Yuhang Zhou · Bang An · Wei Ai · Furong Huang -
2022 : Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning »
Xiangyu Liu · Souradip Chakraborty · Furong Huang -
2022 : Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity »
Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang -
2022 : Faster Hyperparameter Search on Graphs via Calibrated Dataset Condensation »
Mucong Ding · Xiaoyu Liu · Tahseen Rabbani · Furong Huang -
2022 : DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations »
Eitan Borgnia · Jonas Geiping · Valeriia Cherepanova · Liam Fowl · Arjun Gupta · Amin Ghiasi · Furong Huang · Micah Goldblum · Tom Goldstein -
2022 : Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function »
Ruijie Zheng · Xiyao Wang · Huazhe Xu · Furong Huang -
2022 : Contributed Talk: Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning »
Xiangyu Liu · Souradip Chakraborty · Furong Huang -
2022 Spotlight: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao -
2022 : SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication »
Marco Bornstein · Tahseen Rabbani · Evan Wang · Amrit Bedi · Furong Huang -
2022 Poster: Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability »
Roman Levin · Manli Shu · Eitan Borgnia · Furong Huang · Micah Goldblum · Tom Goldstein -
2022 Poster: Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity »
Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang -
2022 Poster: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning »
Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang -
2022 Poster: End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking »
Arpit Bansal · Avi Schwarzschild · Eitan Borgnia · Zeyad Emam · Furong Huang · Micah Goldblum · Tom Goldstein -
2022 Poster: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao -
2021 Poster: VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization »
Mucong Ding · Kezhi Kong · Jingling Li · Chen Zhu · John Dickerson · Furong Huang · Tom Goldstein -
2021 Poster: Understanding the Generalization Benefit of Model Invariance from a Data Perspective »
Sicheng Zhu · Bang An · Furong Huang