GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint
Paiheng Xu · Yuhang Zhou · Bang An · Wei Ai · Furong Huang
Event URL: https://openreview.net/forum?id=DCQmL-gXGOG
Graph Neural Networks (GNNs) have proven their versatility in diverse scenarios. With increasing attention to societal fairness, many studies focus on algorithmic fairness in GNNs. Most of them aim to improve fairness at the group level, while only a few works focus on individual fairness, which attempts to give similar predictions to similar individuals for a specific task. We expect such an individual fairness promotion framework to be compatible with both discrete and continuous task-specific similarity measures for individual fairness and to balance utility (e.g., classification accuracy) against fairness. Fairness promotion frameworks should also be computationally efficient and compatible with various GNN model designs. Since previous work fails to achieve all of these goals, we propose a novel method, $\textbf{GFairHint}$, for promoting individual fairness in GNNs, which learns a "fairness hint" through an auxiliary link prediction task. We empirically evaluate our method on five real-world graph datasets that cover both discrete and continuous settings for individual fairness similarity measures, with three popular backbone GNN models. The proposed method achieves the best fairness results in almost all combinations of datasets and backbone models, while producing comparable utility results, at much lower computational cost than the previous state-of-the-art (SoTA) model.
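The pipeline the abstract describes lends itself to a short illustration. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: it assumes a "fairness graph" whose edges connect pairs of individuals that the task-specific similarity measure deems similar, trains an auxiliary encoder with a link prediction loss on that graph, and concatenates the resulting "fairness hint" embedding with the backbone GNN's embedding before classification. All names here (normalize_adj, GFairHintClassifier, etc.) are hypothetical.

```python
# Hypothetical sketch of the GFairHint idea from the abstract; NOT the
# authors' code. Assumes a "fairness graph" whose edges connect pairs of
# nodes that the task-specific similarity measure deems similar.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """GCN-style symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


class GCN(nn.Module):
    """Two-layer GCN, used both as the backbone and as the hint encoder."""

    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)


def link_prediction_loss(z, pos_pairs, neg_pairs):
    """Auxiliary objective: fairness-graph edges (similar pairs) should score
    high under a dot-product decoder, sampled non-edges low."""
    pos = (z[pos_pairs[0]] * z[pos_pairs[1]]).sum(dim=-1)
    neg = (z[neg_pairs[0]] * z[neg_pairs[1]]).sum(dim=-1)
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(logits, labels)


class GFairHintClassifier(nn.Module):
    """Concatenates the backbone embedding with the fairness hint embedding
    learned on the fairness graph, then classifies."""

    def __init__(self, backbone, hint_encoder, emb_dim, hint_dim, n_classes):
        super().__init__()
        self.backbone = backbone          # GNN over the original graph
        self.hint_encoder = hint_encoder  # GNN over the fairness graph
        self.head = nn.Linear(emb_dim + hint_dim, n_classes)

    def forward(self, x, adj_norm, fair_adj_norm):
        h = self.backbone(x, adj_norm)              # task representation
        hint = self.hint_encoder(x, fair_adj_norm)  # "fairness hint"
        return self.head(torch.cat([h, hint], dim=-1)), hint
```

In this reading, classification and the auxiliary task would be optimized jointly, e.g. `F.cross_entropy(logits[train_mask], y[train_mask]) + lam * link_prediction_loss(hint, pos_pairs, neg_pairs)`, where `lam` is a hypothetical trade-off weight; the paper's actual architecture and training details may differ.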
Author Information
Paiheng Xu (University of Maryland, College Park)
Yuhang Zhou (University of Maryland, College Park)
Bang An (University of Maryland, College Park)
Wei Ai (University of Maryland, College Park)
Furong Huang (University of Maryland)
More from the Same Authors
- 2021 : Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL
  Yanchao Sun · Ruijie Zheng · Yongyuan Liang · Furong Huang
- 2021 : Efficiently Improving the Robustness of RL Agents against Strongest Adversaries
  Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang
- 2022 : SMART: Self-supervised Multi-task pretrAining with contRol Transformers
  Yanchao Sun · Shuang Ma · Ratnesh Madaan · Rogerio Bonatti · Furong Huang · Ashish Kapoor
- 2022 : Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning
  Souradip Chakraborty · Amrit Bedi · Alec Koppel · Furong Huang · Pratap Tokekar · Dinesh Manocha
- 2022 : Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning
  Xiangyu Liu · Souradip Chakraborty · Furong Huang
- 2022 : Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity
  Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang
- 2022 : Faster Hyperparameter Search on Graphs via Calibrated Dataset Condensation
  Mucong Ding · Xiaoyu Liu · Tahseen Rabbani · Furong Huang
- 2022 : DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations
  Eitan Borgnia · Jonas Geiping · Valeriia Cherepanova · Liam Fowl · Arjun Gupta · Amin Ghiasi · Furong Huang · Micah Goldblum · Tom Goldstein
- 2022 : Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function
  Ruijie Zheng · Xiyao Wang · Huazhe Xu · Furong Huang
- 2022 : Contributed Talk: Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning
  Xiangyu Liu · Souradip Chakraborty · Furong Huang
- 2022 Spotlight: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2022 : SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication
  Marco Bornstein · Tahseen Rabbani · Evan Wang · Amrit Bedi · Furong Huang
- 2022 Poster: Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
  Roman Levin · Manli Shu · Eitan Borgnia · Furong Huang · Micah Goldblum · Tom Goldstein
- 2022 Poster: Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity
  Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang
- 2022 Poster: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning
  Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang
- 2022 Poster: End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking
  Arpit Bansal · Avi Schwarzschild · Eitan Borgnia · Zeyad Emam · Furong Huang · Micah Goldblum · Tom Goldstein
- 2022 Poster: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2022 Poster: Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
  Bang An · Zora Che · Mucong Ding · Furong Huang
- 2021 Poster: Understanding the Generalization Benefit of Model Invariance from a Data Perspective
  Sicheng Zhu · Bang An · Furong Huang