Fairness Implications of GNN-to-MLP Knowledge Distillation
Margaret Capetz · Yizhou Sun · Arjun Subramonian
Abstract
Graph neural networks (GNNs) are increasingly deployed in high-stakes applications where fairness is critical. However, the data available in these real-world scenarios is often unreliable, characterized by bias and imbalance. While knowledge distillation (KD) has proven effective for distilling GNNs into fully connected neural networks (MLPs) for scalability, the fairness consequences of such distillation under biased data remain unexplored. Through a systematic evaluation of fairness across synthetic and real-world datasets, we observe that distilling GNNs into MLPs generally degrades fairness. Our results highlight the need for network-specific considerations when developing strategies to mitigate fairness degradation during knowledge distillation.
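To make the setup concrete, below is a minimal sketch (not the authors' code) of GNN-to-MLP knowledge distillation followed by a simple fairness check. The random toy data, the soft-label KL distillation loss, the loss weight `lam`, and the use of statistical parity difference as the fairness metric are illustrative assumptions; the paper's actual datasets, teacher GNN, and evaluation protocol may differ.

```python
# Hypothetical sketch: distill a (stand-in) GNN teacher into an MLP student,
# then measure statistical parity difference across a binary sensitive group.
import torch
import torch.nn.functional as F
from torch import nn

torch.manual_seed(0)

# Toy node-classification data: features, labels, binary sensitive attribute.
num_nodes, num_feats, num_classes = 200, 16, 2
x = torch.randn(num_nodes, num_feats)
y = torch.randint(0, num_classes, (num_nodes,))
sens = torch.randint(0, 2, (num_nodes,))  # sensitive group membership

# Stand-in for a trained GNN teacher: fixed logits (a real teacher would
# produce these from node features plus graph structure).
teacher_logits = torch.randn(num_nodes, num_classes)

# Student MLP trained on node features only (no graph needed at inference).
student = nn.Sequential(
    nn.Linear(num_feats, 64), nn.ReLU(), nn.Linear(64, num_classes)
)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-2)

lam = 0.5  # assumed weight between supervised and distillation terms
for _ in range(100):
    optimizer.zero_grad()
    logits = student(x)
    ce = F.cross_entropy(logits, y)  # supervised term on ground-truth labels
    kd = F.kl_div(
        F.log_softmax(logits, dim=1),
        F.softmax(teacher_logits, dim=1),
        reduction="batchmean",
    )  # match the teacher's soft labels
    loss = lam * ce + (1 - lam) * kd
    loss.backward()
    optimizer.step()

# Statistical parity difference: gap in positive-prediction rates by group.
with torch.no_grad():
    pred = student(x).argmax(dim=1)
sp_diff = (pred[sens == 0].float().mean() - pred[sens == 1].float().mean()).abs()
print(f"statistical parity difference: {sp_diff.item():.3f}")
```

Comparing this metric for the teacher's predictions against the distilled student's predictions is one way to quantify the kind of fairness degradation the abstract describes.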