

Poster in Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership

Robust and Personalized Federated Learning with Spurious Features: An Adversarial Approach

Xiaoyang Wang · Han Zhao · Klara Nahrstedt · Sanmi Koyejo


Abstract:

The most common approach to personalized federated learning is fine-tuning the global machine learning model for each client. While this addresses some issues of statistical diversity, we find that such personalization methods are vulnerable to spurious features, leading to bias and sacrificing generalization. Nevertheless, debiasing personalized models is difficult. To this end, we propose a strategy to mitigate the effect of spurious features, based on the observation that the global model produced by the federated learning step has a low degree of bias due to statistical diversity. In the personalization step, we then estimate and mitigate the difference in bias degree between the personalized and global models using adversarial transferability. We theoretically establish the connection between adversarial transferability and the bias degree difference between the global and personalized models. Empirical results on the MNIST, CelebA, and Coil20 datasets show that our method improves the accuracy of the personalized model on bias-conflicting data samples by up to 14.3% compared to existing personalization approaches, while preserving the improved average accuracy that fine-tuning provides.
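The key quantity in the abstract, adversarial transferability between the personalized and global models, can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumptions made here for concreteness, not the paper's implementation: the attack (one-step FGSM), the function names (`fgsm_attack`, `transfer_rate`), and the perturbation budget `eps` are all chosen for this example. The idea is to craft adversarial examples against the personalized model and measure how often they also fool the global model; a shift in this transfer rate can serve as a proxy for the bias degree difference between the two models.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """One-step FGSM adversarial example crafted against `model`.
    Assumes `model` is in eval mode and returns class logits."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb the input in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).detach()

@torch.no_grad()
def _fooled(model, x_adv, y):
    """Boolean mask of samples whose prediction was flipped."""
    return model(x_adv).argmax(dim=1) != y

def transfer_rate(personalized, global_model, loader, eps=0.1):
    """Fraction of adversarial examples crafted on the personalized
    model that also fool the global model -- one simple proxy for the
    adversarial transferability the abstract refers to."""
    fooled_src = fooled_both = 0
    for x, y in loader:
        x_adv = fgsm_attack(personalized, x, y, eps)
        src = _fooled(personalized, x_adv, y)
        tgt = _fooled(global_model, x_adv, y)
        fooled_src += src.sum().item()
        fooled_both += (src & tgt).sum().item()
    return fooled_both / max(fooled_src, 1)
```

Under this reading, a personalized model that has latched onto client-specific spurious features would exhibit transferability to the global model that differs measurably from an unbiased fine-tune, which is what makes the quantity usable as a debiasing signal during personalization.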
