

Poster in Workshop: Algorithmic Fairness through the Lens of Causality and Privacy

Privacy-Preserving Group Fairness in Cross-Device Federated Learning

Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi


Abstract:

Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a particular group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values. Empirical evaluations on real-world datasets demonstrate the effectiveness of our solution for training fair and accurate ML models in federated cross-device setups with privacy guarantees for the users.
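The abstract does not spell out the protocol, so the following is only a rough, hypothetical Python sketch of the general idea it describes: clients additively secret-share group-wise prediction counts (a stand-in for a full MPC protocol), non-colluding servers aggregate the shares, Laplace noise is added for differential privacy, and only the noisy global statistics are used to estimate a demographic-parity gap. All function names, the two-server setup, and the sensitivity/epsilon choices are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

# Hypothetical sketch (not the paper's protocol): clients secret-share
# group-wise counts, servers aggregate the shares, DP noise is added to
# the aggregate, and no individual sensitive attribute is ever revealed.

rng = np.random.default_rng(0)
RING = 2 ** 32  # modulus of the ring used for additive secret sharing


def share(value, n_servers=2):
    """Split a non-negative integer into n_servers additive shares mod RING."""
    rand = rng.integers(0, RING, size=n_servers - 1, dtype=np.int64)
    last = (int(value) - int(rand.sum())) % RING
    return np.append(rand, last)


def client_shares(y_pred, sensitive, n_servers=2):
    """One client's shares of [positives in group 0, size of group 0,
    positives in group 1, size of group 1]."""
    counts = [
        int(np.sum((sensitive == 0) & (y_pred == 1))),
        int(np.sum(sensitive == 0)),
        int(np.sum((sensitive == 1) & (y_pred == 1))),
        int(np.sum(sensitive == 1)),
    ]
    return np.stack([share(c, n_servers) for c in counts])  # shape (4, n_servers)


def secure_aggregate(all_client_shares):
    """Each server sums its own column of shares over all clients; combining
    the per-server sums reveals only the global counts."""
    per_server = np.sum(all_client_shares, axis=0) % RING   # shape (4, n_servers)
    return per_server.sum(axis=1) % RING                    # shape (4,)


def add_dp_noise(counts, epsilon=1.0):
    """Laplace mechanism on the aggregated counts (assumed sensitivity 1)."""
    return counts.astype(float) + rng.laplace(scale=1.0 / epsilon, size=counts.shape)


# Toy usage with 5 simulated clients and a binary sensitive attribute.
clients = []
for _ in range(5):
    y_pred = rng.integers(0, 2, size=100)       # local model predictions
    sensitive = rng.integers(0, 2, size=100)    # local sensitive attribute
    clients.append(client_shares(y_pred, sensitive))

noisy = add_dp_noise(secure_aggregate(np.array(clients)), epsilon=1.0)
rate_g0 = noisy[0] / max(noisy[1], 1.0)
rate_g1 = noisy[2] / max(noisy[3], 1.0)
print("estimated demographic-parity gap:", abs(rate_g0 - rate_g1))
```

In a real system the noisy gap (or noisy group-wise statistics) would feed back into training, e.g. as a fairness penalty or post-processing step; that part is omitted here.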
