

Poster
in
Affinity Workshop: Women in Machine Learning

Towards Private and Fair Federated Learning

Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi


Abstract:

Existing bias mitigation algorithms for machine learning (ML) based decision-making systems assume that users' sensitive attributes are available to a central entity, which violates user privacy. Achieving fairness in Federated Learning (FL), which is designed to protect users' raw data, is challenging because bias mitigation algorithms inherently require access to sensitive attributes. We work towards resolving this conflict between privacy and fairness by combining FL with Secure Multi-Party Computation (MPC) and Differential Privacy (DP). We propose methods to train group-fair models in cross-device FL under complete privacy guarantees, and we demonstrate the effectiveness of our solution in achieving group fairness on two real-world datasets.
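To illustrate the core idea of the abstract, here is a minimal sketch of how per-group statistics needed for fairness auditing can be aggregated without any party seeing an individual's sensitive attribute, using additive secret sharing (one building block of MPC). All names, the toy data, and the choice of demographic parity as the fairness metric are illustrative assumptions, not the paper's actual protocol; the paper's method additionally adds differential privacy noise, which is omitted here for brevity.

```python
import random

PRIME = 2**61 - 1  # field modulus for additive secret sharing (illustrative choice)

def share(value, n_parties):
    """Split an integer into n additive shares mod PRIME.

    No proper subset of shares reveals anything about `value`.
    """
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

# Toy client data: each client holds its own sensitive group membership
# and the model's binary prediction for that client (hypothetical values).
clients = [
    {"group": 0, "pos": 1},
    {"group": 0, "pos": 0},
    {"group": 1, "pos": 1},
    {"group": 1, "pos": 1},
]

n_servers = 3  # computing parties that each receive one share per value

# Each server accumulates its share of every per-group count locally;
# individual contributions stay hidden inside the random shares.
agg = {(g, stat): [0] * n_servers for g in (0, 1) for stat in ("pos", "total")}
for c in clients:
    for stat, v in (("pos", c["pos"]), ("total", 1)):
        for i, s in enumerate(share(v, n_servers)):
            agg[(c["group"], stat)][i] = (agg[(c["group"], stat)][i] + s) % PRIME

# Only the final aggregates are reconstructed, never individual records.
pos0, tot0 = reconstruct(agg[(0, "pos")]), reconstruct(agg[(0, "total")])
pos1, tot1 = reconstruct(agg[(1, "pos")]), reconstruct(agg[(1, "total")])

# Demographic parity gap, computed from aggregates alone.
gap = abs(pos0 / tot0 - pos1 / tot1)  # 0.5 on this toy data
```

Because additive sharing is linear, the servers can sum shares without communicating, and only the group-level rates are ever revealed; a DP mechanism would then perturb these aggregates before release.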
