

Poster in Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership

FeO2: Federated Learning with Opt-Out Differential Privacy

Nasser Aldaghri · Hessam Mahdavifar · Ahmad Beirami


Abstract: The trained model in federated learning (FL) might still leak private client information through model updates, even if clients' data is kept local. Differential privacy (DP) can be employed to provide privacy guarantees in FL, typically at the cost of degraded model performance. One fundamental feature of FL is heterogeneity. While data and system heterogeneity have been studied, heterogeneity in privacy requirements has not been addressed in FL. In this work, we consider a heterogeneous privacy setup where clients are private by default, but some of them choose to opt out of privacy. We propose a new algorithm for personalized federated learning with opt-out DP, referred to as FeO2, and discuss its advantages over baseline private and personalized FL algorithms. We demonstrate the success of FeO2 on a simplified federated point estimation problem. Finally, we conduct extensive experiments on federated datasets to show the performance gains of FeO2 over the baseline private and personalized federated learning algorithms. We observe that FeO2 provides significant gains for both the global model and the personalized models compared to baseline private federated learning. Additionally, we show that clients who opt out can gain up to 3.5% in performance compared to private clients on the considered datasets, illustrating an incentive for clients to opt out.
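To make the opt-out setup concrete, below is a minimal Python sketch of one server aggregation round, assuming a DP-FedAvg-style Gaussian mechanism: private clients' updates are clipped and noised, while opted-out clients' updates are used raw. This is an illustrative assumption, not the authors' implementation; the function name feo2_round, the equal weighting of the two groups, and all parameter values are hypothetical.

```python
# Hypothetical sketch of one FL round with opt-out DP (not the paper's code).
# Assumes a DP-FedAvg-style mechanism for private clients.
import numpy as np


def clip(update, clip_norm):
    """Scale the update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update


def feo2_round(global_model, client_updates, opted_out,
               clip_norm=1.0, noise_multiplier=1.0,
               rng=np.random.default_rng(0)):
    """Aggregate one round; opted_out[i] is True if client i waived privacy.

    Private clients' updates are clipped and Gaussian-noised; opted-out
    clients' updates pass through unmodified. Equal weighting across all
    clients is an assumed design choice for this sketch.
    """
    processed = []
    for u, out in zip(client_updates, opted_out):
        if out:
            processed.append(u)  # opted out: raw, noiseless update
        else:
            noisy = clip(u, clip_norm) + rng.normal(
                0.0, noise_multiplier * clip_norm, size=u.shape)
            processed.append(noisy)  # private: clipped + noised
    return global_model + np.mean(processed, axis=0)


# Toy usage: 4 clients, client 0 opts out of privacy.
model = np.zeros(3)
updates = [np.array([1.0, 0.5, -0.2]) for _ in range(4)]
model = feo2_round(model, updates, opted_out=[True, False, False, False])
print(model)
```

In this sketch, the noiseless updates from opted-out clients reduce the effective noise in the aggregate, which is one plausible reading of how such clients could earn the performance gains reported in the abstract.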
