

Poster in Workshop: Federated Learning: Recent Advances and New Challenges

Voting-based Approaches for Differentially Private Federated Learning

Yuqing Zhu · Xiang Yu · Yi-Hsuan Tsai · Francesco Pittaluga · Masoud Faraki · Manmohan Chandraker · Yu-Xiang Wang


Abstract: Differentially Private Federated Learning (DPFL) is an emerging field with many applications. Gradient-averaging-based DPFL methods require costly communication rounds and hardly scale to large-capacity models, due to the explicit dimension dependence of their added noise. In this paper, inspired by knowledge-transfer-based non-federated private learning methods, we design two DPFL algorithms (AE-DPFL and kNN-DPFL) that provide provable DP guarantees for both instance-level and agent-level privacy regimes. By voting among the data labels returned by each local model, instead of averaging gradients, our algorithms avoid the dimension dependence and significantly reduce the communication cost. Theoretically, by applying secure multi-party computation, we can exponentially amplify the (data-dependent) privacy guarantees when the margin of the voting scores is large. Empirical evaluation at both instance- and agent-level DP across five datasets shows 2% to 12% higher accuracy than DP-FedAvg at the same privacy cost, or less than 65% of its privacy cost at the same accuracy.
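The abstract does not spell out the aggregation step, but the label-voting idea it describes resembles PATE-style noisy plurality voting. Below is a minimal sketch of that mechanism, assuming Gaussian noise on the vote counts; the function name `noisy_label_vote` and the parameter `sigma` are illustrative choices, not the paper's actual API, and the full AE-DPFL and kNN-DPFL algorithms involve additional machinery (e.g., secure multi-party computation) not shown here.

```python
import numpy as np

def noisy_label_vote(agent_predictions, num_classes, sigma, rng=None):
    """Aggregate local agents' predicted labels by a noisy plurality vote.

    agent_predictions: array of shape (num_agents,) giving each agent's
        predicted class for a single query point.
    sigma: standard deviation of Gaussian noise added to each vote count;
        larger sigma gives stronger privacy at the cost of accuracy.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Tally how many agents voted for each class.
    counts = np.bincount(agent_predictions, minlength=num_classes).astype(float)
    # Add calibrated noise to the counts, then release only the argmax.
    noisy_counts = counts + rng.normal(0.0, sigma, size=num_classes)
    return int(np.argmax(noisy_counts))

# Example: 10 agents vote on a 3-class query point.
votes = np.array([0, 0, 1, 0, 2, 0, 0, 1, 0, 0])
label = noisy_label_vote(votes, num_classes=3, sigma=2.0)
print(label)  # likely 0, since its vote margin is large relative to sigma
```

This sketch also illustrates the margin intuition behind the abstract's data-dependent amplification claim: when the top class leads the runner-up by a wide margin, the added noise almost never flips the released label, so little information about any individual vote leaks.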
