Differentially-Private Federated Linear Bandits
Abhimanyu Dubey, Alex 'Sandy' Pentland
Spotlight presentation: Orals & Spotlights Track 20: Social/Adversarial Learning
on 2020-12-09T07:10:00-08:00 - 2020-12-09T07:20:00-08:00
Abstract: The rapid proliferation of decentralized learning systems makes differentially-private cooperative learning essential. In this paper, we study this problem in the context of the contextual linear bandit: we consider a collection of agents cooperating to solve a common contextual bandit while ensuring that their communication remains private. For this problem, we devise FedUCB, a multiagent private algorithm for both centralized and decentralized (peer-to-peer) federated learning. We provide a rigorous technical analysis of its utility in terms of regret, improving several results in cooperative bandit learning, and we provide rigorous privacy guarantees as well. Our algorithms deliver competitive performance both in terms of pseudoregret bounds and empirical benchmark performance in various multi-agent settings.
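To illustrate the setting described in the abstract, the following is a minimal sketch (not the authors' FedUCB algorithm) of a LinUCB-style agent that cooperates by broadcasting noise-perturbed sufficient statistics rather than raw data. The class name, the Gaussian perturbation, and the noise calibration are illustrative assumptions; the paper's actual privacy mechanism and regret analysis are given in the full text.

```python
# Illustrative sketch only: a LinUCB-style agent that shares Gaussian-perturbed
# sufficient statistics with peers. All names and noise scales are assumptions,
# not the FedUCB algorithm from the paper.
import numpy as np


class PrivateLinUCBAgent:
    def __init__(self, dim, alpha=1.0, noise_scale=1.0, rng=None):
        self.dim = dim
        self.alpha = alpha                    # exploration-bonus scale
        self.noise_scale = noise_scale        # std-dev of privacy noise (assumed calibration)
        self.rng = rng or np.random.default_rng()
        self.V = np.eye(dim)                  # local Gram matrix: I + sum_t x_t x_t^T
        self.u = np.zeros(dim)                # local reward-weighted sum: sum_t r_t x_t
        self.V_shared = np.zeros((dim, dim))  # aggregated statistics received from peers
        self.u_shared = np.zeros(dim)

    def select_arm(self, contexts):
        """Pick the context with the largest upper confidence bound."""
        V = self.V + self.V_shared
        V_inv = np.linalg.inv(V)
        theta = V_inv @ (self.u + self.u_shared)
        ucb = [x @ theta + self.alpha * np.sqrt(x @ V_inv @ x) for x in contexts]
        return int(np.argmax(ucb))

    def update(self, x, reward):
        """Incorporate the observed context-reward pair into local statistics."""
        self.V += np.outer(x, x)
        self.u += reward * x

    def private_statistics(self):
        """Release noisy copies of the local statistics (symmetrized Gaussian noise)."""
        noise = self.rng.normal(scale=self.noise_scale, size=(self.dim, self.dim))
        V_noisy = self.V - np.eye(self.dim) + 0.5 * (noise + noise.T)
        u_noisy = self.u + self.rng.normal(scale=self.noise_scale, size=self.dim)
        return V_noisy, u_noisy

    def receive(self, peer_stats):
        """Aggregate the (noisy) statistics broadcast by other agents."""
        self.V_shared = sum(V for V, _ in peer_stats)
        self.u_shared = sum(u for _, u in peer_stats)
```

The sketch only conveys the communication pattern: each agent acts on the pooled statistics but never transmits raw contexts or rewards, which is the intuition behind private federated bandit learning.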