FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
Jinyuan Jia · Zhuowen Yuan · Dinuka Sahabandu · Luyao Niu · Arezoo Rajabi · Bhaskar Ramasubramanian · Bo Li · Radha Poovendran

Thu Dec 14 08:45 AM -- 10:45 AM (PST) @ Great Hall & Hall B1+B2 #805

Federated learning (FL) provides a distributed training paradigm in which multiple clients jointly train a global model without sharing their local data. However, recent studies have shown that FL also opens a new attack surface for backdoor attacks. For instance, an attacker can compromise a subset of clients and thereby corrupt the global model so that it misclassifies any input carrying a backdoor trigger as the adversarial target. Existing FL defenses against backdoor attacks usually detect and exclude the corrupted information from compromised clients based on a static attacker model. Such defenses are inadequate against dynamic attackers who strategically adapt their attack strategies. To bridge this gap, we model the strategic interactions between the defender and dynamic attackers as a minimax game. Based on an analysis of this game, we design an interactive defense mechanism, FedGame. We prove that, under mild assumptions, the global model trained with FedGame under backdoor attacks is close to the one trained without attacks. Empirically, we compare FedGame with multiple state-of-the-art baselines on several benchmark datasets under various attacks. We show that FedGame effectively defends against strategic attackers and achieves significantly higher robustness than the baselines. Our code is available at: https://github.com/AI-secure/FedGame.
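The minimax interaction described above can be illustrated with a toy sketch. This is not the paper's actual FedGame algorithm; the scoring rule, the scalar-update model, and all constants below are illustrative assumptions. A defender down-weights client updates that deviate from a clean estimate, while a dynamic attacker best-responds by choosing the poison scale that shifts the aggregate the most:

```python
# Toy sketch (NOT the paper's algorithm): a one-dimensional minimax
# interaction between a defender, who down-weights suspicious client
# updates, and an attacker, who scales a single backdoor update.
# All function names and numbers here are illustrative assumptions.

def aggregate(updates, weights):
    """Weighted average of scalar client updates."""
    return sum(u * w for u, w in zip(updates, weights)) / sum(weights)

def defender_weights(updates, clean_estimate, tau=1.0):
    """Defender's move: down-weight clients whose update deviates
    from a (hypothetical) clean estimate of the true update."""
    return [1.0 / (1.0 + abs(u - clean_estimate) / tau) for u in updates]

def attacker_best_response(benign_updates, clean_estimate, scales):
    """Attacker's move: pick the poison scale that shifts the
    defended aggregate furthest from the clean estimate."""
    best_scale, best_shift = None, -1.0
    for s in scales:
        updates = benign_updates + [clean_estimate + s]  # one compromised client
        w = defender_weights(updates, clean_estimate)
        shift = abs(aggregate(updates, w) - clean_estimate)
        if shift > best_shift:
            best_scale, best_shift = s, shift
    return best_scale, best_shift

# Four benign clients reporting the true update (0.0); the attacker
# searches over candidate poison scales.
scale, shift = attacker_best_response([0.0] * 4, 0.0, [0.5, 1.0, 2.0, 5.0, 10.0])
```

With this weighting rule, the attacker's achievable shift stays bounded (here below 1/4, the inverse of the benign client count) no matter how large the poison scale grows, which is the flavor of guarantee a game-theoretic defense aims for.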

Author Information

Jinyuan Jia (The Pennsylvania State University)
Zhuowen Yuan (UIUC)
Dinuka Sahabandu (University of Washington, Seattle)
Luyao Niu (University of Washington)
Arezoo Rajabi (University of Washington)
Bhaskar Ramasubramanian (Western Washington University)

Bhaskar is an Assistant Professor in Electrical and Computer Engineering at Western Washington University. His research aims to reason about behaviors of autonomous cyber and cyber-physical systems (CPS) using techniques from machine learning, control, optimization, and game theory by developing solutions that: (i) integrate feedback from multiple heterogeneous sources, (ii) are resilient to actions of malicious/dishonest participants, and (iii) provide provable performance guarantees.

Bo Li (UChicago/UIUC)
Radha Poovendran (University of Washington, Seattle)