

Poster in Workshop: Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23)

Backdoor Threats from Compromised Foundation Models to Federated Learning

Xi Li · Songhe Wang · Chen Wu · Hao Zhou · Jiaqi Wang

Keywords: [ Foundation Model ] [ Backdoor Attack ] [ Adversarial Learning ] [ Federated Learning ]


Abstract:

Federated learning (FL) represents a novel paradigm for machine learning, addressing critical issues of data privacy and security, yet it suffers from data insufficiency and imbalance. The emergence of foundation models (FMs) offers a promising solution to these problems; for instance, FMs can serve as teacher models or as good starting points for FL. However, integrating FMs into FL introduces a new challenge, exposing FL systems to potential threats. This paper investigates the robustness of FL incorporating FMs by assessing their susceptibility to backdoor attacks. In contrast to classic backdoor attacks against FL, the proposed attack (1) does not require the attacker to be fully involved in the FL process; (2) poses a significant risk in practical FL scenarios; (3) is able to evade existing robust FL frameworks and FL backdoor defenses; and (4) underscores the need for research on the robustness of FL systems integrated with FMs. The effectiveness of the proposed attack is demonstrated by extensive experiments with various well-known models and benchmark datasets spanning both text and image classification.
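To make the threat model concrete, below is a minimal PyTorch sketch of one way a compromised FM could seed an FL system with a backdoor: the FM is fine-tuned so that trigger-stamped inputs map to an attacker-chosen class, and clients then initialize from those weights. All names (stamp_trigger, SmallCNN, poison_foundation_model), the trigger pattern, and the hyperparameters are hypothetical illustrations, not the paper's actual method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, TensorDataset

    def stamp_trigger(images, patch_value=1.0, size=3):
        # Stamp a small square trigger in the bottom-right corner of each image.
        poisoned = images.clone()
        poisoned[..., -size:, -size:] = patch_value
        return poisoned

    class SmallCNN(nn.Module):
        # Tiny stand-in for a pretrained foundation model (illustrative only).
        def __init__(self, num_classes=10):
            super().__init__()
            self.conv = nn.Conv2d(1, 8, 3, padding=1)
            self.fc = nn.Linear(8 * 28 * 28, num_classes)

        def forward(self, x):
            return self.fc(F.relu(self.conv(x)).flatten(1))

    def poison_foundation_model(model, loader, target_class=0, epochs=1, lr=1e-3):
        # Fine-tune so trigger-stamped inputs are classified as target_class
        # while clean inputs keep their original labels.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                x_bd = stamp_trigger(x)
                y_bd = torch.full_like(y, target_class)
                loss = F.cross_entropy(model(torch.cat([x, x_bd])),
                                       torch.cat([y, y_bd]))
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model

    # Synthetic stand-in data; a real attacker would poison during pretraining
    # or fine-tuning of a published FM.
    x = torch.rand(256, 1, 28, 28)
    y = torch.randint(0, 10, (256,))
    compromised_fm = poison_foundation_model(
        SmallCNN(), DataLoader(TensorDataset(x, y), batch_size=32))

    # Each FL client then initializes from the compromised FM, so the backdoor
    # can persist through benign local training and server-side aggregation
    # without the attacker ever joining an FL round:
    # client_model.load_state_dict(compromised_fm.state_dict())

This sketch captures why such an attack can evade round-level defenses: the malicious behavior enters through the shared initialization (or teacher) rather than through any client update that a robust aggregation rule could inspect.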
