

Poster in Workshop: Federated Learning: Recent Advances and New Challenges

Trusted Aggregation (TAG): Model Filtering Backdoor Defense In Federated Learning

Joseph Lavond · Minhao Cheng · Yao Li


Abstract:

Federated Learning is a framework for training machine learning models on multiple local data sets without the server having direct access to the data. A shared model is learned jointly through an interactive process in which the server combines locally computed model gradients or weights from the clients. However, this lack of data transparency naturally raises concerns about model security. Recently, several state-of-the-art backdoor attacks have been proposed that achieve high attack success rates while remaining difficult to detect, compromising federated learning models. In this paper, motivated by differences in the output-layer distribution between models trained with and without backdoor attacks, we propose a defense method that prevents backdoor attacks from influencing the model while maintaining the accuracy of the original classification task.
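To make the aggregation setting concrete, the sketch below illustrates the general idea the abstract describes: comparing each client's output-layer weights against a trusted reference and excluding outliers before averaging. This is not the authors' TAG algorithm; the function name, distance metric, and z-score threshold are all hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: NOT the TAG method from the paper.
# Filters client output-layer updates that deviate strongly from a
# trusted reference model before averaging. All names and thresholds
# here are hypothetical assumptions.
import numpy as np

def filter_and_average(client_output_layers, trusted_output_layer, threshold=2.0):
    """Average client output-layer weights, dropping suspicious clients.

    client_output_layers: list of np.ndarray, one output-layer weight matrix per client
    trusted_output_layer: np.ndarray, output-layer weights of a trusted reference model
    threshold: z-score cutoff (hypothetical) on the distance to the reference
    """
    # Distance of each client's output layer from the trusted reference.
    distances = np.array([np.linalg.norm(w - trusted_output_layer)
                          for w in client_output_layers])

    # Flag clients whose distance is an outlier relative to the other clients.
    z_scores = (distances - distances.mean()) / (distances.std() + 1e-12)
    accepted = [w for w, z in zip(client_output_layers, z_scores) if z < threshold]

    # Fall back to plain averaging if every client was filtered out.
    if not accepted:
        accepted = client_output_layers
    return np.mean(accepted, axis=0)

# Example usage with random weights standing in for real client updates.
rng = np.random.default_rng(0)
trusted = rng.normal(size=(10, 5))
clients = [trusted + 0.01 * rng.normal(size=(10, 5)) for _ in range(9)]
clients.append(trusted + 5.0 * rng.normal(size=(10, 5)))  # a simulated "backdoored" outlier
aggregated = filter_and_average(clients, trusted)
print(aggregated.shape)  # (10, 5)
```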
