In federated learning (FL), a global model is learned by aggregating model updates computed from a set of client nodes, each holding its own data. A key challenge in FL is the heterogeneity of data across clients, whose local data distributions differ from one another. Standard FL algorithms perform multiple local gradient steps before synchronizing the model, which can lead clients to overly minimize their local objectives and diverge from the solutions of other clients. We demonstrate that, in such a setting, individual client models experience "catastrophic forgetting" with respect to the data of other clients. We propose a simple yet efficient approach that modifies the cross-entropy objective on a per-client basis such that classes outside a client's label set are shielded from abrupt representation change. Through empirical evaluations, we demonstrate that our approach can alleviate this problem, especially under the most challenging FL settings with high heterogeneity and low client participation.
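The sketch below (PyTorch) illustrates one way such shielding can be realized: restricting the softmax normalization to the classes a client actually holds, so that absent classes receive no gradient from that client's local steps. The function name `masked_cross_entropy` and the masking strategy are illustrative assumptions for exposition, not necessarily the exact per-client objective proposed in the paper.

```python
import torch
import torch.nn.functional as F

def masked_cross_entropy(logits, targets, client_classes):
    """Cross-entropy whose softmax is restricted to a client's label set.

    Classes the client never observes are assigned -inf logits, so they get
    zero probability mass and therefore zero gradient: their output weights
    (and the representations feeding them) are not pushed around by this
    client's local updates.
    """
    mask = torch.full_like(logits, float("-inf"))
    mask[:, client_classes] = 0.0  # keep only locally present classes
    return F.cross_entropy(logits + mask, targets)

# Example: a client that only holds classes {0, 3, 7} out of 10.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.tensor([0, 3, 7, 0, 3, 7, 0, 3])
loss = masked_cross_entropy(logits, targets, client_classes=[0, 3, 7])
loss.backward()  # gradients for absent-class logits are exactly zero
```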
Author Information
Gwen Legate (Université de Montréal)
Lucas Page-Caccia (McGill University)
Eugene Belilovsky (Concordia University / Mila)
More from the Same Authors
- 2022 : Poly-S: Analyzing and Improving Polytropon for Data-Efficient Multi-Task Learning
  Lucas Page-Caccia · Edoardo Maria Ponti · Liyuan Liu · Matheus Pereira · Nicolas Le Roux · Alessandro Sordoni
- 2022 : Imitation from Observation With Bootstrapped Contrastive Learning
  Medric Sonwa · Johanna Hansen · Eugene Belilovsky
- 2022 : Building a Subspace of Policies for Scalable Continual Learning
  Jean-Baptiste Gaya · Thang Long Doan · Lucas Page-Caccia · Laure Soulier · Ludovic Denoyer · Roberta Raileanu
- 2022 : Deceiving the CKA Similarity Measure in Deep Learning
  MohammadReza Davari · Stefan Horoi · Amine Natik · Guillaume Lajoie · Guy Wolf · Eugene Belilovsky
- 2022 : Adversarial Attacks on Feature Visualization Methods
  Michael Eickenberg · Eugene Belilovsky · Jonathan Marty
- 2020 Poster: Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning
  Massimo Caccia · Pau Rodriguez · Oleksiy Ostapenko · Fabrice Normandin · Min Lin · Lucas Page-Caccia · Issam Hadj Laradji · Irina Rish · Alexandre Lacoste · David Vázquez · Laurent Charlin
- 2019 Poster: Online Continual Learning with Maximal Interfered Retrieval
  Rahaf Aljundi · Eugene Belilovsky · Tinne Tuytelaars · Laurent Charlin · Massimo Caccia · Min Lin · Lucas Page-Caccia