In real-world federated learning scenarios, participants may have their own personalized labels that are incompatible with those of other clients, due to different label permutations or to tackling entirely different tasks or domains. However, most existing FL approaches cannot effectively handle such extremely heterogeneous scenarios, since they often assume that (1) all participants use a synchronized set of labels, and (2) they train on the same tasks from the same domain. To tackle these challenges, we introduce Factorized-FL, which effectively handles label- and task-heterogeneous federated learning settings by factorizing the model parameters into a pair of rank-1 vectors, where one captures knowledge common across different labels and tasks, and the other captures knowledge specific to each local model's task. Moreover, based on distances in the client-specific vector space, Factorized-FL performs selective aggregation, utilizing only the knowledge of the relevant participants for each client. We extensively validate our method in both label- and domain-heterogeneous settings, where it outperforms state-of-the-art personalized federated learning methods. The code is available at https://github.com/wyjeong/Factorized-FL.
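The two ideas in the abstract (rank-1 factorization of each weight matrix into a shared vector and a client-specific vector, plus similarity-based selective aggregation of the shared part) can be sketched roughly as follows. This is a minimal NumPy illustration under assumptions: the function names, the cosine-similarity measure, and the relevance threshold are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, d_out, d_in = 4, 8, 6

# Each client k holds a rank-1 factorization W_k = u_k v_k^T, where
# u_k is treated as the shared/common factor and v_k as the
# client-specific factor (this split is an assumption for illustration).
u = rng.standard_normal((n_clients, d_out))
v = rng.standard_normal((n_clients, d_in))

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def selective_aggregate(k, u, v, threshold=0.0):
    """Similarity-weighted mean of the shared factors u_j, keeping only
    clients whose client-specific vector v_j is close to client k's."""
    sims = np.array([cosine(v[k], v[j]) for j in range(len(v))])
    weights = sims * (sims >= threshold)   # drop irrelevant clients
    weights = weights / weights.sum()      # self-similarity is 1, so sum > 0
    return (weights[:, None] * u).sum(axis=0)

# Client 0 rebuilds its personalized weight from the aggregated shared
# factor and its own (never aggregated) client-specific factor.
u0_agg = selective_aggregate(0, u, v)
W0 = np.outer(u0_agg, v[0])                # rank-1 personalized weight
print(W0.shape)                            # (8, 6)
```

Keeping `v_k` local while only the relevant `u_j` are averaged is what lets each client benefit from others' common knowledge without mixing in incompatible label- or task-specific information.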
Author Information
Wonyong Jeong (Korea Advanced Institute of Science and Technology)
Sung Ju Hwang (KAIST, AITRICS)
More from the Same Authors
- 2021 Spotlight: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning »
  Hayeon Lee · Sewoong Lee · Song Chong · Sung Ju Hwang
- 2021 Spotlight: Task-Adaptive Neural Network Search with Meta-Contrastive Learning »
  Wonyong Jeong · Hayeon Lee · Geon Park · Eunyoung Hyung · Jinheon Baek · Sung Ju Hwang
- 2021: Skill-based Meta-Reinforcement Learning »
  Taewook Nam · Shao-Hua Sun · Karl Pertsch · Sung Ju Hwang · Joseph Lim
- 2022 Poster: Learning to Generate Inversion-Resistant Model Explanations »
  Hoyong Jeong · Suyoung Lee · Sung Ju Hwang · Sooel Son
- 2022: SPRINT: Scalable Semantic Policy Pre-training via Language Instruction Relabeling »
  Jesse Zhang · Karl Pertsch · Jiahui Zhang · Taewook Nam · Sung Ju Hwang · Xiang Ren · Joseph Lim
- 2022 Poster: Graph Self-supervised Learning with Accurate Discrepancy Learning »
  Dongki Kim · Jinheon Baek · Sung Ju Hwang
- 2022 Poster: Set-based Meta-Interpolation for Few-Task Meta-Learning »
  Seanie Lee · Bruno Andreis · Kenji Kawaguchi · Juho Lee · Sung Ju Hwang
- 2021 Poster: Edge Representation Learning with Hypergraphs »
  Jaehyeong Jo · Jinheon Baek · Seul Lee · Dongki Kim · Minki Kang · Sung Ju Hwang
- 2021 Poster: Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation »
  Soojung Yang · Doyeong Hwang · Seul Lee · Seongok Ryu · Sung Ju Hwang
- 2021 Poster: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning »
  Hayeon Lee · Sewoong Lee · Song Chong · Sung Ju Hwang
- 2021 Poster: Task-Adaptive Neural Network Search with Meta-Contrastive Learning »
  Wonyong Jeong · Hayeon Lee · Geon Park · Eunyoung Hyung · Jinheon Baek · Sung Ju Hwang
- 2021 Poster: Mini-Batch Consistent Slot Set Encoder for Scalable Set Encoding »
  Bruno Andreis · Jeffrey Willette · Juho Lee · Sung Ju Hwang