With an ever-increasing number of smart edge devices operating under computation and communication constraints, Federated Learning (FL) is a promising paradigm for learning from distributed devices and their data. Typical FL approaches aim to learn a single model that performs well for all clients simultaneously, but such an approach can be ineffective when the clients' data distributions are heterogeneous. In these cases, we aim to learn a personalized model for each client's data while still leveraging shared information across clients. A promising avenue for such personalization is client-specific side information, such as client embeddings obtained from domain-specific knowledge, pre-trained models, or simply one-hot encodings. In this work, we propose a new FL framework that utilizes a general form of client-specific side information for personalized federated learning. We prove that incorporating side information can improve model performance for simplified multi-task linear regression and matrix completion problems. We further validate these results with image classification experiments on Omniglot, CIFAR-10, and CIFAR-100, showing that proper use of side information can benefit personalization.
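To make the premise concrete, below is a minimal, self-contained sketch of one natural way side information can drive personalization in the multi-task linear regression setting the abstract mentions. This is an illustration under our own assumptions, not the paper's algorithm: we assume each client i holds a known embedding v_i, its personalized model is B v_i for a shared matrix B, and clients jointly fit B via federated gradient averaging. All names (B, V, lr) and the synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only: each client i has a personalized linear model
# theta_i = B @ v_i, where v_i is that client's known side-information
# embedding (e.g., a one-hot code or a pre-trained client embedding) and
# B is a shared matrix learned collaboratively across clients.
d, k, n_clients, n = 10, 3, 20, 50

B_true = rng.normal(size=(d, k))
V = rng.normal(size=(n_clients, k))  # fixed, known client side information
clients = []
for i in range(n_clients):
    X = rng.normal(size=(n, d))
    y = X @ (B_true @ V[i]) + 0.1 * rng.normal(size=n)
    clients.append((X, y))

# Federated-style training: each round, every client computes the gradient
# of its local squared loss with respect to the shared B; the server
# averages these gradients and takes a step (one-step FedSGD, for brevity).
B = np.zeros((d, k))
lr = 0.05
for _ in range(500):
    grads = []
    for i, (X, y) in enumerate(clients):
        resid = X @ (B @ V[i]) - y                    # local prediction residual
        grads.append(X.T @ np.outer(resid, V[i]) / n)  # d(local loss)/dB
    B -= lr * np.mean(grads, axis=0)

rel_err = np.linalg.norm(B - B_true) / np.linalg.norm(B_true)
print(f"relative error in shared matrix B: {rel_err:.3f}")
```

One appeal of this kind of parameterization is that personalization adds no per-client trainable parameters once v_i is given: every client's update improves the shared B, so clients with little local data still benefit from the others.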
Author Information
Liam Collins (The University of Texas at Austin)
Enmao Diao (Duke University)

I am a fourth-year Ph.D. candidate advised by Prof. Vahid Tarokh in Electrical Engineering at Duke University, Durham, North Carolina, USA. I was born in Chengdu, Sichuan, China in 1994. I received the B.S. degree in Computer Science and Electrical Engineering from the Georgia Institute of Technology, Georgia, USA, in 2016 and the M.S. degree in Electrical Engineering from Harvard University, Cambridge, USA, in 2018.
Tanya Roosta (Amazon)
Jie Ding (University of Minnesota)
Tao Zhang (Alexa AI)
More from the Same Authors
- 2021: Communication-Efficient Federated Learning for Neural Machine Translation
  Tanya Roosta · Peyman Passban · Ankit Chadha
- 2022: Building Large Machine Learning Models from Small Distributed Models: A Layer Matching Approach
  xinwei zhang · Bingqing Song · Mehrdad Honarkhah · Jie Ding · Mingyi Hong
- 2023 Poster: A Unified Framework for Inference-Stage Backdoor Defenses
  Xun Xian · Ganghua Wang · Jayanth Srinivasa · Ashish Kundu · Xuan Bi · Mingyi Hong · Jie Ding
- 2023: Interactive Panel Discussion
  Tanya Roosta · Tim Dettmers · Minjia Zhang · Nazneen Rajani
- 2022 Spotlight: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022 Poster: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022 Poster: GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
  Enmao Diao · Jie Ding · Vahid Tarokh
- 2022 Poster: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2022 Poster: SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training
  Enmao Diao · Jie Ding · Vahid Tarokh
- 2020 Poster: Task-Robust Model-Agnostic Meta-Learning
  Liam Collins · Aryan Mokhtari · Sanjay Shakkottai
- 2020 Poster: Assisted Learning: A Framework for Multi-Organization Learning
  Xun Xian · Xinran Wang · Jie Ding · Reza Ghanadan
- 2020 Spotlight: Assisted Learning: A Framework for Multi-Organization Learning
  Xun Xian · Xinran Wang · Jie Ding · Reza Ghanadan
- 2019 Poster: Gradient Information for Representation and Modeling
  Jie Ding · Robert Calderbank · Vahid Tarokh