To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning
Yae Jee Cho · Divyansh Jhunjhunwala · Tian Li · Virginia Smith · Gauri Joshi
Event URL: https://openreview.net/forum?id=pG08eM0CQba »
Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model without directly sharing their local data. Although there is an abundance of research on improving the speed, efficiency, and accuracy of federated training, most works implicitly assume that all clients are willing to participate in the FL framework. Due to data heterogeneity, however, the global model may not work well for some clients, and they may instead choose to use their own local model. Such disincentivization of clients can be problematic from the server's perspective because having more participating clients yields a better global model, and offers better privacy guarantees to the participating clients. In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model by dynamically adjusting the aggregation weights assigned to their updates. Our experiments show that IncFL increases the number of incentivized clients by $30$-$55\%$ compared to standard federated training algorithms, and can also improve the generalization performance of the global model on unseen clients.
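The abstract describes IncFL as dynamically adjusting the aggregation weights assigned to client updates so that more clients prefer the global model over their own local models. The paper's actual objective and update rule are not given on this page; the sketch below is only an illustrative heuristic consistent with that description. The function names (`aggregate`, `incentive_aware_weights`), the exponential re-weighting rule, and the `step` parameter are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def aggregate(updates, weights):
    """FedAvg-style weighted average of client model updates."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    return sum(w * u for w, u in zip(weights, updates))

def incentive_aware_weights(global_losses, local_losses, step=0.5):
    """Hypothetical heuristic: up-weight clients for whom the global model
    underperforms their own local model (global loss > local loss), so the
    next aggregation step moves the global model toward incentivizing them."""
    n = len(global_losses)
    w = np.full(n, 1.0 / n)  # start from uniform weights
    gap = np.asarray(global_losses, float) - np.asarray(local_losses, float)
    # boost only disincentivized clients (positive loss gap)
    w = w * np.exp(step * np.maximum(gap, 0.0))
    return w / w.sum()

# Toy example: 3 clients with scalar "model updates".
updates = [np.array([1.0]), np.array([2.0]), np.array([4.0])]
g_loss = [0.2, 0.9, 0.3]   # global-model loss on each client's data
l_loss = [0.3, 0.4, 0.3]   # each client's local-model loss (client 1 is disincentivized)

w = incentive_aware_weights(g_loss, l_loss)
new_global = aggregate(updates, w)  # client 1's update gets the largest weight
```

In this toy run, only client 1 has a positive gap (global loss 0.9 vs. local loss 0.4), so its weight rises above the uniform 1/3 while the others shrink after normalization, mirroring the abstract's idea of steering aggregation toward clients who would otherwise opt out.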
Author Information
Yae Jee Cho (Carnegie Mellon University)
Divyansh Jhunjhunwala (Carnegie Mellon University)
Tian Li (CMU)
Virginia Smith (Carnegie Mellon University)
Gauri Joshi (Carnegie Mellon University)
More from the Same Authors
-
2022 : Differentially Private Adaptive Optimization with Delayed Preconditioners »
Tian Li · Manzil Zaheer · Ken Liu · Sashank Reddi · H. Brendan McMahan · Virginia Smith -
2022 : Motley: Benchmarking Heterogeneity and Personalization in Federated Learning »
Shanshan Wu · Tian Li · Zachary Charles · Yu Xiao · Ken Liu · Zheng Xu · Virginia Smith -
2022 : Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts »
Amrith Setlur · Don Dennis · Benjamin Eysenbach · Aditi Raghunathan · Chelsea Finn · Virginia Smith · Sergey Levine -
2022 : Federated Learning under Distributed Concept Drift »
Ellango Jothimurugesan · Kevin Hsieh · Jianyu Wang · Gauri Joshi · Phillip Gibbons -
2023 Poster: Progressive Knowledge Distillation: Constructing Ensembles for Efficient Inference »
Don Dennis · Abhishek Shetty · Anish Prasad Sevekari · Kazuhito Koishida · Virginia Smith -
2023 Poster: Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift »
Saurabh Garg · Amrith Setlur · Zachary Lipton · Sivaraman Balakrishnan · Virginia Smith · Aditi Raghunathan -
2023 Poster: Variance-Reduced Gradient Estimation via Noise-Reuse in Online Evolution Strategies »
Oscar Li · James Harrison · Jascha Sohl-Dickstein · Virginia Smith · Luke Metz -
2023 Poster: Correlation Aware Distributed Vector Mean Estimation »
Shuli Jiang · PRANAY SHARMA · Gauri Joshi -
2023 Workshop: Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23) »
Jinghui Chen · Lixin Fan · Gauri Joshi · Sai Praneeth Karimireddy · Stacy Patterson · Shiqiang Wang · Han Yu -
2023 : Evaluating Large-Scale Learning Systems, Virginia Smith »
Virginia Smith -
2022 : Panel »
Virginia Smith · Michele Covell · Daniel Severo · Christopher Schroers -
2022 : Poster Session 1 »
Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li -
2022 : Contributed Talks 1 »
Courtney Paquette · Tian Li · Guy Kornowski -
2022 Workshop: Federated Learning: Recent Advances and New Challenges »
Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu -
2022 Poster: On Privacy and Personalization in Cross-Silo Federated Learning »
Ken Liu · Shengyuan Hu · Steven Wu · Virginia Smith -
2022 Poster: Adversarial Unlearning: Reducing Confidence Along Adversarial Directions »
Amrith Setlur · Benjamin Eysenbach · Virginia Smith · Sergey Levine -
2021 : Q&A with A/Professor Virginia Smith »
Virginia Smith -
2021 : Keynote Talk: Fair or Robust: Addressing Competing Constraints in Federated Learning (Virginia Smith) »
Virginia Smith -
2021 Poster: Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution »
Amrith Setlur · Oscar Li · Virginia Smith -
2021 Poster: On Large-Cohort Training for Federated Learning »
Zachary Charles · Zachary Garrett · Zhouyuan Huo · Sergei Shmulyian · Virginia Smith -
2021 Poster: Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing »
Mikhail Khodak · Renbo Tu · Tian Li · Liam Li · Maria-Florina Balcan · Virginia Smith · Ameet Talwalkar -
2021 Poster: Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation »
Divyansh Jhunjhunwala · Ankur Mallick · Advait Gadhikar · Swanand Kadhe · Gauri Joshi -
2020 Tutorial: (Track1) Federated Learning and Analytics: Industry Meets Academia Q&A »
Peter Kairouz · Brendan McMahan · Virginia Smith -
2020 Poster: Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization »
Jianyu Wang · Qinghua Liu · Hao Liang · Gauri Joshi · H. Vincent Poor -
2020 Tutorial: (Track1) Federated Learning and Analytics: Industry Meets Academia »
Brendan McMahan · Virginia Smith · Peter Kairouz -
2019 Workshop: Workshop on Federated Learning for Data Privacy and Confidentiality »
Lixin Fan · Jakub Konečný · Yang Liu · Brendan McMahan · Virginia Smith · Han Yu -
2018 : Prof. Virginia Smith »
Virginia Smith