We consider a standard federated learning (FL) setup where a group of clients periodically coordinate with a central server to train a statistical model. We develop a general algorithmic framework called FedLin to tackle some of the key challenges intrinsic to FL, namely objective heterogeneity, systems heterogeneity, and infrequent and imprecise communication. Our framework is motivated by the observation that under these challenges, various existing FL algorithms suffer from a fundamental speed-accuracy conflict: they either guarantee linear convergence but to an incorrect point, or convergence to the global minimum but at a sub-linear rate, i.e., fast convergence comes at the expense of accuracy. In contrast, when the clients' local loss functions are smooth and strongly convex, we show that FedLin guarantees linear convergence to the global minimum, despite arbitrary objective and systems heterogeneity. We then establish matching upper and lower bounds on the convergence rate of FedLin that highlight the effects of infrequent, periodic communication. Finally, we show that FedLin preserves linear convergence rates under aggressive gradient sparsification, and quantify the effect of the compression level on the convergence rate. Notably, our work is the first to provide tight linear convergence rate guarantees, and constitutes the first comprehensive analysis of gradient sparsification in FL.
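The abstract's claim about "aggressive gradient sparsification" refers to clients compressing their gradients before communicating with the server. As a hypothetical illustration (the function name, shapes, and the top-k choice below are assumptions for exposition, not the paper's exact operator or notation), a common sparsification scheme keeps only the k largest-magnitude coordinates of each client gradient:

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Illustrative sketch of gradient sparsification; zeroed coordinates
    are simply dropped from the communicated update.
    """
    sparse = np.zeros_like(grad)
    if k <= 0:
        return sparse
    # argpartition finds the indices of the k largest |grad| entries
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

# Each client sparsifies its local gradient before communicating;
# the server then averages the sparse updates.
rng = np.random.default_rng(0)
client_grads = [rng.standard_normal(10) for _ in range(4)]
k = 3  # compression level: only k of the 10 coordinates survive per client
aggregated = np.mean([top_k_sparsify(g, k) for g in client_grads], axis=0)
```

The ratio of k to the gradient dimension plays the role of the "compression level" whose effect on the convergence rate the abstract says is quantified.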
Author Information
Aritra Mitra (University of Pennsylvania)
Rayana Jaafar (University of Pennsylvania)
George J. Pappas (University of Pennsylvania)
George J. Pappas is the UPS Foundation Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Sciences, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory, in particular hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received various awards, including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation PECASE, and the George H. Heilmeier Faculty Excellence Award.
Hamed Hassani (ETH Zurich)
More from the Same Authors
- 2022 Spotlight: Learning Operators with Coupled Attention
  Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: NOMAD: Nonlinear Manifold Decoders for Operator Learning
  Jacob Seidman · Georgios Kissas · Paris Perdikaris · George J. Pappas
- 2022 Poster: Learning Operators with Coupled Attention
  Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: Probable Domain Generalization via Quantile Risk Minimization
  Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf
- 2022 Poster: Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds
  Aritra Mitra · Arman Adibi · George J. Pappas · Hamed Hassani
- 2021 Poster: Model-Based Domain Generalization
  Alexander Robey · George J. Pappas · Hamed Hassani
- 2021 Poster: Adversarial Robustness with Semi-Infinite Constrained Learning
  Alexander Robey · Luiz Chamon · George J. Pappas · Hamed Hassani · Alejandro Ribeiro
- 2021 Poster: Safe Pontryagin Differentiable Programming
  Wanxin Jin · Shaoshuai Mou · George J. Pappas
- 2019 Poster: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2016 Poster: Fast and Provably Good Seedings for k-Means
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2016 Oral: Fast and Provably Good Seedings for k-Means
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2015 Poster: Sampling from Probabilistic Submodular Models
  Alkis Gotovos · Hamed Hassani · Andreas Krause
- 2015 Oral: Sampling from Probabilistic Submodular Models
  Alkis Gotovos · Hamed Hassani · Andreas Krause