In personalized federated learning, each member of a potentially large set of agents aims to train a model that minimizes its loss function averaged over its local data distribution. We study this problem through the lens of stochastic optimization, focusing on a scenario with a large number of agents, each holding very few samples from its local data distribution. Specifically, we prove novel matching lower and upper bounds on the number of samples required from all agents to approximately minimize the generalization error of a fixed agent. We provide strategies matching these lower bounds, based on a gradient-filtering approach: given prior knowledge of some notion of distance between local data distributions, agents filter and aggregate the stochastic gradients received from other agents in order to achieve an optimal bias-variance trade-off. Finally, we quantify the impact of using rough estimates of the distances between agents' local distributions, computed from a very small number of local samples.
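The gradient-filtering idea can be illustrated with a short sketch. The code below is not the paper's algorithm; it is a minimal illustration under assumed names (`filtered_gradient_step`, `dist_threshold`) of a simple thresholding rule: an agent keeps only stochastic gradients coming from agents whose local distribution lies within a chosen distance, and averages them before taking a step. A larger threshold averages more gradients (lower variance) at the cost of mixing in more dissimilar distributions (higher bias).

```python
import numpy as np

def filtered_gradient_step(theta, agent_grads, distances, agent_id,
                           dist_threshold, lr):
    """One SGD step for `agent_id` using a simple gradient-filtering rule.

    theta          : (dim,) current model parameters of the agent.
    agent_grads    : (n_agents, dim) stochastic gradients, one per agent,
                     each evaluated at the current model `theta`.
    distances      : (n_agents, n_agents) matrix of (estimated) distances
                     between local data distributions.
    dist_threshold : only gradients from agents whose distribution is within
                     this distance are aggregated; this is the bias-variance
                     knob (larger threshold -> lower variance, higher bias).
    lr             : step size.
    """
    # Keep only agents whose local distribution is close enough to ours
    # (the agent itself has distance 0 and is always kept).
    close = distances[agent_id] <= dist_threshold
    # Uniform average over the retained gradients (variance reduction).
    aggregated = agent_grads[close].mean(axis=0)
    return theta - lr * aggregated

# Hypothetical usage: 100 agents, 10-dimensional model, synthetic inputs.
rng = np.random.default_rng(0)
theta = np.zeros(10)
grads = rng.normal(size=(100, 10))
dists = np.abs(rng.normal(size=(100, 100)))
np.fill_diagonal(dists, 0.0)
theta = filtered_gradient_step(theta, grads, dists, agent_id=0,
                               dist_threshold=0.5, lr=0.1)
```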
Author Information
Mathieu Even (Inria)
Laurent Massoulié (Inria)
Kevin Scaman (Inria Paris)
More from the Same Authors
- 2022: Asynchronous speedup in decentralized optimization
  Mathieu Even · Hadrien Hendrikx · Laurent Massoulié
- 2022: Periodic Signal Recovery with Regularized Sine Neural Networks
  David A. R. Robin · Kevin Scaman · Marc Lelarge
- 2022 Poster: Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays
  Konstantin Mishchenko · Francis Bach · Mathieu Even · Blake Woodworth
- 2022 Poster: Convergence beyond the over-parameterized regime using Rayleigh quotients
  David A. R. Robin · Kevin Scaman · Marc Lelarge
- 2022 Poster: Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging
  Edwige Cyffers · Mathieu Even · Aurélien Bellet · Laurent Massoulié
- 2021 Oral: Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms
  Mathieu Even · Raphaël Berthier · Francis Bach · Nicolas Flammarion · Hadrien Hendrikx · Pierre Gaillard · Laurent Massoulié · Adrien Taylor
- 2021 Poster: Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms
  Mathieu Even · Raphaël Berthier · Francis Bach · Nicolas Flammarion · Hadrien Hendrikx · Pierre Gaillard · Laurent Massoulié · Adrien Taylor
- 2020 Poster: Dual-Free Stochastic Decentralized Optimization with Variance Reduction
  Hadrien Hendrikx · Francis Bach · Laurent Massoulié
- 2019 Poster: An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums
  Hadrien Hendrikx · Francis Bach · Laurent Massoulié
- 2018 Poster: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee
- 2018 Oral: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee