We study linear regression from data distributed over a network of agents (with no master node) under high-dimensional scaling, which allows the ambient dimension to grow faster than the sample size. We propose a novel decentralization of the projected gradient algorithm whereby agents iteratively update their local estimates via a "double-mixing" mechanism, which suitably combines averages of the iterates and gradients of neighbouring nodes. Under standard assumptions on the statistical model and network connectivity, the proposed method enjoys global linear convergence up to the statistical precision of the model. This improves on the guarantees of plain DGD algorithms, whose iteration complexity grows undesirably with the ambient dimension. Our technical contribution is a novel convergence analysis that resembles (albeit differs from) algorithmic stability arguments, extended to high dimensions and the distributed setting, which is of independent interest.
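The double-mixing idea described above can be sketched in code. The following is a minimal illustration, not the authors' exact algorithm: each agent mixes both its neighbours' iterates and their gradients through a doubly stochastic weight matrix `W`, takes a gradient step, and projects onto the set of k-sparse vectors via hard thresholding. All names (`decentralized_pg`, `hard_threshold`), the ring topology, and the step size are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, k):
    """Project onto k-sparse vectors by keeping the k largest-magnitude entries."""
    out = np.zeros_like(x)
    top = np.argsort(np.abs(x))[-k:]
    out[top] = x[top]
    return out

def decentralized_pg(A_list, y_list, W, k, step, iters):
    """Sketch of a double-mixing decentralized projected gradient method.

    Each agent mixes neighbours' iterates AND neighbours' gradients
    (the "double mixing"), takes a gradient step, then projects onto
    the k-sparse set.
    """
    m = len(A_list)
    d = A_list[0].shape[1]
    X = np.zeros((m, d))  # row i holds agent i's local estimate
    for _ in range(iters):
        # Local least-squares gradients: (1/n_i) A_i^T (A_i x_i - y_i)
        G = np.stack([A.T @ (A @ x - y) / len(y)
                      for A, y, x in zip(A_list, y_list, X)])
        X_mix = W @ X   # mix iterates across neighbours
        G_mix = W @ G   # mix gradients across neighbours
        X = np.stack([hard_threshold(x - step * g, k)
                      for x, g in zip(X_mix, G_mix)])
    return X

# Tiny synthetic example: 4 agents on a ring, sparse ground truth.
rng = np.random.default_rng(0)
d, k, n_per = 50, 3, 40
theta = np.zeros(d)
theta[:k] = 1.0
A_list = [rng.standard_normal((n_per, d)) for _ in range(4)]
y_list = [A @ theta + 0.01 * rng.standard_normal(n_per) for A in A_list]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic ring weights
X = decentralized_pg(A_list, y_list, W, k, step=0.2, iters=300)
err = np.linalg.norm(X.mean(axis=0) - theta)
```

Note that each local sample size (40) is below the ambient dimension (50), so no single agent can solve its problem alone; the gradient mixing is what lets accuracy improve with the network-wide sample size.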
Author Information
Marie Maros (Purdue University)
Gesualdo Scutari (Purdue University)
More from the Same Authors
- 2022 Spotlight: Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity »
  Dmitry Kovalev · Aleksandr Beznosikov · Ekaterina Borodich · Alexander Gasnikov · Gesualdo Scutari
- 2022 Poster: Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity »
  Dmitry Kovalev · Aleksandr Beznosikov · Ekaterina Borodich · Alexander Gasnikov · Gesualdo Scutari
- 2022 Poster: Acceleration in Distributed Sparse Regression »
  Marie Maros · Gesualdo Scutari
- 2021 Poster: Distributed Saddle-Point Problems Under Data Similarity »
  Aleksandr Beznosikov · Gesualdo Scutari · Alexander Rogozin · Alexander Gasnikov