

DGD^2: A Linearly Convergent Distributed Algorithm For High-dimensional Statistical Recovery

Marie Maros · Gesualdo Scutari

Hall J (level 1) #719

Keywords: [ Distributed Optimization ] [ Convex Optimization ] [ linear convergence ] [ high-dimensional statistics ]


We study linear regression from data distributed over a network of agents (with no master node) under high-dimensional scaling, which allows the ambient dimension to grow faster than the sample size. We propose a novel decentralization of the projected gradient algorithm whereby agents iteratively update their local estimates via a “double-mixing” mechanism, which suitably combines averages of the iterates and gradients of neighbouring nodes. Under standard assumptions on the statistical model and the network connectivity, the proposed method enjoys global linear convergence up to the statistical precision of the model. This improves on the guarantees of (plain) DGD algorithms, whose iteration complexity grows undesirably with the ambient dimension. Our main technical contribution is a novel convergence analysis that resembles (though differs from) algorithmic stability arguments, extended to high dimensions and the distributed setting, and is of independent interest.
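As a rough illustration of the kind of update the abstract describes, the sketch below implements a decentralized projected-gradient iteration in which each agent mixes both the iterates and (tracked) gradients of its neighbours before taking a projected step. All details here are assumptions for illustration, not the paper's actual DGD^2 update: the mixing matrix (Metropolis weights on a ring), the step size, and the constraint set (an l2 ball, whereas sparse recovery would typically project onto an l1 ball) are placeholders.

```python
import numpy as np

# Hypothetical sketch of a "double-mixing" decentralized projected-gradient
# step: agents average both iterates and local gradient trackers over the
# network before taking a projected step. The exact DGD^2 update, step size,
# and constraint set are specified in the paper; this is only illustrative.

rng = np.random.default_rng(0)
n_agents, n_samples, dim = 4, 20, 10

x_true = np.zeros(dim)
x_true[:3] = 1.0  # sparse ground truth

# Each agent holds a local linear-regression dataset (A_i, b_i).
A = [rng.standard_normal((n_samples, dim)) for _ in range(n_agents)]
b = [A_i @ x_true + 0.01 * rng.standard_normal(n_samples) for A_i in A]

# Doubly stochastic mixing matrix for a ring network (uniform weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

def grad(i, x):
    # Local least-squares gradient at agent i.
    return A[i].T @ (A[i] @ x - b[i]) / n_samples

def project(x, radius=5.0):
    # Placeholder Euclidean projection onto an l2 ball; sparse recovery
    # would instead project onto, e.g., an l1 ball.
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

alpha = 0.1
X = np.zeros((n_agents, dim))                            # local iterates
G = np.stack([grad(i, X[i]) for i in range(n_agents)])   # gradient trackers

for _ in range(300):
    X_mix = W @ X    # first mixing: average neighbours' iterates
    G_mix = W @ G    # second mixing: average neighbours' gradient trackers
    X_new = np.stack([project(X_mix[i] - alpha * G_mix[i])
                      for i in range(n_agents)])
    # Tracking correction so G follows the network-average gradient.
    G = G_mix + np.stack([grad(i, X_new[i]) - grad(i, X[i])
                          for i in range(n_agents)])
    X = X_new

err = max(np.linalg.norm(X[i] - x_true) for i in range(n_agents))
```

On this toy problem the local estimates at all agents converge to a common point near the (sparse) ground truth up to the noise level, reflecting the "up to statistical precision" guarantee described above.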
