Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes \emph{global} communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, \emph{asynchronous gossip} model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called \emph{SwarmSGD} still converges in this setting, even if \emph{non-blocking communication}, \emph{quantization}, and \emph{local steps} are all applied \emph{in conjunction}, and even if the node data distributions and underlying graph topology are both \emph{heterogeneous}. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a supercomputing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks.
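The asynchronous gossip dynamics described in the abstract can be illustrated with a small single-process simulation: each node takes a few local SGD steps on its own (heterogeneous) objective, and whenever a randomly chosen pair of nodes interacts, the two exchange quantized models and average them. The sketch below is only a minimal illustration, not the authors' implementation: the toy quadratic objectives, the uniform stochastic quantizer, and all step counts are assumptions chosen for demonstration, and a real deployment would use non-blocking, one-sided communication between worker processes instead of this sequential loop.

```python
# Minimal simulation of asynchronous gossip SGD with local steps and quantized
# pairwise averaging, in the spirit of SwarmSGD. Illustrative sketch only; the
# objective, quantizer, and hyperparameters are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, dim = 8, 10
local_steps = 4          # local SGD steps a node takes between interactions
lr = 0.05
n_interactions = 2000

# Heterogeneous node data: node i minimizes f_i(x) = 0.5 * ||x - b_i||^2 with a
# node-specific target b_i; the global optimum is the mean of the b_i.
targets = rng.normal(size=(n_nodes, dim)) * 2.0
models = rng.normal(size=(n_nodes, dim))

def local_grad(i, x):
    """Stochastic gradient of node i's local objective (with additive noise)."""
    return (x - targets[i]) + 0.1 * rng.normal(size=dim)

def quantize(v, levels=16):
    """Simple stochastic uniform quantizer (stand-in for a generic compressor)."""
    scale = np.max(np.abs(v)) + 1e-12
    q = np.floor(v / scale * levels + rng.random(size=v.shape))  # stochastic rounding
    return q / levels * scale

for _ in range(n_interactions):
    # Asynchronous gossip: a single randomly chosen pair of nodes interacts.
    i, j = rng.choice(n_nodes, size=2, replace=False)
    for node in (i, j):
        for _ in range(local_steps):
            models[node] -= lr * local_grad(node, models[node])
    # Pairwise averaging over quantized models; in a real system this exchange
    # would be non-blocking and one-sided rather than simulated sequentially.
    avg = 0.5 * (quantize(models[i]) + quantize(models[j]))
    models[i], models[j] = avg, avg

opt = targets.mean(axis=0)
print("mean distance to global optimum:",
      np.mean([np.linalg.norm(m - opt) for m in models]))
```

On this toy problem the node models drift toward the global optimum up to the noise introduced by the quantizer, which is the qualitative behavior the paper's analysis makes precise for the general heterogeneous setting.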
Author Information
Giorgi Nadiradze (Institute of Science and Technology Austria)
Amirmojtaba Sabour (Sharif University of Technology)
Peter Davies (University of Surrey)
Shigang Li (Swiss Federal Institute of Technology)
Dan Alistarh (IST Austria & NeuralMagic)
More from the Same Authors
- 2021 : SSSE: Efficiently Erasing Samples from Trained Machine Learning Models »
  Alexandra Peste · Dan Alistarh · Christoph Lampert
- 2022 : ASDL: A Unified Interface for Gradient Preconditioning in PyTorch »
  Kazuki Osawa · Satoki Ishikawa · Rio Yokota · Shigang Li · Torsten Hoefler
- 2022 Poster: Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning »
  Elias Frantar · Dan Alistarh
- 2021 Poster: M-FAC: Efficient Matrix-Free Approximations of Second-Order Information »
  Elias Frantar · Eldar Kurtic · Dan Alistarh
- 2021 Poster: Distributed Principal Component Analysis with Limited Communication »
  Foivos Alimisis · Peter Davies · Bart Vandereycken · Dan Alistarh
- 2021 Poster: Towards Tight Communication Lower Bounds for Distributed Optimisation »
  Janne H. Korhonen · Dan Alistarh
- 2021 Poster: AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks »
  Alexandra Peste · Eugenia Iofinova · Adrian Vladu · Dan Alistarh
- 2019 Poster: Powerset Convolutional Neural Networks »
  Chris Wendler · Markus Püschel · Dan Alistarh
- 2017 Poster: QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding »
  Dan Alistarh · Demjan Grubic · Jerry Li · Ryota Tomioka · Milan Vojnovic
- 2017 Spotlight: Communication-Efficient Stochastic Gradient Descent, with Applications to Neural Networks »
  Dan Alistarh · Demjan Grubic · Jerry Li · Ryota Tomioka · Milan Vojnovic