Poster

Communication-efficient Distributed SGD with Sketching

Nikita Ivkin · Daniel Rothchild · Enayat Ullah · Vladimir Braverman · Ion Stoica · Raman Arora

East Exhibition Hall B + C #81

Keywords: [ Applications ] [ Communication- or Memory-Bounded Learning ] [ Large Scale Learning ] [ Algorithms ]


Abstract: Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time. Motivated by the success of sketching methods in sub-linear/streaming algorithms, we introduce Sketched-SGD, an algorithm for carrying out distributed SGD by communicating sketches instead of full gradients. We show that Sketched-SGD has favorable convergence rates on several classes of functions. When considering all communication -- both of gradients and of updated model weights -- Sketched-SGD reduces the amount of communication required compared to other gradient compression methods from $\mathcal{O}(d)$ or $\mathcal{O}(W)$ to $\mathcal{O}(\log d)$, where $d$ is the number of model parameters and $W$ is the number of workers participating in training. We run experiments on a transformer model, an LSTM, and a residual network, demonstrating up to a 40x reduction in total communication cost with no loss in final model performance. We also show experimentally that Sketched-SGD scales to at least 256 workers without increasing communication cost or degrading model performance.
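
To make the core idea concrete, below is a minimal, simplified sketch (not the paper's exact algorithm) of how a sketch-based gradient compressor can be plugged into a distributed SGD aggregation step. It assumes a basic Count Sketch compressor and a single recovery round; the names `CountSketch` and `sketched_sgd_step`, as well as the sketch dimensions and the toy quadratic objective, are illustrative choices, not details from the paper.

```python
import numpy as np

class CountSketch:
    """Minimal Count Sketch: compresses a d-dimensional vector into an r x c table."""
    def __init__(self, d, rows=5, cols=256, seed=0):
        rng = np.random.default_rng(seed)
        self.d, self.rows, self.cols = d, rows, cols
        # Each coordinate hashes to one bucket per row, with a random sign.
        self.bucket = rng.integers(0, cols, size=(rows, d))
        self.sign = rng.choice([-1.0, 1.0], size=(rows, d))

    def sketch(self, vec):
        # Scatter-add signed values into each row's buckets.
        table = np.zeros((self.rows, self.cols))
        for r in range(self.rows):
            np.add.at(table[r], self.bucket[r], self.sign[r] * vec)
        return table

    def estimate(self, table):
        # Point query for every coordinate: median of per-row signed bucket values.
        est = np.stack([self.sign[r] * table[r, self.bucket[r]]
                        for r in range(self.rows)])
        return np.median(est, axis=0)

def sketched_sgd_step(weights, grads, cs, lr=0.1, k=10):
    """One aggregation step: workers send sketches instead of full gradients;
    the server recovers approximate heavy coordinates and applies only those."""
    # Sketches are linear, so summing the workers' sketches equals
    # the sketch of the summed gradient.
    agg_table = sum(cs.sketch(g) for g in grads)
    est = cs.estimate(agg_table) / len(grads)
    # Keep only the k largest-magnitude estimated coordinates.
    topk = np.argsort(np.abs(est))[-k:]
    update = np.zeros_like(weights)
    update[topk] = est[topk]
    return weights - lr * update

# Toy usage (illustrative): 4 workers, quadratic loss 0.5 * ||w - w*||^2 with noisy gradients.
d, num_workers = 1000, 4
rng = np.random.default_rng(1)
w_star = rng.normal(size=d)
w = np.zeros(d)
cs = CountSketch(d, rows=5, cols=200)
for step in range(200):
    grads = [(w - w_star) + 0.01 * rng.normal(size=d) for _ in range(num_workers)]
    w = sketched_sgd_step(w, grads, cs, lr=0.2, k=50)
print("final error:", np.linalg.norm(w - w_star))
```

The point of the sketch is that each worker transmits an r x c table whose size is independent of $d$, and linearity lets the server aggregate compressed gradients directly; the paper's full method additionally handles error accumulation and exact recovery of the selected coordinates, which this toy version omits.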
