Poster
Communication-efficient Distributed SGD with Sketching
Nikita Ivkin · Daniel Rothchild · Enayat Ullah · Vladimir Braverman · Ion Stoica · Raman Arora

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #81
Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time. Motivated by the success of sketching methods in sub-linear/streaming algorithms, we introduce Sketched-SGD, an algorithm for carrying out distributed SGD by communicating sketches instead of full gradients. We show that Sketched-SGD has favorable convergence rates on several classes of functions. When considering all communication -- both of gradients and of updated model weights -- Sketched-SGD reduces the amount of communication required compared to other gradient compression methods from $\mathcal{O}(d)$ or $\mathcal{O}(W)$ to $\mathcal{O}(\log d)$, where $d$ is the number of model parameters and $W$ is the number of workers participating in training. We run experiments on a transformer model, an LSTM, and a residual network, demonstrating up to a 40x reduction in total communication cost with no loss in final model performance. We also show experimentally that Sketched-SGD scales to at least 256 workers without increasing communication cost or degrading model performance.
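To make the idea of "communicating sketches instead of full gradients" concrete, below is a minimal, illustrative Python sketch of Count Sketch-style gradient compression: each worker hashes its gradient into a small table, and large coordinates can be approximately recovered from the table via a median estimate. The function names (`count_sketch`, `estimate`) and all parameter values are hypothetical choices for this example, not the paper's implementation; the actual Sketched-SGD algorithm and its recovery procedure are specified in the paper.

```python
import numpy as np

def count_sketch(vec, rows=5, cols=256, seed=0):
    """Compress a gradient vector into a small rows x cols Count Sketch table."""
    rng = np.random.default_rng(seed)
    d = vec.shape[0]
    # Per-row hash buckets and random signs (shared across workers via the seed).
    buckets = rng.integers(0, cols, size=(rows, d))
    signs = rng.choice([-1.0, 1.0], size=(rows, d))
    table = np.zeros((rows, cols))
    for r in range(rows):
        np.add.at(table[r], buckets[r], signs[r] * vec)
    return table, buckets, signs

def estimate(table, buckets, signs):
    """Estimate each coordinate of the original vector (median over sketch rows)."""
    rows = table.shape[0]
    ests = np.stack([signs[r] * table[r, buckets[r]] for r in range(rows)])
    return np.median(ests, axis=0)

# Toy usage: sketch a gradient with a few heavy coordinates, then recover them.
d = 10_000
grad = 0.01 * np.random.default_rng(1).standard_normal(d)
grad[[3, 42, 777]] = [5.0, -4.0, 3.0]

table, buckets, signs = count_sketch(grad)
est = estimate(table, buckets, signs)
top_k = np.sort(np.argsort(np.abs(est))[-3:])
print(top_k)  # expected to contain 3, 42, 777
```

The key point the example illustrates is the size of what gets communicated: the `rows x cols` table is far smaller than the `d`-dimensional gradient, which is the source of the communication savings described in the abstract.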

Author Information

Nikita Ivkin (Amazon)
Daniel Rothchild (UC Berkeley)
Enayat Ullah (Johns Hopkins University)
Vladimir Braverman (Johns Hopkins University)
Ion Stoica (UC Berkeley)
Raman Arora (Johns Hopkins University)
