Poster
Practical Low-Rank Communication Compression in Decentralized Deep Learning
Thijs Vogels · Sai Praneeth Karimireddy · Martin Jaggi

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1131

Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models. However, algorithms for decentralized training with compressed communication over arbitrary connected networks have been more complicated, requiring additional memory and hyperparameters. We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors. We prove that our method does not require any additional hyperparameters, converges faster than prior methods, and is asymptotically independent of both the network and the compression. Inspired by the PowerSGD algorithm for centralized deep learning, we execute power iteration steps on model differences to maximize the information transferred per bit. Out of the box, these compressors perform on par with state-of-the-art tuned compression algorithms in a series of deep learning benchmarks.
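
To make the idea concrete, the sketch below shows one power-iteration step that approximates a 2-D parameter difference by low-rank factors, so that only the small factors need to be communicated between neighboring workers. This is an illustrative PyTorch example, not the authors' implementation; the function name, tensor shapes, and rank are hypothetical choices for the demonstration.

```python
import torch

def low_rank_compress(delta, q_prev=None, rank=1):
    """One power-iteration step on a 2-D difference matrix `delta` (m x n).

    Returns factors p (m x rank) and q (n x rank); transmitting p and q
    instead of `delta` reduces communication from m*n to (m + n) * rank values.
    """
    m, n = delta.shape
    if q_prev is None:
        # Without a warm start, initialize the right factor randomly.
        q_prev = torch.randn(n, rank)
    # Power iteration: left factor, orthonormalization, then right factor.
    p = delta @ q_prev
    p, _ = torch.linalg.qr(p)      # orthonormalize the columns of p
    q = delta.t() @ p
    return p, q                    # low-rank approximation: p @ q.T

# Usage: a worker compresses the difference between its parameters and a
# neighbor's parameters, then exchanges only the factors p and q.
my_params = torch.randn(256, 128)
neighbor_params = torch.randn(256, 128)
p, q = low_rank_compress(my_params - neighbor_params, rank=2)
approx_delta = p @ q.t()
```

Reusing the previous step's right factor as `q_prev` (a warm start, as in PowerSGD) lets repeated low-rank steps track the difference more accurately over time without extra hyperparameters.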

Author Information

Thijs Vogels (EPFL)
Sai Praneeth Karimireddy (EPFL)

I am a second-year PhD student working on convex and non-convex optimization with Prof. Martin Jaggi. My focus is on designing faster and more scalable optimization algorithms for machine learning. Some of my preliminary results and problems I am currently working on:
1. Robust accelerated algorithms: Nesterov acceleration modified to be robust to noise.
2. Faster algorithms that take second-order information about the function into account.
3. An $O(1/t^2)$-rate *affine invariant* algorithm for constrained optimization.
4. A Frank-Wolfe algorithm for non-smooth functions using 'noisy smoothing'.

Martin Jaggi (EPFL)
