Poster
Error Compensated Distributed SGD Can Be Accelerated
Xun Qian · Peter Richtarik · Tong Zhang

Thu Dec 09 12:30 AM -- 02:00 AM (PST)

Gradient compression is a recent and increasingly popular technique for reducing the communication cost in distributed training of large-scale machine learning models. In this work we focus on developing efficient distributed methods that can work for any compressor satisfying a certain contraction property, which includes both unbiased (after appropriate scaling) and biased compressors such as RandK and TopK. Applied naively, gradient compression introduces errors that either slow down convergence or lead to divergence. A popular technique designed to tackle this issue is error compensation/error feedback. Due to the difficulties associated with analyzing biased compressors, it is not known whether gradient compression with error compensation can be combined with acceleration. In this work, we show for the first time that error compensated gradient compression methods can be accelerated. In particular, we propose and study the error compensated loopless Katyusha method, and establish an accelerated linear convergence rate under standard assumptions. We show through numerical experiments that the proposed method converges with substantially fewer communication rounds than previous error compensated algorithms.
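The error-compensation mechanism the abstract refers to can be illustrated with a small sketch (not the paper's accelerated Katyusha method, only the basic error-feedback step it builds on): the worker compresses the step plus the residual carried over from previous rounds with a biased TopK compressor, and stores whatever the compressor dropped for the next round. Names like `ef_sgd_step` are illustrative, not from the paper.

```python
import numpy as np

def topk(v, k):
    """Biased Top-K compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd_step(w, grad, error, lr, k):
    """One error-feedback step: compress (lr * grad + carried error),
    apply only the compressed part, and carry the dropped residual forward."""
    p = lr * grad + error          # add back the error from earlier rounds
    delta = topk(p, k)             # what actually gets communicated
    new_error = p - delta          # what the compressor dropped this round
    return w - delta, new_error

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is w itself.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
err = np.zeros(10)
for _ in range(300):
    w, err = ef_sgd_step(w, grad=w, error=err, lr=0.1, k=3)
```

Despite communicating only 3 of 10 coordinates per round, the iterates still converge, because the residual is eventually re-injected rather than lost; naively zeroing the dropped coordinates instead would discard that information each round.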

Author Information

Xun Qian (KAUST)
Peter Richtarik (KAUST)
Tong Zhang (The Hong Kong University of Science and Technology)
