Recent advances in distributed optimization and learning have shown that communication compression is one of the most effective means of reducing communication. While many convergence rates have been established for algorithms with compressed communication, a matching lower bound has been missing. Analyses of algorithms with communication compression have identified two abstract properties that guarantee convergence: the unbiased property and the contractive property. Compression can be applied either unidirectionally (compressing messages from workers to the server) or bidirectionally. In the smooth and non-convex stochastic regime, this paper establishes a lower bound for distributed algorithms using either unbiased or contractive compressors, whether applied unidirectionally or bidirectionally. To close the gap between this lower bound and the best existing upper bound, we further propose an algorithm, NEOLITHIC, that nearly attains our lower bound (up to a logarithmic factor) under mild conditions. Our results also show that applying contractive compressors bidirectionally can yield iterative methods that converge as fast as those using unbiased compressors unidirectionally. We report experimental results that validate our findings.
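The two compressor properties named in the abstract can be illustrated with standard examples from the compression literature (these specific operators are illustrative choices, not necessarily the ones analyzed in the paper): rescaled random-k sparsification, which satisfies the unbiased property E[C(x)] = x, and top-k sparsification, which satisfies the contractive property E‖C(x) − x‖² ≤ (1 − k/d)‖x‖². A minimal NumPy sketch:

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased compressor: keep k random entries, rescaled by d/k
    so that E[C(x)] = x."""
    d = x.size
    mask = np.zeros(d)
    idx = rng.choice(d, size=k, replace=False)
    mask[idx] = 1.0
    return (d / k) * mask * x  # rescaling restores unbiasedness

def top_k(x, k):
    """Contractive compressor: keep the k largest-magnitude entries.
    Satisfies ||C(x) - x||^2 <= (1 - k/d) ||x||^2 deterministically."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out
```

In a worker-to-server ("unidirectional") scheme, each worker would send `rand_k(grad, k, rng)` or `top_k(grad, k)` instead of the full gradient; a "bidirectional" scheme compresses the server's broadcast as well. Note that `top_k` is biased (its expectation is not x), which is why contractive compressors typically require error-feedback-style corrections in analysis.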
Author Information
Xinmeng Huang (University of Pennsylvania)
Yiming Chen (Alibaba Group)
Wotao Yin (Alibaba Group US)
Kun Yuan (Peking University)
More from the Same Authors
- 2022: Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks
  Xinmeng Huang · Kun Yuan
- 2022: Poster Session 1
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · Xinwei Zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2022 Poster: Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints
  Xinmeng Huang · Donghwan Lee · Edgar Dobriban · Hamed Hassani
- 2022 Poster: Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization
  Kun Yuan · Xinmeng Huang · Yiming Chen · Xiaohan Zhang · Yingya Zhang · Pan Pan
- 2022 Poster: Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate
  Zhuoqing Song · Weijian Li · Kexin Jin · Lei Shi · Ming Yan · Wotao Yin · Kun Yuan
- 2022 Poster: FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
  Tian Zhou · Ziqing Ma · Xue Wang · Qingsong Wen · Liang Sun · Tao Yao · Wotao Yin · Rong Jin
- 2021 Poster: Exponential Graph is Provably Efficient for Decentralized Deep Training
  Bicheng Ying · Kun Yuan · Yiming Chen · Hanbin Hu · Pan Pan · Wotao Yin
- 2021 Poster: An Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders
  Xinmeng Huang · Kun Yuan · Xianghui Mao · Wotao Yin