Communication cost is a major bottleneck for the scalability of distributed learning. One approach to reducing this cost is to compress the gradients that are communicated. However, directly compressing the gradient slows convergence, and the resulting algorithm may even diverge when the compression is biased. Recent work addressed this problem for stochastic gradient descent by adding back the compression error from the previous step. This idea was further extended to a class of variance-reduced algorithms, in which the variance of the stochastic gradient is reduced by taking a moving average over all historical gradients. However, our analysis shows that adding back only the previous step's compression error, as done in existing work, does not fully compensate for the compression error. We therefore propose ErrorCompensateX, which uses the compression errors from the previous two steps. We show that ErrorCompensateX achieves the same asymptotic convergence rate as training without compression. Moreover, we provide a unified theoretical analysis framework for this class of variance-reduced algorithms, with or without error compensation.
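For context, the sketch below illustrates the baseline one-step error-feedback mechanism the abstract refers to (adding the previous step's compression error back before compressing), which ErrorCompensateX extends with the error from two steps back. The top-k compressor, step size, and toy quadratic objective are illustrative assumptions; the sketch does not reproduce ErrorCompensateX's exact two-step compensation or the variance-reduced setting analyzed in the paper.

```python
import numpy as np

def topk_compress(v, k):
    """Biased top-k sparsification: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def error_feedback_step(x, grad, e_prev, lr=0.1, k=10):
    """One step of compressed SGD with one-step error feedback.

    The error left behind by the previous compression is added back to the
    gradient before compressing, so nothing is permanently lost.
    (ErrorCompensateX additionally uses the error from two steps back; its
    exact weighting is not reproduced in this sketch.)
    """
    to_send = grad + e_prev            # compensate with last step's error
    msg = topk_compress(to_send, k)    # what would actually be communicated
    e_new = to_send - msg              # error carried over to the next step
    x_new = x - lr * msg               # apply only the compressed update
    return x_new, e_new

# Toy usage on f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.random.randn(100)
e = np.zeros_like(x)
for _ in range(300):
    x, e = error_feedback_step(x, grad=x, e_prev=e)
print("final norm:", np.linalg.norm(x))   # should be small despite compression
```

Even though only 10 of 100 coordinates are communicated per step, the carried-over error keeps the toy objective converging, which is the behavior error compensation is meant to preserve.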
Author Information
Hanlin Tang (University of Rochester)
Yao Li (Michigan State University)
Ji Liu (Kwai Inc.)
Ming Yan (Michigan State University)
More from the Same Authors
- 2022 Poster: Improving Certified Robustness via Statistical Learning with Logical Reasoning
  Zhuolin Yang · Zhikuan Zhao · Boxin Wang · Jiawei Zhang · Linyi Li · Hengzhi Pei · Bojan Karlaš · Ji Liu · Heng Guo · Ce Zhang · Bo Li
- 2021 Poster: TNASP: A Transformer-based NAS Predictor with a Self-evolution Framework
  Shun Lu · Jixiang Li · Jianchao Tan · Sen Yang · Ji Liu
- 2021 Poster: Shifted Chunk Transformer for Spatio-Temporal Representational Learning
  Xuefan Zha · Wentao Zhu · Lv Xun · Sen Yang · Ji Liu
- 2020 Poster: Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free
  Haotao Wang · Tianlong Chen · Shupeng Gui · TingKuei Hu · Ji Liu · Zhangyang Wang
- 2019 Poster: Manifold Denoising by Nonlinear Robust Principal Component Analysis
  He Lyu · Ningyu Sha · Shuyang Qin · Ming Yan · Yuying Xie · Rongrong Wang
- 2019 Poster: Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent
  Wenqing Hu · Chris Junchi Li · Xiangru Lian · Ji Liu · Angela Yuan
- 2019 Poster: Global Sparse Momentum SGD for Pruning Very Deep Neural Networks
  Xiaohan Ding · Guiguang Ding · Xiangxin Zhou · Yuchen Guo · Jungong Han · Ji Liu
- 2019 Poster: LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning
  Yali Du · Lei Han · Meng Fang · Ji Liu · Tianhong Dai · Dacheng Tao
- 2019 Poster: Model Compression with Adversarial Robustness: A Unified Optimization Framework
  Shupeng Gui · Haotao Wang · Haichuan Yang · Chen Yu · Zhangyang Wang · Ji Liu
- 2018 Poster: Communication Compression for Decentralized Training
  Hanlin Tang · Shaoduo Gan · Ce Zhang · Tong Zhang · Ji Liu
- 2018 Poster: Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity
  Conghui Tan · Tong Zhang · Shiqian Ma · Ji Liu
- 2018 Poster: Gradient Sparsification for Communication-Efficient Distributed Optimization
  Jianqiao Wangni · Jialei Wang · Ji Liu · Tong Zhang
- 2017 Poster: Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
  Xiangru Lian · Ce Zhang · Huan Zhang · Cho-Jui Hsieh · Wei Zhang · Ji Liu
- 2017 Oral: Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
  Xiangru Lian · Ce Zhang · Huan Zhang · Cho-Jui Hsieh · Wei Zhang · Ji Liu
- 2016 Poster: Asynchronous Parallel Greedy Coordinate Descent
  Yang You · Xiangru Lian · Ji Liu · Hsiang-Fu Yu · Inderjit Dhillon · James Demmel · Cho-Jui Hsieh
- 2016 Poster: Accelerating Stochastic Composition Optimization
  Mengdi Wang · Ji Liu · Ethan Fang
- 2016 Poster: A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order
  Xiangru Lian · Huan Zhang · Cho-Jui Hsieh · Yijun Huang · Ji Liu
- 2015 Poster: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
  Xiangru Lian · Yijun Huang · Yuncheng Li · Ji Liu
- 2015 Spotlight: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
  Xiangru Lian · Yijun Huang · Yuncheng Li · Ji Liu
- 2014 Poster: Exclusive Feature Learning on Arbitrary Structures via $\ell_{1,2}$-norm
  Deguang Kong · Ryohei Fujimaki · Ji Liu · Feiping Nie · Chris Ding
- 2013 Poster: An Approximate, Efficient LP Solver for LP Rounding
  Srikrishna Sridhar · Stephen Wright · Christopher Re · Ji Liu · Victor Bittorf · Ce Zhang
- 2012 Poster: Regularized Off-Policy TD-Learning
  Bo Liu · Sridhar Mahadevan · Ji Liu
- 2012 Spotlight: Regularized Off-Policy TD-Learning
  Bo Liu · Sridhar Mahadevan · Ji Liu
- 2010 Poster: Multi-Stage Dantzig Selector
  Ji Liu · Peter Wonka · Jieping Ye