
Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni · Jialei Wang · Ji Liu · Tong Zhang

Tue Dec 04 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #158
Modern large-scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures. A key bottleneck is the communication overhead of exchanging information such as stochastic gradients among different workers. In this paper, to reduce the communication cost, we propose a convex optimization formulation that minimizes the coding length of stochastic gradients. The key idea is to randomly drop out coordinates of the stochastic gradient vectors and amplify the remaining coordinates appropriately to ensure that the sparsified gradient is unbiased. To solve the optimal sparsification efficiently, several simple and fast algorithms are proposed for an approximate solution, with a theoretical guarantee on sparseness. Experiments on $\ell_2$ regularized logistic regression, support vector machines, and convolutional neural networks validate our sparsification approaches.
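The core mechanism in the abstract — keep each gradient coordinate with some probability and rescale survivors so the result stays unbiased — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the keep-probabilities here are a simple magnitude-proportional heuristic, whereas the paper obtains them by solving a convex optimization problem; the function names and probability choice are assumptions for illustration.

```python
import random

def sparsify_unbiased(grad, probs, rng):
    """Keep coordinate i with probability probs[i] and rescale the kept
    value by 1/probs[i]; zero out the rest. The output is sparse in
    expectation and an unbiased estimator of grad, since
    E[kept_i] = probs[i] * (grad[i] / probs[i]) = grad[i]."""
    return [g / p if rng.random() < p else 0.0
            for g, p in zip(grad, probs)]

# Illustrative keep-probabilities proportional to coordinate magnitude,
# capped at 1 (an assumed heuristic, not the paper's optimal solution).
grad = [0.9, -0.05, 0.02, 0.4]
gmax = max(abs(g) for g in grad)
probs = [min(1.0, abs(g) / gmax) for g in grad]

# Empirical unbiasedness check: average many sparsified samples and
# compare against the original gradient.
rng = random.Random(0)
n = 100_000
sums = [0.0] * len(grad)
for _ in range(n):
    for i, v in enumerate(sparsify_unbiased(grad, probs, rng)):
        sums[i] += v
avg = [s / n for s in sums]
```

Small-magnitude coordinates are dropped most often, which shortens the coding length of each transmitted gradient, at the cost of higher variance on the rarely kept coordinates.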

Author Information

Jianqiao Wangni (University of Pennsylvania)
Jialei Wang (Two Sigma Investments, University of Chicago)
Ji Liu (University of Rochester, Tencent AI lab)
Tong Zhang (Tencent AI Lab)
