

Poster

Sketching for Distributed Deep Learning: A Sharper Analysis

Mayank Shrivastava · Berivan Isik · Qiaobo Li · Sanmi Koyejo · Arindam Banerjee

West Ballroom A-D #6004
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The high communication cost between the server and the clients is a significant bottleneck in scaling distributed learning for overparameterized deep models. One popular approach for reducing this communication overhead is randomized sketching. However, existing theoretical analyses for sketching-based distributed learning (sketch-DL) either incur a prohibitive dependence on the ambient dimension or require additional restrictive assumptions such as the presence of heavy hitters. Despite these pessimistic analyses, empirical evidence suggests that sketch-DL is competitive with its uncompressed counterpart, motivating a sharper analysis. In this work, we introduce a sharper, ambient dimension-independent convergence analysis for sketch-DL using the second-order geometry specified by the loss Hessian. Our results imply ambient dimension-independent communication complexity for sketch-DL. We present empirical results on both the loss Hessian and the overall accuracy of sketch-DL supporting our theoretical results. Taken together, our results provide theoretical justification for the observed empirical success of sketch-DL.
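To make the sketch-DL setup concrete, below is a minimal, illustrative sketch of gradient compression in distributed learning using a Count Sketch compressor. This is only an assumed example of randomized sketching, not the specific construction or analysis from the paper: clients compress their local gradients into a low-dimensional sketch, the server averages the sketches (sketching is linear), and then desketches once to recover an estimate of the average gradient.

```python
# Illustrative Count Sketch gradient compression for distributed learning.
# Assumption: a standard Count Sketch (shared hash buckets and random signs),
# used here only to show how sketching reduces per-round communication from
# the ambient dimension d to a sketch dimension k << d.
import numpy as np

rng = np.random.default_rng(0)

d = 10_000        # ambient (model) dimension
k = 500           # sketch dimension, k << d
num_clients = 8

# Shared randomness: each coordinate gets a hash bucket and a random sign,
# identical on all clients and the server.
buckets = rng.integers(0, k, size=d)
signs = rng.choice([-1.0, 1.0], size=d)

def sketch(g):
    """Compress a d-dimensional gradient into a k-dimensional Count Sketch."""
    s = np.zeros(k)
    np.add.at(s, buckets, signs * g)  # accumulate signed coordinates into buckets
    return s

def unsketch(s):
    """Coordinate-wise (unbiased) estimate of the original gradient from its sketch."""
    return signs * s[buckets]

# Each client sends only its k-dimensional sketch; the server averages the
# sketches and desketches once to estimate the average gradient.
client_grads = [rng.standard_normal(d) for _ in range(num_clients)]
avg_sketch = np.mean([sketch(g) for g in client_grads], axis=0)
recovered = unsketch(avg_sketch)

true_avg = np.mean(client_grads, axis=0)
print("relative L2 error:", np.linalg.norm(recovered - true_avg) / np.linalg.norm(true_avg))
```

In this sketch, each round communicates k numbers per client instead of d; the question the paper addresses is how the resulting error affects convergence, and whether that dependence can be made free of the ambient dimension d.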
