Poster
Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training
Youjie Li · Mingchao Yu · Songze Li · Salman Avestimehr · Nam Sung Kim · Alex Schwing

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #132

Distributed training of deep nets is an important technique for addressing present-day computing challenges such as memory consumption and computational demand. Classical distributed approaches, whether synchronous or asynchronous, are based on the parameter-server architecture: worker nodes compute gradients and communicate them to a parameter server, which returns the updated parameters. More recently, distributed training based on AllReduce operations has gained popularity as well. While many of these approaches seem appealing, little is reported about their wall-clock training-time improvements. In this paper, we carefully analyze the AllReduce-based setup, propose timing models that account for network latency, bandwidth, cluster size, and compute time, and demonstrate that pipelined training with a pipeline of width two combines the best of both synchronous and asynchronous training. Specifically, on a four-node GPU cluster we show wall-clock training-time improvements of up to 5.4x compared to conventional approaches.
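
The core pipelining idea can be illustrated with a short sketch. Below is a minimal single-process Python sketch (not the authors' implementation) of pipelined SGD with a pipeline of width two, using a toy least-squares problem in place of a deep net; `simulated_allreduce` is a hypothetical stand-in for the AllReduce collective, which on a real cluster would average gradients across workers.

```python
# Minimal sketch (assumption: not the paper's code) of pipelined SGD with a
# pipeline of width two. A toy least-squares model stands in for a deep net,
# and `simulated_allreduce` is a hypothetical stub for the AllReduce collective.
import time
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def simulated_allreduce(grad):
    # Stand-in for AllReduce: sleep to mimic network latency; with n workers
    # this would return the element-wise average of all local gradients.
    time.sleep(0.001)
    return grad


def gradient(w, X, y):
    # Gradient of the least-squares loss 0.5 * ||Xw - y||^2.
    return X.T @ (X @ w - y)


rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))
y = X @ rng.standard_normal(8)

w = np.zeros(8)
lr = 1e-3
pool = ThreadPoolExecutor(max_workers=1)

# Pipeline of width two: launch the AllReduce of the freshly computed gradient,
# then apply the *previous* reduced gradient. The gradient computed in each
# iteration thus overlaps with the in-flight AllReduce from the last iteration,
# at the price of parameters that are stale by one update.
pending = None
for step in range(500):
    g_local = gradient(w, X, y)                            # compute (stale w)
    in_flight = pool.submit(simulated_allreduce, g_local)  # communicate
    if pending is not None:
        w -= lr * pending.result()     # apply previous reduced gradient
    pending = in_flight

w -= lr * pending.result()             # drain the pipeline
print("final loss:", 0.5 * np.sum((X @ w - y) ** 2))
```

In this sketch, the update at step t uses a gradient evaluated on parameters one update behind, which is exactly the bounded staleness a width-two pipeline trades for overlapping communication with computation.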

Author Information

Youjie Li (University of Illinois at Urbana-Champaign)
Mingchao Yu (University of Southern California)
Songze Li (University of Southern California)
Salman Avestimehr (University of Southern California)
Nam Sung Kim (University of Illinois at Urbana-Champaign)
Alex Schwing (University of Illinois at Urbana-Champaign)
