
LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen · Georgios Giannakis · Tao Sun · Wotao Yin

Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 210 #8

This paper presents a new class of gradient methods for distributed machine learning that adaptively skip gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly-varying gradients and, therefore, trigger the reuse of outdated gradients. The resultant gradient-based algorithms are termed Lazily Aggregated Gradient --- justifying the acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as that of batch gradient descent in the strongly-convex, convex, and nonconvex cases; and ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the communication rounds needed to achieve a targeted accuracy are reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared to alternatives.
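The lazy-aggregation idea can be illustrated with a minimal sketch: each worker computes its local gradient but uploads it only when it has drifted noticeably since the last upload, and the server otherwise reuses the stale copy. The trigger below compares gradient drift against the most recent iterate change; the problem setup, step size, and the constant `xi` are illustrative assumptions, not the paper's actual experiments or its precise trigger rule (which sums iterate changes over a window).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares objective split across M workers (illustrative only;
# sizes and constants are not taken from the paper).
M, n, d = 5, 20, 3
A = [rng.normal(size=(n, d)) for _ in range(M)]
b = [rng.normal(size=n) for _ in range(M)]

def local_grad(m, theta):
    """Gradient of worker m's least-squares loss."""
    return A[m].T @ (A[m] @ theta - b[m]) / n

eta = 0.05            # step size, hand-tuned for this toy problem
xi = 0.5              # skipping aggressiveness (hypothetical constant)
theta = np.zeros(d)
theta_prev = theta.copy()
stale = [local_grad(m, theta) for m in range(M)]  # last-uploaded gradients
uploads = M           # the initial round communicates every gradient

for k in range(300):
    agg = np.zeros(d)
    for m in range(M):
        # Worker computes its gradient locally (saving communication, not
        # computation, as in the worker-side variant of lazy aggregation).
        g = local_grad(m, theta)
        # Simplified trigger: upload a fresh gradient only when its drift
        # since the last upload exceeds what the latest iterate change
        # suggests; otherwise the server reuses the stale copy.
        drift = np.linalg.norm(g - stale[m]) ** 2
        budget = (xi / (eta**2 * M**2)) * np.linalg.norm(theta - theta_prev) ** 2
        if drift > budget:
            stale[m] = g
            uploads += 1
        agg += stale[m]
    theta_prev = theta.copy()
    theta = theta - eta * agg  # gradient step with lazily aggregated gradients

print("uploads used:", uploads, "vs batch GD:", M * 300)
```

On this toy problem the iterates still decrease the full gradient norm while many per-round uploads are skipped, which is the communication saving the abstract describes.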

Author Information

Tianyi Chen (University of Minnesota)
Georgios Giannakis (University of Minnesota)
Tao Sun (National University of Defense Technology)

College of Science, National University of Defense Technology, PRC.

Wotao Yin (University of California, Los Angeles)
