Spotlight
Thu Dec 06 12:40 PM -- 12:45 PM (PST) @ Room 517 CD
LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen · Georgios Giannakis · Tao Sun · Wotao Yin

This paper presents a new class of gradient methods for distributed machine learning that adaptively skip gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly varying gradients and thereby trigger the reuse of outdated gradients. The resultant gradient-based algorithms are termed Lazily Aggregated Gradient, justifying the acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as that of batch gradient descent in the strongly convex, convex, and nonconvex cases; and ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the number of communication rounds needed to reach a target accuracy is reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant reduction in communication compared to alternatives.
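To make the skipping rule concrete, below is a minimal sketch of the LAG idea in Python. It is an illustrative assumption, not the authors' reference implementation: the synthetic quadratic losses, the parameter names (eta, D, xi), and the exact threshold form are simplifications of the worker-side trigger described in the abstract (a worker uploads a fresh gradient only when it has changed enough relative to recent progress in the iterates; otherwise the server reuses the lagged one).

```python
# Hypothetical sketch of a LAG-style lazily aggregated gradient descent.
# The trigger condition here is a simplified stand-in for the paper's rule.
import numpy as np

rng = np.random.default_rng(0)
M, dim = 5, 10  # number of workers, parameter dimension

# Synthetic per-worker losses f_m(theta) = 0.5 * ||A_m theta - b_m||^2
A = [rng.normal(size=(20, dim)) for _ in range(M)]
b = [rng.normal(size=20) for _ in range(M)]
grad = lambda m, th: A[m].T @ (A[m] @ th - b[m])

theta = np.zeros(dim)
stale = [grad(m, theta) for m in range(M)]  # last gradient each worker sent
recent = []                                 # recent squared parameter changes
eta, D, xi = 1e-3, 10, 0.5                  # step size, window, trigger scale
comms = 0

for k in range(200):
    agg = np.zeros(dim)
    # Skip the upload if the worker's gradient changed little relative to
    # recent progress in theta (simplified version of LAG's condition).
    thresh = (xi / (eta**2 * M**2)) * sum(recent[-D:]) if recent else 0.0
    for m in range(M):
        g = grad(m, theta)
        if np.sum((g - stale[m])**2) > thresh:
            stale[m] = g   # gradient varied enough: communicate it
            comms += 1
        agg += stale[m]    # otherwise reuse the lagged gradient
    new_theta = theta - eta * agg
    recent.append(np.sum((new_theta - theta)**2))
    theta = new_theta

print(f"communication rounds used: {comms} of {200 * M} possible")
```

In this toy run the aggregated step is identical to batch gradient descent whenever all gradients are fresh, and degrades gracefully when stale gradients are reused, which is the intuition behind LAG matching the batch convergence rate while cutting communication.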