Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks
Xinmeng Huang · Kun Yuan
Event URL: https://openreview.net/forum?id=kV2j-XTGAL0

Decentralized optimization over time-varying networks is an emerging paradigm in machine learning. It substantially reduces communication overhead in large-scale deep training and is more robust in wireless scenarios, especially when nodes are moving. Federated learning can also be regarded as decentralized optimization with a time-varying communication pattern that alternates between global averaging and local updates. While numerous studies have clarified its theoretical limits and developed efficient algorithms, the optimal complexity of non-convex decentralized stochastic optimization over time-varying networks remains unknown. The main difficulties lie in how to gauge the effectiveness of transmitting messages between two nodes via time-varying communication, and how to establish the lower bound when the network size is fixed (a prerequisite in stochastic optimization). This paper resolves these challenges and establishes the first lower bound on the complexity. We also develop a new decentralized algorithm that nearly attains the lower bound, showing both the tightness of the lower bound and the near-optimality of our algorithm.
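To make the setting concrete, the following is a minimal toy sketch of decentralized stochastic gradient descent over a time-varying network, not the paper's algorithm: each node takes a local stochastic gradient step on its own quadratic objective and then mixes its iterate with neighbors through a doubly stochastic gossip matrix whose topology changes over time (here, hypothetically alternating between global averaging and a ring).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-node objectives f_i(x) = 0.5 * ||x - t_i||^2; the global
# minimizer of the average objective is the mean of the targets t_i.
n_nodes, dim, steps, lr = 4, 3, 200, 0.1
targets = rng.normal(size=(n_nodes, dim))
x = np.zeros((n_nodes, dim))  # one local parameter vector per node

def gossip_matrix(t):
    """Time-varying mixing matrix: alternate between full averaging
    (as in a federated global-averaging round) and a ring topology.
    Both matrices are doubly stochastic, so averaging is preserved."""
    if t % 2 == 0:
        return np.full((n_nodes, n_nodes), 1.0 / n_nodes)
    W = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        W[i, i] = 0.5
        W[i, (i + 1) % n_nodes] = 0.25
        W[i, (i - 1) % n_nodes] = 0.25
    return W

for t in range(steps):
    grads = x - targets                       # exact local gradients ...
    grads += 0.01 * rng.normal(size=grads.shape)  # ... plus noise to mimic stochasticity
    x = gossip_matrix(t) @ (x - lr * grads)   # local step, then time-varying mixing

consensus = x.mean(axis=0)  # nodes end up near the global minimizer
```

The doubly stochastic mixing matrices keep the network-wide average evolving like centralized SGD, while the changing topology models the time-varying communication pattern described above.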

Author Information

Xinmeng Huang (University of Pennsylvania)
Kun Yuan (Alibaba Inc.)