Poster
Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate
Zhuoqing Song · Weijian Li · Kexin Jin · Lei Shi · Ming Yan · Wotao Yin · Kun Yuan
Decentralized optimization is an emerging paradigm in distributed learning in which agents achieve network-wide solutions by peer-to-peer communication without a central server. Since communication tends to be slower than computation, when each agent communicates with only a few neighboring agents per iteration, it can complete iterations faster than when communicating with many agents or a central server. However, the total number of iterations needed to reach a network-wide solution is affected by how quickly the agents' information is "mixed" by communication. We found that popular communication topologies either have large degrees (such as stars and complete graphs) or are ineffective at mixing information (such as rings and grids). To address this problem, we propose a new family of topologies, EquiTopo, which has an (almost) constant degree and a network-size-independent consensus rate, the quantity used to measure mixing efficiency. In the proposed family, EquiStatic has a degree of $\Theta(\ln(n))$, where $n$ is the network size, while a series of time-varying one-peer topologies, EquiDyn, has a constant degree of 1. We generate EquiDyn through a certain random sampling procedure. Both achieve an $n$-independent consensus rate. We apply them to decentralized SGD and decentralized gradient tracking and obtain faster communication and better convergence, both theoretically and empirically. Our code is implemented with BlueFog and is available at https://github.com/kexinjinnn/EquiTopo.
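To make the one-peer idea concrete, below is a minimal sketch of a degree-1 gossip averaging round, where agents are paired by a random matching and each pair averages its iterates. This is an illustrative assumption for intuition only: the function `one_peer_gossip_round` and the random-permutation pairing rule are hypothetical and do not reproduce the paper's actual EquiDyn sampling procedure.

```python
import numpy as np

def one_peer_gossip_round(x, rng):
    """One round of degree-1 gossip averaging (illustrative, not EquiDyn).

    x   : (n, d) array; row i holds agent i's current vector.
    rng : numpy Generator used to sample this round's pairing.

    Each agent talks to exactly one peer: agents are paired via a
    random matching, and each pair averages its two vectors. Each
    pairwise average preserves the sum, so the network-wide mean
    is invariant across rounds.
    """
    n = x.shape[0]
    perm = rng.permutation(n)
    x_new = x.copy()
    # Pair consecutive entries of the permutation; if n is odd,
    # the last agent stays idle this round.
    for k in range(0, n - 1, 2):
        i, j = perm[k], perm[k + 1]
        avg = 0.5 * (x[i] + x[j])
        x_new[i] = avg
        x_new[j] = avg
    return x_new

# Tiny demo: repeated one-peer rounds drive all agents toward consensus.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
mean = x.mean(axis=0)
for _ in range(30):
    x = one_peer_gossip_round(x, rng)
print(np.linalg.norm(x - mean))  # small after enough rounds
```

Under this toy pairing, the consensus error $\max_i \|x_i - \bar{x}\|$ shrinks geometrically in expectation; the paper's contribution is a one-peer construction whose contraction factor does not degrade as $n$ grows.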
Author Information
Zhuoqing Song (Fudan University)
Weijian Li (Damo Academy, Alibaba Group)
Kexin Jin (Princeton University)
Lei Shi (Fudan University)
Ming Yan (The Chinese University of Hong Kong, Shenzhen)
Wotao Yin (Alibaba Group US)
Kun Yuan (Peking University)
More from the Same Authors
- 2022 Poster: Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization
  Kun Yuan · Xinmeng Huang · Yiming Chen · Xiaohan Zhang · Yingya Zhang · Pan Pan
- 2022 Poster: FedRolex: Model-Heterogeneous Federated Learning with Rolling Sub-Model Extraction
  Samiul Alam · Luyang Liu · Ming Yan · Mi Zhang
- 2022 Poster: Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression
  Xinmeng Huang · Yiming Chen · Wotao Yin · Kun Yuan
- 2022 Poster: FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
  Tian Zhou · Ziqing MA · xue wang · Qingsong Wen · Liang Sun · Tao Yao · Wotao Yin · Rong Jin