Poster
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
Zhize Li · Peter Richtarik

Thu Dec 09 12:30 AM -- 02:00 AM (PST) @ Virtual
Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Moreover, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communication rounds (faster convergence), e.g., Nesterov's accelerated gradient descent [31, 32] and Adam [14]. In order to combine the benefits of communication compression and convergence acceleration, we propose a compressed and accelerated gradient method based on ANITA [20] for distributed optimization, which we call CANITA. Our CANITA achieves the first accelerated rate $O\bigg(\sqrt{\Big(1+\sqrt{\frac{\omega^3}{n}}\Big)\frac{L}{\epsilon}} + \omega\big(\frac{1}{\epsilon}\big)^{\frac{1}{3}}\bigg)$, which improves upon the state-of-the-art non-accelerated rate $O\left((1+\frac{\omega}{n})\frac{L}{\epsilon} + \frac{\omega^2+\omega}{\omega+n}\frac{1}{\epsilon}\right)$ of DIANA [12] for distributed general convex problems, where $\epsilon$ is the target error, $L$ is the smoothness parameter of the objective, $n$ is the number of machines/devices, and $\omega$ is the compression parameter (larger $\omega$ means more compression can be applied, and no compression implies $\omega=0$). Our results show that as long as the number of devices $n$ is large (often true in distributed/federated learning), or the compression $\omega$ is not very high, CANITA achieves the faster convergence rate $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$, i.e., the number of communication rounds is $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$ (vs. $O\big(\frac{L}{\epsilon}\big)$ achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds).
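To make the compression parameter $\omega$ concrete: in this line of work a (randomized) compressor $C$ is typically assumed unbiased, $\mathbb{E}[C(x)] = x$, with variance bounded as $\mathbb{E}\|C(x)-x\|^2 \le \omega \|x\|^2$. The classical random-$k$ sparsifier, which keeps $k$ of $d$ coordinates and rescales them by $d/k$, satisfies this with $\omega = d/k - 1$ (so $k = d$, i.e., no compression, gives $\omega = 0$). The sketch below is an illustrative implementation of such a compressor, not code from the paper:

```python
import numpy as np

def rand_k(x, k, rng):
    """Random-k sparsification: keep k coordinates chosen uniformly
    at random and rescale them by d/k so that E[C(x)] = x.
    This is an unbiased omega-compressor with omega = d/k - 1."""
    d = len(x)
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x, dtype=float)
    out[idx] = x[idx] * (d / k)
    return out

rng = np.random.default_rng(0)
x = np.arange(1.0, 11.0)          # d = 10
c = rand_k(x, k=2, rng=rng)       # omega = 10/2 - 1 = 4
```

Each compressed message transmits only $k$ (index, value) pairs instead of $d$ floats, which is the per-round communication saving; the accelerated rate of CANITA then controls how the added variance $\omega$ affects the total number of rounds.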

Author Information

Zhize Li (King Abdullah University of Science and Technology (KAUST))

Zhize Li is a Research Scientist at the King Abdullah University of Science and Technology (KAUST) since September 2020. He obtained his PhD degree in Computer Science from Tsinghua University in 2019 (Advisor: Prof. Jian Li). He was a postdoc at KAUST (Hosted by Prof. Peter Richtárik), a visiting scholar at Duke University (Hosted by Prof. Rong Ge), and a visiting scholar at Georgia Institute of Technology (Hosted by Prof. Guanghui (George) Lan).

Peter Richtarik (KAUST)
