A unified variance-reduced accelerated gradient method for convex optimization
Guanghui Lan · Zhize Li · Yi Zhou

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #153

We propose a novel randomized incremental gradient algorithm, namely VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag achieves the optimal rates of convergence for smooth convex finite-sum problems directly, regardless of whether they are strongly convex. Moreover, Varag is the first accelerated randomized incremental gradient method that exploits the strong convexity of the data-fidelity term to achieve the optimal linear rate of convergence. It also attains an optimal linear rate of convergence for a wide class of problems that satisfy only a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems.
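To make the variance-reduction ingredient concrete, below is a minimal NumPy sketch of an SVRG-style variance-reduced gradient estimator on a toy least-squares problem. This is not the Varag algorithm itself: Varag additionally couples such an estimator with Nesterov-type acceleration and the unified step-size policy described above. The problem data, the smoothness bound, and the step size eta here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum problem: f(x) = (1/n) * sum_i (a_i^T x - b_i)^2 / 2
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    """Gradient of the i-th component f_i(x) = (a_i^T x - b_i)^2 / 2."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    """Full gradient, averaged over all n components."""
    return A.T @ (A @ x - b) / n

L = np.linalg.norm(A, 2) ** 2 / n  # smoothness constant of f
eta = 1.0 / (3 * L)                # illustrative step size, not the paper's policy

x = np.zeros(d)
for epoch in range(30):
    # Take a snapshot ("anchor") point and compute its full gradient once per epoch.
    x_anchor = x.copy()
    g_anchor = full_grad(x_anchor)
    for _ in range(n):
        i = rng.integers(n)
        # SVRG-style estimator: per-sample gradient at x, corrected by the same
        # sample's gradient at the anchor plus the stored full gradient there.
        # This is an unbiased estimate of the full gradient with shrinking variance.
        g = grad_i(x, i) - grad_i(x_anchor, i) + g_anchor
        x = x - eta * g

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))

The variance-reduction correction is what allows a constant step size and, once combined with acceleration as in Varag, the optimal convergence rates stated in the abstract.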

Author Information

Guanghui Lan (Georgia Tech)
Zhize Li (Tsinghua University, and KAUST)

Zhize Li has been a Research Scientist at the King Abdullah University of Science and Technology (KAUST) since September 2020. He obtained his PhD degree in Computer Science from Tsinghua University in 2019 (Advisor: Prof. Jian Li). He was a postdoc at KAUST (hosted by Prof. Peter Richtárik), a visiting scholar at Duke University (hosted by Prof. Rong Ge), and a visiting scholar at the Georgia Institute of Technology (hosted by Prof. Guanghui (George) Lan).

Yi Zhou (IBM Almaden Research Center)
