The incremental aggregated gradient algorithm is popular in network optimization and machine learning research. However, existing convergence results require the objective function to be strongly convex, and the known convergence rates are limited to linear convergence. Moreover, due to the proof techniques used, the stepsize is restricted by the strong convexity constant, which can force the stepsize to be very small when that constant is small.
In this paper, we propose a general proximal incremental aggregated gradient algorithm, which subsumes various existing algorithms, including the basic incremental aggregated gradient method. Even for this general scheme, we prove new and improved convergence results. The novel contributions, which have not appeared in previous literature, include: a general scheme; analysis in the nonconvex setting; sublinear convergence rates for the function values; much larger stepsizes that still guarantee convergence; convergence in the presence of noise; and a line search strategy for the proximal incremental aggregated gradient algorithm, together with its convergence analysis.
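The abstract does not spell out the iteration, but the proximal incremental aggregated gradient (PIAG) scheme it generalizes is standard: at each step, only one component gradient is refreshed while the others remain stale, and a proximal step is taken with the aggregated gradient. Below is a minimal illustrative sketch in Python/NumPy; the names `piag` and `soft_threshold`, the cyclic component order, the l1 regularizer, and all parameter values are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (example choice of g)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def piag(grads, x0, stepsize, n_iters, prox=soft_threshold, reg=0.1):
    """Minimal PIAG sketch for minimizing sum_i f_i(x) + g(x).

    grads: list of component gradient functions for the f_i.
    At step k, one component gradient is refreshed; the aggregated
    gradient keeps stale gradients for all other components.
    """
    n = len(grads)
    x = x0.copy()
    # table of (possibly stale) component gradients, initialized at x0
    table = [g(x0) for g in grads]
    agg = np.sum(table, axis=0)
    for k in range(n_iters):
        i = k % n                    # cyclic component selection (one choice)
        new_grad = grads[i](x)       # refresh a single component gradient
        agg += new_grad - table[i]   # update the aggregate cheaply
        table[i] = new_grad
        # proximal step using the aggregated, partly stale gradient
        x = prox(x - stepsize * agg, stepsize * reg)
    return x

# Hypothetical usage: lasso with f_i(x) = 0.5*(a_i^T x - b_i)^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(50)]
x_hat = piag(grads, np.zeros(10), stepsize=1e-3, n_iters=5000)
```

In classical analyses the admissible stepsize shrinks with the gradient delay and, under strong convexity, with the strong convexity constant; one of the paper's claims is that convergence can be guaranteed with much larger stepsizes than such bounds suggest.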
Author Information
Tao Sun (National University of Defense Technology)
College of Science, National University of Defense Technology, PRC.
Yuejiao Sun (University of California, Los Angeles)
Dongsheng Li (School of Computer Science, National University of Defense Technology)
Qing Liao (Harbin Institute of Technology (Shenzhen))
More from the Same Authors
- 2021 Spotlight: Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems
  Tianyi Chen · Yuejiao Sun · Wotao Yin
- 2022 Poster: Finite-Time Analysis of Adaptive Temporal Difference Learning with Deep Neural Networks
  Tao Sun · Dongsheng Li · Bao Wang
- 2021 Poster: Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems
  Tianyi Chen · Yuejiao Sun · Wotao Yin
- 2018 Poster: LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
  Tianyi Chen · Georgios Giannakis · Tao Sun · Wotao Yin
- 2018 Spotlight: LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
  Tianyi Chen · Georgios Giannakis · Tao Sun · Wotao Yin
- 2018 Poster: On Markov Chain Gradient Descent
  Tao Sun · Yuejiao Sun · Wotao Yin
- 2017 Poster: Asynchronous Coordinate Descent under More Realistic Assumptions
  Tao Sun · Robert Hannah · Wotao Yin