

Poster

Tight Dimension Independent Lower Bound on the Expected Convergence Rate for Diminishing Step Sizes in SGD

Ha Nguyen · Lam Nguyen · Marten van Dijk

East Exhibition Hall B, C #114

Keywords: [ Convex Optimization ] [ Optimization ]


Abstract: We study the convergence of Stochastic Gradient Descent (SGD) for strongly convex objective functions. We prove for all t a lower bound on the expected convergence rate after the t-th SGD iteration; the lower bound holds over all possible sequences of diminishing step sizes. It implies that recently proposed sequences of step sizes at ICML 2018 and ICML 2019 are universally close to optimal, in that the expected convergence rate after each iteration is within a factor 32 of our lower bound. This factor is independent of dimension d. We offer a framework for comparing with lower bounds in state-of-the-art literature, and when applied to SGD for strongly convex objective functions, our lower bound is a significant factor 775·d larger than that of existing work.
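
The setting can be made concrete with a minimal sketch: SGD on a strongly convex quadratic with a generic diminishing step-size schedule of the form 2/(mu*(t+1)), under additive gradient noise. The objective, schedule, and constants below are illustrative assumptions and are not the specific step-size sequences analyzed in the paper or in the cited ICML 2018/2019 works.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's construction):
# SGD on the strongly convex quadratic f(w) = 0.5 * mu * ||w||^2,
# with Gaussian gradient noise and a diminishing step size eta_t = 2 / (mu * (t + 1)).

rng = np.random.default_rng(0)

mu = 1.0       # strong-convexity parameter (assumed)
d = 10         # dimension
sigma = 0.1    # std. dev. of the gradient noise (assumed)
T = 10_000     # number of SGD iterations

w = rng.normal(size=d)    # initial iterate
w_star = np.zeros(d)      # minimizer of f

for t in range(T):
    eta_t = 2.0 / (mu * (t + 1))                 # diminishing step size
    grad = mu * w + sigma * rng.normal(size=d)   # stochastic gradient of f at w
    w = w - eta_t * grad                         # SGD update

# For strongly convex objectives under such diminishing schedules, the expected
# squared distance to the optimum decays on the order of 1/t.
err = float(np.dot(w - w_star, w - w_star))
print(f"final squared error ||w_T - w*||^2 = {err:.3e}")
```

The lower bound discussed in the abstract concerns exactly this kind of quantity: the expected squared distance to the minimizer after t iterations, minimized over all admissible diminishing step-size sequences.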
