We present an approach to convex optimization that relies on a novel scheme that converts adaptive online algorithms into offline methods. In the offline optimization setting, our derived methods are shown to obtain favourable adaptive guarantees that depend on the harmonic sum of the queried gradients. We further show that our methods implicitly adapt to the objective's structure: in the smooth case, fast convergence rates are ensured without any prior knowledge of the smoothness parameter, while guarantees are still maintained in the non-smooth setting. Our approach extends naturally to the stochastic setting, resulting in a lazy version of SGD (stochastic gradient descent) in which minibatches are chosen adaptively depending on the magnitude of the gradients, thus providing a principled approach to choosing minibatch sizes.
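To make the adaptive-minibatch idea concrete, the following is a minimal sketch (not the paper's actual algorithm; the function names, growth rule, and constants are hypothetical assumptions) of an SGD loop that enlarges the minibatch as the observed gradient magnitude shrinks:

```python
# Hypothetical sketch: "lazy" SGD that grows the minibatch when gradients get small.
# This illustrates the general idea of gradient-magnitude-dependent minibatch sizes,
# not the specific scheme derived in the paper.
import numpy as np

def lazy_sgd(grad_fn, x0, lr=0.1, base_batch=8, max_batch=512, steps=100, tol=1e-8):
    """grad_fn(x, batch_size) returns a stochastic gradient averaged over batch_size samples."""
    x = np.asarray(x0, dtype=float)
    batch = base_batch
    for _ in range(steps):
        g = grad_fn(x, batch)
        g_norm = np.linalg.norm(g)
        if g_norm < tol:
            break
        # Heuristic rule (assumed for illustration): small gradients are noisier
        # relative to their size, so enlarge the minibatch to keep the step reliable.
        batch = min(max_batch, max(base_batch, int(base_batch / max(g_norm, 1e-3))))
        x = x - lr * g
    return x

# Toy usage: noisy gradients of f(x) = 0.5 * ||x||^2, noise shrinking with batch size.
if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def grad_fn(x, batch_size):
        noise = rng.normal(scale=1.0 / np.sqrt(batch_size), size=x.shape)
        return x + noise

    print(lazy_sgd(grad_fn, x0=np.ones(5)))
```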
Author Information
Kfir Levy (ETH)
More from the Same Authors
- 2021 Poster: Faster Neural Network Training with Approximate Tensor Operations (Menachem Adelman · Kfir Levy · Ido Hakimi · Mark Silberstein)
- 2021 Poster: STORM+: Fully Adaptive SGD with Recursive Momentum for Nonconvex Optimization (Kfir Levy · Ali Kavis · Volkan Cevher)
- 2017 Poster: Non-monotone Continuous DR-submodular Maximization: Structure and Algorithms (Yatao Bian · Kfir Levy · Andreas Krause · Joachim M Buhmann)