Stochastic and adversarial data are two widely studied settings in online learning. But many optimization tasks are neither i.i.d. nor fully adversarial, which makes it of fundamental interest to get a better theoretical understanding of the world between these extremes. In this work we establish novel regret bounds for online convex optimization in a setting that interpolates between stochastic i.i.d. and fully adversarial losses. By exploiting smoothness of the expected losses, these bounds replace a dependence on the maximum gradient length by the variance of the gradients, which was previously known only for linear losses. In addition, they weaken the i.i.d. assumption by allowing, for example, adversarially poisoned rounds, which were previously considered only in the expert and bandit settings; our results extend this to the online convex optimization framework. In the fully i.i.d. case our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret. We further provide lower bounds showing that our regret upper bounds are tight for all intermediate regimes in terms of the stochastic variance and the adversarial variation of the loss gradients.
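As a minimal illustration of the setting (not the paper's algorithm or bounds): the sketch below runs plain online gradient descent on one-dimensional quadratic losses where most rounds draw the target i.i.d. and a small fraction is adversarially poisoned, and measures regret against the best fixed comparator in hindsight. All constants (the 2% poisoning rate, the step size, the domain) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000

# Loss in round t: f_t(x) = 0.5 * (x - z_t)^2 on the domain [-1, 1].
# Most targets z_t are i.i.d.; a few "poisoned" rounds are set adversarially.
z = rng.normal(0.3, 0.1, size=T)
poisoned = rng.random(T) < 0.02   # ~2% corrupted rounds (illustrative rate)
z[poisoned] = -1.0                # adversary flips the target on those rounds

x = 0.0
xs = np.empty(T)
for t in range(T):
    xs[t] = x
    grad = x - z[t]               # gradient of f_t at the current play
    # OGD step with learning rate ~ 1/sqrt(t), projected back onto [-1, 1].
    x = float(np.clip(x - grad / np.sqrt(t + 1), -1.0, 1.0))

# Regret against the best fixed point in hindsight (the mean of the targets,
# which minimizes the sum of the quadratic losses).
u = float(np.clip(z.mean(), -1.0, 1.0))
regret = 0.5 * np.sum((xs - z) ** 2) - 0.5 * np.sum((u - z) ** 2)
print(regret)
```

With only a few poisoned rounds the comparator stays close to the i.i.d. optimum and the cumulative regret remains small relative to T; raising the poisoning rate moves the sequence toward the adversarial regime.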
Author Information
Sarah Sachs (University of Amsterdam)
Hedi Hadiji (University of Amsterdam)
Tim van Erven (University of Amsterdam)
Cristóbal Guzmán (PUC-Chile)
More from the Same Authors
- 2022: Contributed Talks 3
  Cristóbal Guzmán · Fangshuo Liao · Vishwak Srinivasan · Zhiyuan Li
- 2022 Workshop: OPT 2022: Optimization for Machine Learning
  Courtney Paquette · Sebastian Stich · Quanquan Gu · Cristóbal Guzmán · John Duchi
- 2022 Poster: Stochastic Halpern Iteration with Variance Reduction for Stochastic Monotone Inclusions
  Xufeng Cai · Chaobing Song · Cristóbal Guzmán · Jelena Diakonikolas
- 2022 Poster: Differentially Private Generalized Linear Models Revisited
  Raman Arora · Raef Bassily · Cristóbal Guzmán · Michael Menart · Enayat Ullah
- 2021: Q&A with Cristóbal Guzmán
  Cristóbal Guzmán
- 2021: Non-Euclidean Differentially Private Stochastic Convex Optimization, Cristóbal Guzmán
  Cristóbal Guzmán
- 2021 Poster: Best-case lower bounds in online learning
  Cristóbal Guzmán · Nishant Mehta · Ali Mortazavi
- 2021 Poster: Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
  Raef Bassily · Cristóbal Guzmán · Michael Menart
- 2020 Poster: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2020 Spotlight: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2016 Poster: Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning
  Wouter Koolen · Peter Grünwald · Tim van Erven
- 2016 Poster: MetaGrad: Multiple Learning Rates in Online Learning
  Tim van Erven · Wouter Koolen
- 2016 Oral: MetaGrad: Multiple Learning Rates in Online Learning
  Tim van Erven · Wouter Koolen
- 2015: Discussion Panel
  Tim van Erven · Wouter Koolen · Peter Grünwald · Shai Ben-David · Dylan Foster · Satyen Kale · Gergely Neu
- 2015: Learning Faster from Easy Data II: Introduction
  Tim van Erven
- 2015 Workshop: Learning Faster from Easy Data II
  Tim van Erven · Wouter Koolen
- 2014 Poster: Learning the Learning Rate for Prediction with Expert Advice
  Wouter M Koolen · Tim van Erven · Peter Grünwald
- 2013 Workshop: Learning Faster From Easy Data
  Peter Grünwald · Wouter M Koolen · Sasha Rakhlin · Nati Srebro · Alekh Agarwal · Karthik Sridharan · Tim van Erven · Sebastien Bubeck
- 2012 Poster: Mixability in Statistical Learning
  Tim van Erven · Peter Grünwald · Mark Reid · Robert Williamson
- 2011 Poster: Adaptive Hedge
  Tim van Erven · Peter Grünwald · Wouter M Koolen · Steven D Rooij