Poster
Benign Underfitting of Stochastic Gradient Descent
Tomer Koren · Roi Livni · Yishay Mansour · Uri Sherman
We study to what extent stochastic gradient descent (SGD) may be understood as a ``conventional'' learning rule that achieves generalization performance by obtaining a good fit to the training data. We consider the fundamental stochastic convex optimization framework, where (one-pass, $\textit{without}$-replacement) SGD is classically known to minimize the population risk at rate $O(1/\sqrt n)$, and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of $\Omega(1)$. Consequently, it turns out that SGD is not algorithmically stable in $\textit{any}$ sense, and its generalization ability cannot be explained by uniform convergence or, for that matter, any other currently known generalization bound technique (other than that of its classical analysis). We then analyze the closely related $\textit{with}$-replacement SGD, for which we show that an analogous phenomenon does not occur, and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results.
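To make the two sampling schemes discussed in the abstract concrete, below is a minimal numerical sketch contrasting one-pass without-replacement SGD (each training example used exactly once) with with-replacement SGD (each step draws an example uniformly at random from the training set). The toy quadratic objective, step size, and all function names here are illustrative assumptions for this sketch only; they are not the paper's lower-bound construction, which relies on carefully designed high-dimensional instances.

```python
import numpy as np

def sgd_without_replacement(samples, grad, w0, lr):
    """One-pass SGD: each example z_1, ..., z_n is used exactly once, in order."""
    w = w0.copy()
    iterates = [w.copy()]
    for z in samples:                      # single pass over the data, without replacement
        w = w - lr * grad(w, z)
        iterates.append(w.copy())
    return np.mean(iterates, axis=0)       # return the average iterate

def sgd_with_replacement(samples, grad, w0, lr, steps, rng):
    """SGD where each step draws a fresh example uniformly at random (with replacement)."""
    n = len(samples)
    w = w0.copy()
    iterates = [w.copy()]
    for _ in range(steps):
        z = samples[rng.integers(n)]       # example re-drawn uniformly from the training set
        w = w - lr * grad(w, z)
        iterates.append(w.copy())
    return np.mean(iterates, axis=0)

# Illustrative convex instance (an assumption of this sketch): f(w; z) = 0.5 * ||w - z||^2.
rng = np.random.default_rng(0)
n, d = 100, 5
samples = rng.normal(size=(n, d))          # training set S = (z_1, ..., z_n)
grad = lambda w, z: w - z                  # gradient of 0.5 * ||w - z||^2
w0 = np.zeros(d)
lr = 1.0 / np.sqrt(n)                      # step size of order 1/sqrt(n), matching the classical analysis

w_wo = sgd_without_replacement(samples, grad, w0, lr)
w_w  = sgd_with_replacement(samples, grad, w0, lr, steps=n, rng=rng)

emp_risk = lambda w: 0.5 * np.mean(np.sum((w - samples) ** 2, axis=1))
print("empirical risk, without replacement:", emp_risk(w_wo))
print("empirical risk, with replacement:   ", emp_risk(w_w))
```

On this benign quadratic both variants behave similarly; the paper's point is that in general stochastic convex optimization the without-replacement variant can nonetheless incur constant empirical risk and generalization gap, while the with-replacement variant provably cannot.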
Author Information
Tomer Koren (Tel Aviv University & Google)
Roi Livni (Tel Aviv University)
Yishay Mansour (Tel Aviv University & Google)
Uri Sherman (Tel Aviv University)
More from the Same Authors
-
2021 Spotlight: Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations »
Ayush Sekhari · Christoph Dann · Mehryar Mohri · Yishay Mansour · Karthik Sridharan -
2021 Spotlight: Littlestone Classes are Privately Online Learnable »
Noah Golowich · Roi Livni -
2022 : A Theory of Learning with Competing Objectives and User Feedback »
Pranjal Awasthi · Corinna Cortes · Yishay Mansour · Mehryar Mohri -
2022 : Finding Safe Zones of Markov Decision Processes Policies »
Michal Moshkovitz · Lee Cohen · Yishay Mansour -
2023 Poster: Tight Risk Bounds for Gradient Descent on Separable Data »
Matan Schliserman · Tomer Koren -
2023 Poster: Information Theoretic Lower Bounds for Information Theoretic Upper Bounds »
Roi Livni -
2022 Poster: A Characterization of Semi-Supervised Adversarially Robust PAC Learnability »
Idan Attias · Steve Hanneke · Yishay Mansour -
2022 Poster: Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization »
Idan Amir · Roi Livni · Nati Srebro -
2022 Poster: Rate-Optimal Online Convex Optimization in Adaptive Linear Control »
Asaf Benjamin Cassel · Alon Peled-Cohen · Tomer Koren -
2022 Poster: Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback »
Tiancheng Jin · Tal Lancewicki · Haipeng Luo · Yishay Mansour · Aviv Rosenberg -
2022 Poster: Fair Wrapping for Black-box Predictions »
Alexander Soen · Ibrahim Alabdulmohsin · Sanmi Koyejo · Yishay Mansour · Nyalleng Moorosi · Richard Nock · Ke Sun · Lexing Xie -
2022 Poster: Better Best of Both Worlds Bounds for Bandits with Switching Costs »
Idan Amir · Guy Azov · Tomer Koren · Roi Livni -
2021 Poster: Algorithmic Instabilities of Accelerated Gradient Descent »
Amit Attia · Tomer Koren -
2021 Poster: Minimax Regret for Stochastic Shortest Path »
Alon Cohen · Yonathan Efroni · Yishay Mansour · Aviv Rosenberg -
2021 Oral: Optimal Rates for Random Order Online Optimization »
Uri Sherman · Tomer Koren · Yishay Mansour -
2021 Poster: Towards Best-of-All-Worlds Online Learning with Feedback Graphs »
Liad Erez · Tomer Koren -
2021 Poster: Never Go Full Batch (in Stochastic Convex Optimization) »
Idan Amir · Yair Carmon · Tomer Koren · Roi Livni -
2021 Poster: Littlestone Classes are Privately Online Learnable »
Noah Golowich · Roi Livni -
2021 Poster: Optimal Rates for Random Order Online Optimization »
Uri Sherman · Tomer Koren · Yishay Mansour -
2021 Poster: Oracle-Efficient Regret Minimization in Factored MDPs with Unknown Structure »
Aviv Rosenberg · Yishay Mansour -
2021 Poster: Asynchronous Stochastic Optimization Robust to Arbitrary Delays »
Alon Cohen · Amit Daniely · Yoel Drori · Tomer Koren · Mariano Schain -
2021 Poster: Differentially Private Multi-Armed Bandits in the Shuffle Model »
Jay Tenenbaum · Haim Kaplan · Yishay Mansour · Uri Stemmer -
2021 Poster: ROI Maximization in Stochastic Online Decision-Making »
Nicolò Cesa-Bianchi · Tom Cesari · Yishay Mansour · Vianney Perchet -
2021 Poster: Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations »
Ayush Sekhari · Christoph Dann · Mehryar Mohri · Yishay Mansour · Karthik Sridharan -
2021 Poster: Dueling Bandits with Team Comparisons »
Lee Cohen · Ulrike Schmidt-Kraepelin · Yishay Mansour -
2020 Poster: Bandit Linear Control »
Asaf Benjamin Cassel · Tomer Koren -
2020 Poster: Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study »
Assaf Dauber · Meir Feder · Tomer Koren · Roi Livni -
2020 Spotlight: Bandit Linear Control »
Asaf Benjamin Cassel · Tomer Koren -
2020 Poster: Prediction with Corrupted Expert Advice »
Idan Amir · Idan Attias · Tomer Koren · Yishay Mansour · Roi Livni -
2020 Poster: Stochastic Optimization with Laggard Data Pipelines »
Naman Agarwal · Rohan Anil · Tomer Koren · Kunal Talwar · Cyril Zhang -
2020 Spotlight: Prediction with Corrupted Expert Advice »
Idan Amir · Idan Attias · Tomer Koren · Yishay Mansour · Roi Livni -
2020 Poster: Synthetic Data Generators -- Sequential and Private »
Olivier Bousquet · Roi Livni · Shay Moran -
2020 Poster: A Limitation of the PAC-Bayes Framework »
Roi Livni · Shay Moran -
2019 : Private Stochastic Convex Optimization: Optimal Rates in Linear Time »
Vitaly Feldman · Tomer Koren · Kunal Talwar -
2019 Poster: Memory Efficient Adaptive Optimization »
Rohan Anil · Vineet Gupta · Tomer Koren · Yoram Singer -
2019 Poster: Robust Bi-Tempered Logistic Loss Based on Bregman Divergences »
Ehsan Amid · Manfred K. Warmuth · Rohan Anil · Tomer Koren -
2019 Poster: Graph-based Discriminators: Sample Complexity and Expressiveness »
Roi Livni · Yishay Mansour -
2019 Spotlight: Graph-based Discriminators: Sample Complexity and Expressiveness »
Roi Livni · Yishay Mansour -
2017 Workshop: Learning in the Presence of Strategic Behavior »
Nika Haghtalab · Yishay Mansour · Tim Roughgarden · Vasilis Syrgkanis · Jennifer Wortman Vaughan -
2017 Poster: Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues »
Noga Alon · Moshe Babaioff · Yannai A. Gonczarowski · Yishay Mansour · Shay Moran · Amir Yehudayoff -
2017 Poster: Affine-Invariant Online Optimization and the Low-rank Experts Problem »
Tomer Koren · Roi Livni -
2017 Spotlight: Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues »
Noga Alon · Moshe Babaioff · Yannai A. Gonczarowski · Yishay Mansour · Shay Moran · Amir Yehudayoff -
2017 Poster: Multi-Armed Bandits with Metric Movement Costs »
Tomer Koren · Roi Livni · Yishay Mansour