The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms. This notion refers to the tendency of the optimization algorithm toward a certain structured solution that often generalizes well. Recently, several papers have studied implicit regularization and identified this phenomenon in various scenarios.
We revisit this paradigm in arguably the simplest non-trivial setup, and study the implicit bias of Stochastic Gradient Descent (SGD) in the context of Stochastic Convex Optimization. As a first step, we provide a simple construction that rules out the existence of a \emph{distribution-independent} implicit regularizer that governs the generalization ability of SGD. We then demonstrate a learning problem that rules out a very general class of \emph{distribution-dependent} implicit regularizers from explaining generalization, including strongly convex regularizers as well as non-degenerate norm-based regularizers. Certain aspects of our constructions point to significant difficulties in providing a comprehensive explanation of an algorithm's generalization performance solely by arguing about its implicit regularization properties.
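For a concrete picture of the kind of implicit-regularization claim being revisited, consider the standard example (illustrative only, not a construction from the paper): in overparameterized linear least squares, SGD initialized at zero is known to converge to the minimum-Euclidean-norm interpolating solution, so in that special case the Euclidean norm plays the role of a distribution-independent implicit regularizer. A minimal NumPy sketch of this phenomenon:

```python
import numpy as np

# Illustrative sketch (not from the paper): in overparameterized least squares,
# SGD started at zero converges to the minimum-Euclidean-norm interpolant,
# the textbook example of an "implicit regularizer".
rng = np.random.default_rng(0)
n, d = 20, 100                          # fewer samples than parameters
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)          # realizable (noiseless) labels

def sgd(X, y, lr=1e-3, epochs=500):
    """Plain SGD on the squared loss, initialized at zero."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * (X[i] @ w - y[i]) * X[i]   # per-sample gradient step
    return w

w_sgd = sgd(X, y)
w_min_norm = np.linalg.pinv(X) @ y      # minimum-norm solution of X w = y

print("training error:", np.linalg.norm(X @ w_sgd - y))
print("distance to min-norm interpolant:", np.linalg.norm(w_sgd - w_min_norm))
```

The paper's constructions show that, already for SGD in stochastic convex optimization, no analogous fixed regularizer, and no regularizer from a broad distribution-dependent class, can account for generalization in general.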
Author Information
Assaf Dauber (Tel-Aviv University)
Meir Feder (Tel-Aviv University)
Tomer Koren (Tel Aviv University & Google)
Roi Livni (Tel Aviv University)
More from the Same Authors
- 2021 Spotlight: Littlestone Classes are Privately Online Learnable
  Noah Golowich · Roi Livni
- 2022 Poster: Benign Underfitting of Stochastic Gradient Descent
  Tomer Koren · Roi Livni · Yishay Mansour · Uri Sherman
- 2022 Poster: Rate-Optimal Online Convex Optimization in Adaptive Linear Control
  Asaf Benjamin Cassel · Alon Peled-Cohen · Tomer Koren
- 2022 Poster: Better Best of Both Worlds Bounds for Bandits with Switching Costs
  Idan Amir · Guy Azov · Tomer Koren · Roi Livni
- 2021 Poster: Algorithmic Instabilities of Accelerated Gradient Descent
  Amit Attia · Tomer Koren
- 2021 Oral: Optimal Rates for Random Order Online Optimization
  Uri Sherman · Tomer Koren · Yishay Mansour
- 2021 Poster: Towards Best-of-All-Worlds Online Learning with Feedback Graphs
  Liad Erez · Tomer Koren
- 2021 Poster: Never Go Full Batch (in Stochastic Convex Optimization)
  Idan Amir · Yair Carmon · Tomer Koren · Roi Livni
- 2021 Poster: Littlestone Classes are Privately Online Learnable
  Noah Golowich · Roi Livni
- 2021 Poster: Optimal Rates for Random Order Online Optimization
  Uri Sherman · Tomer Koren · Yishay Mansour
- 2021 Poster: Asynchronous Stochastic Optimization Robust to Arbitrary Delays
  Alon Cohen · Amit Daniely · Yoel Drori · Tomer Koren · Mariano Schain
- 2021 Poster: Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
  Koby Bibas · Meir Feder · Tal Hassner
- 2020 Poster: Bandit Linear Control
  Asaf Benjamin Cassel · Tomer Koren
- 2020 Spotlight: Bandit Linear Control
  Asaf Benjamin Cassel · Tomer Koren
- 2020 Poster: Prediction with Corrupted Expert Advice
  Idan Amir · Idan Attias · Tomer Koren · Yishay Mansour · Roi Livni
- 2020 Poster: Stochastic Optimization with Laggard Data Pipelines
  Naman Agarwal · Rohan Anil · Tomer Koren · Kunal Talwar · Cyril Zhang
- 2020 Spotlight: Prediction with Corrupted Expert Advice
  Idan Amir · Idan Attias · Tomer Koren · Yishay Mansour · Roi Livni
- 2019: Private Stochastic Convex Optimization: Optimal Rates in Linear Time
  Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2019 Poster: Memory Efficient Adaptive Optimization
  Rohan Anil · Vineet Gupta · Tomer Koren · Yoram Singer
- 2019 Poster: Robust Bi-Tempered Logistic Loss Based on Bregman Divergences
  Ehsan Amid · Manfred K. Warmuth · Rohan Anil · Tomer Koren
- 2017 Poster: Affine-Invariant Online Optimization and the Low-rank Experts Problem
  Tomer Koren · Roi Livni
- 2017 Poster: Multi-Armed Bandits with Metric Movement Costs
  Tomer Koren · Roi Livni · Yishay Mansour