Stochastic gradient descent with a large initial learning rate is widely used for training modern neural net architectures. Although a small initial learning rate allows for faster training and better test performance initially, the large learning rate achieves better generalization soon after the learning rate is annealed. Towards explaining this phenomenon, we devise a setting in which a two-layer network trained with a large initial learning rate and annealing provably generalizes better than the same network trained with a small learning rate from the start. The key insight in our analysis is that the order in which different types of patterns are learned is crucial: because the small learning rate model first memorizes easy-to-generalize, hard-to-fit patterns, it generalizes worse on hard-to-generalize, easier-to-fit patterns than its large learning rate counterpart. This concept translates to a larger-scale setting: we demonstrate that one can add a small patch to CIFAR-10 images that a model with a small initial learning rate immediately memorizes, but that a model with a large learning rate ignores until after annealing. Our experiments show that the small learning rate model's accuracy on unmodified images suffers as a result, because it relies too heavily on the patch early in training.
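As a rough illustration of the patch experiment described above (not the authors' released code), the sketch below stamps a small label-encoding patch onto CIFAR-10 training images and sets up the two SGD runs being compared: a large initial learning rate that is annealed partway through training versus a small learning rate from the start. The patch encoding, learning rates, annealing epoch, and decay factor are illustrative assumptions, not the paper's exact settings.

```python
# Minimal, illustrative sketch of the CIFAR-10 patch experiment (assumed
# hyperparameters throughout; not the paper's exact configuration).
import torch
import torchvision
import torchvision.transforms as T


def add_label_patch(img, label, patch_size=3):
    """Stamp a small label-dependent patch into the top-left corner.

    `img` is a (3, 32, 32) tensor in [0, 1]; the patch intensity directly
    encodes the class index, which makes it trivially memorizable.
    """
    img = img.clone()
    img[:, :patch_size, :patch_size] = label / 9.0  # 10 CIFAR-10 classes
    return img


class PatchedCIFAR10(torch.utils.data.Dataset):
    """CIFAR-10 wrapper that adds the label patch to every image."""

    def __init__(self, root, train=True):
        self.base = torchvision.datasets.CIFAR10(
            root, train=train, download=True, transform=T.ToTensor())

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]
        return add_label_patch(img, label), label


def make_optimizer_and_scheduler(model, large_lr=True):
    """Large initial LR annealed at epoch 30 vs. a small LR throughout."""
    lr = 0.1 if large_lr else 0.004
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    milestones = [30] if large_lr else []  # no annealing for the small-LR run
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones, gamma=0.04)
    return opt, sched
```

Comparing the two runs on unmodified test images would then surface the effect described in the abstract: the small learning rate run latches onto the patch early, while the large learning rate run ignores it until annealing.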
Author Information
Yuanzhi Li (Princeton)
Colin Wei (Stanford University)
Tengyu Ma (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Spotlight: Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks »
  Tue. Dec 10th 06:35 -- 06:40 PM, Room: West Exhibition Hall C + B3
More from the Same Authors
- 2021 Spotlight: Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning »
  Colin Wei · Sang Michael Xie · Tengyu Ma
- 2021 : Sharp Bounds for FedAvg (Local SGD) »
  Margalit Glasgow · Honglin Yuan · Tengyu Ma
- 2022 : How Sharpness-Aware Minimization Minimizes Sharpness? »
  Kaiyue Wen · Tengyu Ma · Zhiyuan Li
- 2022 : First Steps Toward Understanding the Extrapolation of Nonlinear Models to Unseen Domains »
  Kefan Dong · Tengyu Ma
- 2023 Poster: Data Selection for Language Models via Importance Resampling »
  Sang Michael Xie · Shibani Santurkar · Tengyu Ma · Percy Liang
- 2023 Poster: DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining »
  Sang Michael Xie · Hieu Pham · Xuanyi Dong · Nan Du · Hanxiao Liu · Yifeng Lu · Percy Liang · Quoc V Le · Tengyu Ma · Adams Wei Yu
- 2023 Poster: Beyond NTK with Vanilla Gradient Descent: A Mean-field Analysis of Neural Networks with Polynomial Width, Samples, and Time »
  Arvind Mahankali · Jeff Z. HaoChen · Kefan Dong · Margalit Glasgow · Tengyu Ma
- 2023 Poster: Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization »
  Kaiyue Wen · Tengyu Ma · Zhiyuan Li
- 2023 Poster: What is the Inductive Bias of Flatness Regularization? A Study of Deep Matrix Factorization Models »
  Khashayar Gatmiry · Zhiyuan Li · Tengyu Ma · Sashank Reddi · Stefanie Jegelka · Ching-Yao Chuang
- 2023 Oral: Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization »
  Kaiyue Wen · Tengyu Ma · Zhiyuan Li
- 2023 Workshop: Mathematics of Modern Machine Learning (M3L) »
  Aditi Raghunathan · Alex Damian · Bingbin Liu · Christina Baek · Kaifeng Lyu · Surbhi Goel · Tengyu Ma · Zhiyuan Li
- 2022 Poster: Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers »
  Colin Wei · Yining Chen · Tengyu Ma
- 2022 Poster: Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments »
  Yining Chen · Elan Rosenfeld · Mark Sellke · Tengyu Ma · Andrej Risteski
- 2022 Poster: Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations »
  Jeff Z. HaoChen · Colin Wei · Ananya Kumar · Tengyu Ma
- 2021 : Invited talk 4 »
  Tengyu Ma
- 2021 : Contributed Talk 4: Sharp Bounds for FedAvg (Local SGD) »
  Margalit Glasgow · Honglin Yuan · Tengyu Ma
- 2021 Poster: Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning »
  Colin Wei · Sang Michael Xie · Tengyu Ma
- 2021 Oral: Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss »
  Jeff Z. HaoChen · Colin Wei · Adrien Gaidon · Tengyu Ma
- 2021 Poster: Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss »
  Jeff Z. HaoChen · Colin Wei · Adrien Gaidon · Tengyu Ma
- 2020 Poster: Federated Accelerated Stochastic Gradient Descent »
  Honglin Yuan · Tengyu Ma
- 2020 Poster: Self-training Avoids Using Spurious Features Under Domain Shift »
  Yining Chen · Colin Wei · Ananya Kumar · Tengyu Ma
- 2020 Poster: Beyond Lazy Training for Over-parameterized Tensor Decomposition »
  Xiang Wang · Chenwei Wu · Jason Lee · Tengyu Ma · Rong Ge
- 2020 Poster: Model-based Adversarial Meta-Reinforcement Learning »
  Zichuan Lin · Garrett Thomas · Guangwen Yang · Tengyu Ma
- 2020 Poster: MOPO: Model-based Offline Policy Optimization »
  Tianhe Yu · Garrett Thomas · Lantao Yu · Stefano Ermon · James Zou · Sergey Levine · Chelsea Finn · Tengyu Ma
- 2019 Poster: Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss »
  Kaidi Cao · Colin Wei · Adrien Gaidon · Nikos Arechiga · Tengyu Ma
- 2019 Poster: Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel »
  Colin Wei · Jason Lee · Qiang Liu · Tengyu Ma
- 2019 Spotlight: Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel »
  Colin Wei · Jason Lee · Qiang Liu · Tengyu Ma
- 2019 Poster: On the Convergence Rate of Training Recurrent Neural Networks »
  Zeyuan Allen-Zhu · Yuanzhi Li · Zhao Song
- 2019 Poster: What Can ResNet Learn Efficiently, Going Beyond Kernels? »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2019 Poster: Verified Uncertainty Calibration »
  Ananya Kumar · Percy Liang · Tengyu Ma
- 2019 Spotlight: Verified Uncertainty Calibration »
  Ananya Kumar · Percy Liang · Tengyu Ma
- 2019 Poster: Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers »
  Zeyuan Allen-Zhu · Yuanzhi Li · Yingyu Liang
- 2019 Poster: Complexity of Highly Parallel Non-Smooth Convex Optimization »
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2019 Spotlight: Complexity of Highly Parallel Non-Smooth Convex Optimization »
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2019 Poster: Can SGD Learn Recurrent Neural Networks with Provable Generalization? »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2019 Poster: Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation »
  Colin Wei · Tengyu Ma
- 2019 Spotlight: Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation »
  Colin Wei · Tengyu Ma
- 2018 Poster: Online Improper Learning with an Approximation Oracle »
  Elad Hazan · Wei Hu · Yuanzhi Li · Zhiyuan Li
- 2018 Poster: NEON2: Finding Local Minima via First-Order Oracles »
  Zeyuan Allen-Zhu · Yuanzhi Li
- 2018 Poster: Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data »
  Yuanzhi Li · Yingyu Liang
- 2018 Spotlight: Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data »
  Yuanzhi Li · Yingyu Liang