The \emph{grokking phenomenon} reported by Power et al.~\cite{power2021grokking} refers to a regime where a long period of overfitting is followed by a seemingly sudden transition to perfect generalization. In this paper, we attempt to reveal the underpinnings of Grokking via empirical studies. Specifically, we uncover an optimization anomaly that plagues adaptive optimizers at extremely late stages of training, which we call the \emph{Slingshot Mechanism}. A prominent artifact of the Slingshot Mechanism is cyclic phase transitions between stable and unstable training regimes, which can be monitored through the cyclic behavior of the norm of the last layer's weights. We empirically observe that, without explicit regularization, Grokking as reported in \cite{power2021grokking} almost exclusively happens at the onset of \emph{Slingshots} and is absent without them. While common and easily reproduced in more general settings, the Slingshot Mechanism does not follow from any optimization theory we are aware of and can easily be overlooked without an in-depth examination. Our work points to a surprising and useful inductive bias of adaptive gradient optimizers at late stages of training, calling for a revised theoretical analysis of its origin.
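The last-layer weight norm mentioned above is straightforward to track in practice. Below is a minimal PyTorch sketch of the kind of setup in which Slingshots can be observed: a small network trained on modular addition with AdamW and no explicit regularization, logging validation accuracy together with the norm of the final layer's weights. The modular-addition task, the `ModAddNet` architecture, and all hyperparameters here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (illustrative, not the paper's exact setup): train a small
# network on modular addition with AdamW and log the last-layer weight norm,
# which the abstract identifies as a simple monitor for Slingshot cycles.
import torch
import torch.nn as nn

P = 97                                           # modulus for (a + b) mod P
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = len(pairs) // 2                          # hold out 50% of the pairs
train_x, val_x = pairs[perm[:split]], pairs[perm[split:]]
train_y, val_y = labels[perm[:split]], labels[perm[split:]]

class ModAddNet(nn.Module):
    """Tiny embed-and-classify network; hypothetical stand-in architecture."""
    def __init__(self, dim=128):
        super().__init__()
        self.embed = nn.Embedding(P, dim)
        self.body = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.head = nn.Linear(dim, P)            # "last layer" being monitored
    def forward(self, x):
        e = self.embed(x).flatten(1)             # (batch, 2*dim)
        return self.head(self.body(e))

model = ModAddNet()
# weight_decay=0.0: no explicit regularization, matching the abstract's setting
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(100_000):                      # very long training run
    opt.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            val_acc = (model(val_x).argmax(-1) == val_y).float().mean()
            w_norm = model.head.weight.norm()    # cyclic spikes ~ Slingshots
        print(f"{step=} loss={loss.item():.4f} val_acc={val_acc:.3f} "
              f"head_norm={w_norm:.2f}")
```

If a Slingshot occurs in such a run, it should appear as sudden spikes in the logged `head_norm` trace, with jumps in `val_acc` coinciding with their onset.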
Author Information
Vimal Thilak (Apple)
Etai Littwin (Apple)
Shuangfei Zhai (Apple)
Omid Saremi (Apple)
Roni Paiss
Joshua Susskind (Apple)
I was an undergraduate in Cognitive Science at UCSD from 1995-2003 (with some breaks). I then earned a PhD from the University of Toronto in machine learning and cognitive neuroscience, with Dr. Geoff Hinton and Dr. Adam Anderson. Following grad school, I moved to UCSD for a post-doctoral position. Before coming to Apple, I co-founded Emotient in 2012 and led the deep learning effort for facial expression and demographics recognition. At Apple, I led the Face ID neural network team responsible for face recognition, and then started a machine learning research group within the hardware organization focused on fundamental ML technology.
More from the Same Authors
- 2021: Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models
  Nitish Srivastava · Walter Talbott · Shuangfei Zhai · Joshua Susskind
- 2023 Poster: Transformers learn through gradual rank increase
  Emmanuel Abbe · Samy Bengio · Enric Boix-Adsera · Etai Littwin · Joshua Susskind
- 2023 Poster: PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model
  Yizhe Zhang · Jiatao Gu · Zhuofeng Wu · Shuangfei Zhai · Joshua Susskind · Navdeep Jaitly
- 2022 Poster: GAUDI: A Neural Architect for Immersive 3D Scene Generation
  Miguel Angel Bautista · Pengsheng Guo · Samira Abnar · Walter Talbott · Alexander Toshev · Zhuoyuan Chen · Laurent Dinh · Shuangfei Zhai · Hanlin Goh · Daniel Ulbricht · Afshin Dehghan · Joshua Susskind
- 2020 Poster: Collegial Ensembles
  Etai Littwin · Ben Myara · Sima Sabah · Joshua Susskind · Shuangfei Zhai · Oren Golan
- 2020 Spotlight: Collegial Ensembles
  Etai Littwin · Ben Myara · Sima Sabah · Joshua Susskind · Shuangfei Zhai · Oren Golan
- 2020 Poster: On Infinite-Width Hypernetworks
  Etai Littwin · Tomer Galanti · Lior Wolf · Greg Yang
- 2019 Poster: Adversarial Fisher Vectors for Unsupervised Representation Learning
  Shuangfei Zhai · Walter Talbott · Carlos Guestrin · Joshua Susskind
- 2019 Spotlight: Adversarial Fisher Vectors for Unsupervised Representation Learning
  Shuangfei Zhai · Walter Talbott · Carlos Guestrin · Joshua Susskind
- 2018 Poster: Regularizing by the Variance of the Activations' Sample-Variances
  Etai Littwin · Lior Wolf