We aim to understand grokking, a phenomenon where models generalize long after overfitting their training set. We present both a microscopic analysis anchored by an effective theory and a macroscopic analysis of phase diagrams describing learning performance across hyperparameters. We find that generalization originates from structured representations, whose training dynamics and dependence on training set size can be predicted by our effective theory (in a toy setting). We observe empirically the presence of four learning phases: comprehension, grokking, memorization, and confusion. We find representation learning to occur only in a "Goldilocks zone" (including comprehension and grokking) between memorization and confusion. Compared to the comprehension phase, the grokking phase stays closer to the memorization phase, leading to delayed generalization. The Goldilocks phase is reminiscent of "intelligence from starvation" in Darwinian evolution, where resource limitations drive discovery of more efficient solutions. This study not only provides intuitive explanations of the origin of grokking, but also highlights the usefulness of physics-inspired tools, e.g., effective theories and phase diagrams, for understanding deep learning.
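Grokking can be reproduced in a small, self-contained experiment. The sketch below is illustrative only (not the authors' code): it trains a toy embedding-plus-MLP model on modular addition (a + b) mod p and logs train and test accuracy. All hyperparameter values here (p = 53, train_frac = 0.4, the AdamW learning rate and weight decay, the network sizes) are assumptions chosen for illustration; in a grokking run, train accuracy saturates many steps before test accuracy.

```python
# Minimal grokking-style sketch (illustrative, not the authors' implementation).
# Task: learn (a + b) mod p from a fraction of all input pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)
p = 53              # modulus for the toy task (assumed, for illustration)
train_frac = 0.4    # fraction of all p*p pairs used for training (assumed)

# Build the full dataset of (a, b) -> (a + b) mod p and split it.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
n_train = int(train_frac * len(pairs))
train_idx, test_idx = perm[:n_train], perm[n_train:]

# Small learned embedding followed by an MLP decoder, similar in spirit
# to the toy setting the abstract refers to.
d = 32
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(p, d)
        self.mlp = nn.Sequential(nn.Linear(2 * d, 128), nn.ReLU(),
                                 nn.Linear(128, p))
    def forward(self, ab):
        e = self.embed(ab)             # (batch, 2, d)
        return self.mlp(e.flatten(1))  # concatenate the two token embeddings

model = ToyModel()
# Weight decay is one of the knobs that moves a run between phases.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

for step in range(20000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # In a grokking run, train accuracy reaches ~1.0 long before test accuracy.
        print(f"step {step:6d}  train acc {accuracy(train_idx):.3f}  "
              f"test acc {accuracy(test_idx):.3f}")
```

Varying knobs such as the weight decay or train_frac in this sketch is one way to probe the phase structure the abstract describes: some settings generalize quickly (comprehension), some generalize late (grokking), and some never generalize (memorization).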
Author Information
Ziming Liu (MIT)
Ouail Kitouni (MIT)
Niklas S Nolte (MIT)
Eric Michaud (University of California, Berkeley)
Max Tegmark (MIT)
Max Tegmark is a professor doing physics and AI research at MIT, and advocates for positive use of technology as president of the Future of Life Institute. He is the author of over 250 publications as well as the New York Times bestsellers “Life 3.0: Being Human in the Age of Artificial Intelligence” and “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”. His AI research focuses on intelligible intelligence. His work with the Sloan Digital Sky Survey on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003”.
Mike Williams (MIT)
More from the Same Authors
- 2021 : Physics-Augmented Learning: A New Paradigm Beyond Physics-Informed Learning
  Ziming Liu · Yuanqi Du · Yunyue Chen · Max Tegmark
- 2021 : Robust and Provably Monotonic Networks
  Niklas S Nolte · Ouail Kitouni · Mike Williams
- 2022 : Finding NEEMo: Geometric Fitting using Neural Estimation of the Energy Mover’s Distance
  Ouail Kitouni · Mike Williams · Niklas S Nolte
- 2023 : Transformers for Scattering Amplitudes
  Garrett Merz · Francois Charton · Tianji Cai · Kyle Cranmer · Lance Dixon · Niklas Nolte · Matthias Wilhelm
- 2023 : Optimized Dry Cooling for Solar Power Plants
  Hansley Narasiah · Ouail Kitouni · Andrea Scorsoglio · Bernd Sturdza · Shawn Hatcher · Dolores Garcia · Matt Kusner
- 2023 : Grokking as Simplification: A Nonlinear Complexity Perspective
  Ziming Liu · Ziqian Zhong · Max Tegmark
- 2023 : Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks
  Ziming Liu · Mikail Khona · Ila Fiete · Max Tegmark
- 2023 : The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets
  Samuel Marks · Max Tegmark
- 2023 Workshop: AI for Science: from Theory to Practice
  Yuanqi Du · Max Welling · Yoshua Bengio · Marinka Zitnik · Carla Gomes · Jure Leskovec · Maria Brbic · Wenhao Gao · Kexin Huang · Ziming Liu · Rocío Mercado · Miles Cranmer · Shengchao Liu · Lijing Wang
- 2023 Poster: Restart Sampling for Improving Generative Processes
  Yilun Xu · Mingyang Deng · Xiang Cheng · Yonglong Tian · Ziming Liu · Tommi Jaakkola
- 2023 Poster: The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
  Ziqian Zhong · Ziming Liu · Max Tegmark · Jacob Andreas
- 2023 Oral: The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
  Ziqian Zhong · Ziming Liu · Max Tegmark · Jacob Andreas
- 2023 Poster: The Quantization Model of Neural Scaling
  Eric Michaud · Ziming Liu · Uzay Girit · Max Tegmark
- 2022 Spotlight: Poisson Flow Generative Models
  Yilun Xu · Ziming Liu · Max Tegmark · Tommi Jaakkola
- 2022 Spotlight: Lightning Talks 6B-1
  Yushun Zhang · Duc Nguyen · Jiancong Xiao · Wei Jiang · Yaohua Wang · Yilun Xu · Zhen LI · Anderson Ye Zhang · Ziming Liu · Fangyi Zhang · Gilles Stoltz · Congliang Chen · Gang Li · Yanbo Fan · Ruoyu Sun · Naichen Shi · Yibo Wang · Ming Lin · Max Tegmark · Lijun Zhang · Jue Wang · Ruoyu Sun · Tommi Jaakkola · Senzhang Wang · Zhi-Quan Luo · Xiuyu Sun · Zhi-Quan Luo · Tianbao Yang · Rong Jin
- 2022 Panel: Panel 1C-3: Towards Understanding Grokking:… & Approximation with CNNs…
  Ziming Liu · GUOHAO SHEN
- 2022 Poster: Poisson Flow Generative Models
  Yilun Xu · Ziming Liu · Max Tegmark · Tommi Jaakkola
- 2021 Workshop: AI for Science: Mind the Gaps
  Payal Chandak · Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Gabriel Spadon · Max Tegmark · Hanchen Wang · Adrian Weller · Max Welling · Marinka Zitnik
- 2020 Poster: AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity
  Silviu-Marian Udrescu · Andrew Tan · Jiahai Feng · Orisvaldo Neto · Tailin Wu · Max Tegmark
- 2020 Oral: AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity
  Silviu-Marian Udrescu · Andrew Tan · Jiahai Feng · Orisvaldo Neto · Tailin Wu · Max Tegmark
- 2015 : Machine Learning in HEP
  Mike Williams