Generalization analyses of deep learning typically assume that the training converges to a fixed point. However, recent results indicate that, in practice, the weights of deep neural networks optimized with stochastic gradient descent often oscillate indefinitely. To reduce this discrepancy between theory and practice, this paper focuses on the generalization of neural networks whose training dynamics do not necessarily converge to fixed points. Our main contribution is to propose a notion of statistical algorithmic stability (SAS) that extends classical algorithmic stability to non-convergent algorithms and to study its connection to generalization. This ergodic-theoretic approach leads to new insights when compared to the traditional optimization and learning theory perspectives. We prove that the stability of the time-asymptotic behavior of a learning algorithm relates to its generalization and empirically demonstrate how loss dynamics can provide clues to generalization performance. Our findings provide evidence that networks that "train stably generalize better" even when the training continues indefinitely and the weights do not converge.
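For intuition, the following is a minimal, hedged sketch of how a time-averaged analogue of classical algorithmic stability might look for non-convergent training. The trajectory notation $(w_t)$, the stability parameter $\beta$, and the ergodic-average form are assumptions chosen for illustration, not the paper's exact definition of SAS.

Classical uniform stability compares the algorithm's outputs on two datasets $S$ and $S^{(i)}$ that differ in a single example:
\[
  \sup_{S,\,i,\,z}\; \bigl|\, \ell\bigl(A(S), z\bigr) - \ell\bigl(A(S^{(i)}), z\bigr) \,\bigr| \;\le\; \beta .
\]
A non-convergent algorithm has no single output $A(S)$, so a natural time-averaged analogue instead compares the long-run behavior of the two weight trajectories $(w_t)$ and $(w'_t)$ trained on $S$ and $S^{(i)}$:
\[
  \sup_{S,\,i,\,z}\; \Bigl|\, \lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} \ell(w_t, z)
  \;-\; \lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} \ell(w'_t, z) \,\Bigr| \;\le\; \beta .
\]
In words, it is the time-asymptotic (ergodic-average) loss that must be insensitive to replacing one training example, rather than a fixed point of the dynamics, which is the sense in which "stable training" can be related to generalization without requiring convergence.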
Author Information
Nisha Chandramoorthy (Georgia Institute of Technology)
Andreas Loukas (Prescient Design, gRED, Roche)
Khashayar Gatmiry (Massachusetts Institute of Technology)
Stefanie Jegelka (Massachusetts Institute of Technology)
More from the Same Authors
- 2022 : A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences »
  Nataša Tagasovska · Nathan Frey · Andreas Loukas · Isidro Hotzel · Julien Lafrance-Vanasse · Ryan Kelly · Yan Wu · Arvind Rajpal · Richard Bonneau · Kyunghyun Cho · Stephen Ra · Vladimir Gligorijevic
- 2023 Poster: Projection-Free Online Convex Optimization via Efficient Newton Iterations »
  Khashayar Gatmiry · Zak Mhammedi
- 2023 Poster: Limits, approximation and size transferability for GNNs on sparse graphs via graphops »
  Thien Le · Stefanie Jegelka
- 2023 Poster: Expressive Sign Equivariant Networks for Spectral Geometric Learning »
  Derek Lim · Joshua Robinson · Stefanie Jegelka · Haggai Maron
- 2023 Poster: AbDiffuser: full-atom generation of in-vitro functioning antibodies »
  Karolis Martinkus · Jan Ludwiczak · WEI-CHING LIANG · Julien Lafrance-Vanasse · Isidro Hotzel · Arvind Rajpal · Yan Wu · Kyunghyun Cho · Richard Bonneau · Vladimir Gligorijevic · Andreas Loukas
- 2023 Poster: The Exact Sample Complexity Gain from Invariances for Kernel Regression »
  Behrooz Tahmasebi · Stefanie Jegelka
- 2023 Poster: What is the Inductive Bias of Flatness Regularization? A Study of Deep Matrix Factorization Models »
  Khashayar Gatmiry · Zhiyuan Li · Tengyu Ma · Sashank Reddi · Stefanie Jegelka · Ching-Yao Chuang
- 2023 Workshop: Heavy Tails in ML: Structure, Stability, Dynamics »
  Mert Gurbuzbalaban · Stefanie Jegelka · Michael Mahoney · Umut Simsekli
- 2022 : Panel »
  Roman Garnett · José Miguel Hernández-Lobato · Eytan Bakshy · Syrine Belakaria · Stefanie Jegelka
- 2022 Workshop: New Frontiers in Graph Learning »
  Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka
- 2022 Poster: Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks »
  Ching-Yao Chuang · Stefanie Jegelka
- 2022 Poster: Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions »
  Nikolaos Karalias · Joshua Robinson · Andreas Loukas · Stefanie Jegelka
- 2021 Poster: What training reveals about neural network complexity »
  Andreas Loukas · Marinos Poiitis · Stefanie Jegelka
- 2021 Poster: SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning »
  Mattia Atzeni · Jasmina Bogojeska · Andreas Loukas
- 2021 Poster: Partition and Code: learning how to compress graphs »
  Giorgos Bouritsas · Andreas Loukas · Nikolaos Karalias · Michael Bronstein
- 2020 Poster: Adaptive Sampling for Stochastic Risk-Averse Learning »
  Sebastian Curi · Kfir Y. Levy · Stefanie Jegelka · Andreas Krause
- 2020 Poster: How hard is to distinguish graphs with graph neural networks? »
  Andreas Loukas
- 2020 Poster: Building powerful and equivariant graph neural networks with structural message-passing »
  Clément Vignac · Andreas Loukas · Pascal Frossard
- 2020 Poster: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs »
  Nikolaos Karalias · Andreas Loukas
- 2020 Oral: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs »
  Nikolaos Karalias · Andreas Loukas
- 2020 Poster: Debiased Contrastive Learning »
  Ching-Yao Chuang · Joshua Robinson · Yen-Chen Lin · Antonio Torralba · Stefanie Jegelka
- 2020 Spotlight: Debiased Contrastive Learning »
  Ching-Yao Chuang · Joshua Robinson · Yen-Chen Lin · Antonio Torralba · Stefanie Jegelka
- 2019 Workshop: Graph Representation Learning »
  Will Hamilton · Rianne van den Berg · Michael Bronstein · Stefanie Jegelka · Thomas Kipf · Jure Leskovec · Renjie Liao · Yizhou Sun · Petar Veličković