This work explores the Benevolent Training Hypothesis (BTH), which argues that the complexity of the function a deep neural network (NN) is learning can be deduced from its training dynamics. Our analysis provides evidence for BTH by relating the NN's Lipschitz constant at different regions of the input space to the behavior of the stochastic training procedure. We first observe that the Lipschitz constant close to the training data affects various aspects of the parameter trajectory, with more complex networks having a longer trajectory, greater variance, and often veering further from their initialization. We then show that NNs whose first-layer bias is trained more steadily (i.e., slowly and with little variation) have bounded complexity even in regions of the input space that are far from any training point. Finally, we find that steady training with Dropout implies a training- and data-dependent generalization bound that grows poly-logarithmically with the number of parameters. Overall, our results support the intuition that good training behavior can be a useful bias towards good generalization.
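The trajectory statistics the abstract refers to (path length, step variance, and distance from initialization) can be made concrete with a small sketch. This is an illustrative toy example, not the paper's actual experimental setup: it trains a logistic-regression model with full-batch gradient descent on synthetic data and records the three quantities along the way.

```python
# Illustrative sketch (not the paper's method): track parameter-trajectory
# statistics -- total path length, step variance, and distance from
# initialization -- while training a tiny logistic-regression model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)  # synthetic labels

w = np.zeros(5)        # initialization
w_init = w.copy()
step_norms = []        # per-step displacement norms
lr = 0.1

for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
    grad = X.T @ (p - y) / len(y)      # logistic-loss gradient
    w_next = w - lr * grad
    step_norms.append(np.linalg.norm(w_next - w))
    w = w_next

trajectory_length = float(np.sum(step_norms))     # total path length in weight space
step_variance = float(np.var(step_norms))         # "steadiness" of the updates
dist_from_init = float(np.linalg.norm(w - w_init))  # how far training veered

print(trajectory_length, step_variance, dist_from_init)
```

By the triangle inequality, the distance from initialization is always at most the trajectory length; a large gap between the two indicates an oscillating, less steady trajectory.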
Author Information
Andreas Loukas (EPFL, MIT)
Researcher fascinated by graphs and machine learning.
Marinos Poiitis (Aristotle University of Thessaloniki)
Stefanie Jegelka (MIT)
Stefanie Jegelka is an X-Consortium Career Development Assistant Professor in the Department of EECS at MIT. She is a member of the Computer Science and AI Lab (CSAIL) and the Center for Statistics, and an affiliate of the Institute for Data, Systems and Society and the Operations Research Center. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, and a Best Paper Award at the International Conference on Machine Learning (ICML). Her research interests span the theory and practice of algorithmic machine learning.
More from the Same Authors
- 2021 Spotlight: Measuring Generalization with Optimal Transport
  Ching-Yao Chuang · Youssef Mroueh · Kristjan Greenewald · Antonio Torralba · Stefanie Jegelka
- 2021: Invited talk 1
  Stefanie Jegelka
- 2021 Poster: Scaling up Continuous-Time Markov Chains Helps Resolve Underspecification
  Alkis Gotovos · Rebekka Burkholz · John Quackenbush · Stefanie Jegelka
- 2021 Poster: Can contrastive learning avoid shortcut solutions?
  Joshua Robinson · Li Sun · Ke Yu · Kayhan Batmanghelich · Stefanie Jegelka · Suvrit Sra
- 2021 Poster: SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning
  Mattia Atzeni · Jasmina Bogojeska · Andreas Loukas
- 2021 Poster: Partition and Code: learning how to compress graphs
  Giorgos Bouritsas · Andreas Loukas · Nikolaos Karalias · Michael Bronstein
- 2021 Poster: Measuring Generalization with Optimal Transport
  Ching-Yao Chuang · Youssef Mroueh · Kristjan Greenewald · Antonio Torralba · Stefanie Jegelka
- 2020 Poster: How hard is to distinguish graphs with graph neural networks?
  Andreas Loukas
- 2020 Poster: Building powerful and equivariant graph neural networks with structural message-passing
  Clément Vignac · Andreas Loukas · Pascal Frossard
- 2020 Poster: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
  Nikolaos Karalias · Andreas Loukas
- 2020 Poster: Testing Determinantal Point Processes
  Khashayar Gatmiry · Maryam Aliakbarpour · Stefanie Jegelka
- 2020 Spotlight: Testing Determinantal Point Processes
  Khashayar Gatmiry · Maryam Aliakbarpour · Stefanie Jegelka
- 2020 Oral: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
  Nikolaos Karalias · Andreas Loukas
- 2020 Poster: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method
  Yossi Arjevani · Joan Bruna · Bugra Can · Mert Gurbuzbalaban · Stefanie Jegelka · Hongzhou Lin
- 2020 Spotlight: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method
  Yossi Arjevani · Joan Bruna · Bugra Can · Mert Gurbuzbalaban · Stefanie Jegelka · Hongzhou Lin
- 2019: Invited Talk - Stefanie Jegelka - Set Representations in Graph Neural Networks and Reasoning
  Stefanie Jegelka
- 2019: Poster Session
  Lili Yu · Aleksei Kroshnin · Alex Delalande · Andrew Carr · Anthony Tompkins · Aram-Alexandre Pooladian · Arnaud Robert · Ashok Vardhan Makkuva · Aude Genevay · Bangjie Liu · Bo Zeng · Charlie Frogner · Elsa Cazelles · Esteban G Tabak · Fabio Ramos · François-Pierre PATY · Georgios Balikas · Giulio Trigila · Hao Wang · Hinrich Mahler · Jared Nielsen · Karim Lounici · Kyle Swanson · Mukul Bhutani · Pierre Bréchet · Piotr Indyk · samuel cohen · Stefanie Jegelka · Tao Wu · Thibault Sejourne · Tudor Manole · Wenjun Zhao · Wenlin Wang · Wenqi Wang · Yonatan Dukler · Zihao Wang · Chaosheng Dong
- 2019: Stefanie Jegelka
  Stefanie Jegelka
- 2019 Poster: Distributionally Robust Optimization and Generalization in Kernel Methods
  Matt Staib · Stefanie Jegelka
- 2019 Poster: Flexible Modeling of Diversity with Strongly Log-Concave Distributions
  Joshua Robinson · Suvrit Sra · Stefanie Jegelka
- 2018 Poster: ResNet with one-neuron hidden layers is a Universal Approximator
  Hongzhou Lin · Stefanie Jegelka
- 2018 Spotlight: ResNet with one-neuron hidden layers is a Universal Approximator
  Hongzhou Lin · Stefanie Jegelka
- 2018 Poster: Provable Variational Inference for Constrained Log-Submodular Models
  Josip Djolonga · Stefanie Jegelka · Andreas Krause
- 2018 Poster: Adversarially Robust Optimization with Gaussian Processes
  Ilija Bogunovic · Jonathan Scarlett · Stefanie Jegelka · Volkan Cevher
- 2018 Spotlight: Adversarially Robust Optimization with Gaussian Processes
  Ilija Bogunovic · Jonathan Scarlett · Stefanie Jegelka · Volkan Cevher
- 2018 Poster: Exponentiated Strongly Rayleigh Distributions
  Zelda Mariet · Suvrit Sra · Stefanie Jegelka
- 2018 Tutorial: Negative Dependence, Stable Polynomials, and All That
  Suvrit Sra · Stefanie Jegelka
- 2017: Invited talk: Scaling Bayesian Optimization in High Dimensions
  Stefanie Jegelka
- 2017 Workshop: Discrete Structures in Machine Learning
  Yaron Singer · Jeff A Bilmes · Andreas Krause · Stefanie Jegelka · Amin Karbasi
- 2017 Poster: Parallel Streaming Wasserstein Barycenters
  Matt Staib · Sebastian Claici · Justin Solomon · Stefanie Jegelka
- 2017 Poster: Polynomial time algorithms for dual volume sampling
  Chengtao Li · Stefanie Jegelka · Suvrit Sra
- 2016: Submodular Optimization and Nonconvexity
  Stefanie Jegelka
- 2016 Workshop: Nonconvex Optimization for Machine Learning: Theory and Practice
  Hossein Mobahi · Anima Anandkumar · Percy Liang · Stefanie Jegelka · Anna Choromanska
- 2016 Poster: Fast Mixing Markov Chains for Strongly Rayleigh Measures, DPPs, and Constrained Sampling
  Chengtao Li · Suvrit Sra · Stefanie Jegelka
- 2016 Poster: Cooperative Graphical Models
  Josip Djolonga · Stefanie Jegelka · Sebastian Tschiatschek · Andreas Krause
- 2014 Workshop: Discrete Optimization in Machine Learning
  Jeffrey A Bilmes · Andreas Krause · Stefanie Jegelka · S Thomas McCormick · Sebastian Nowozin · Yaron Singer · Dhruv Batra · Volkan Cevher
- 2014 Poster: Parallel Double Greedy Submodular Maximization
  Xinghao Pan · Stefanie Jegelka · Joseph Gonzalez · Joseph K Bradley · Michael Jordan
- 2014 Poster: Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets
  Adarsh Prasad · Stefanie Jegelka · Dhruv Batra
- 2014 Spotlight: Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets
  Adarsh Prasad · Stefanie Jegelka · Dhruv Batra
- 2014 Poster: On the Convergence Rate of Decomposable Submodular Function Minimization
  Robert Nishihara · Stefanie Jegelka · Michael Jordan
- 2014 Poster: Weakly-supervised Discovery of Visual Pattern Configurations
  Hyun Oh Song · Yong Jae Lee · Stefanie Jegelka · Trevor Darrell
- 2013 Workshop: Discrete Optimization in Machine Learning: Connecting Theory and Practice
  Stefanie Jegelka · Andreas Krause · Pradeep Ravikumar · Kazuo Murota · Jeffrey A Bilmes · Yisong Yue · Michael Jordan
- 2013 Poster: Optimistic Concurrency Control for Distributed Unsupervised Learning
  Xinghao Pan · Joseph Gonzalez · Stefanie Jegelka · Tamara Broderick · Michael Jordan
- 2013 Poster: Reflection methods for user-friendly submodular optimization
  Stefanie Jegelka · Francis Bach · Suvrit Sra
- 2013 Poster: Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions
  Rishabh K Iyer · Stefanie Jegelka · Jeffrey A Bilmes
- 2012 Workshop: Discrete Optimization in Machine Learning (DISCML): Structure and Scalability
  Stefanie Jegelka · Andreas Krause · Jeffrey A Bilmes · Pradeep Ravikumar
- 2011 Poster: Fast approximate submodular minimization
  Stefanie Jegelka · Hui Lin · Jeffrey A Bilmes
- 2010 Workshop: Discrete Optimization in Machine Learning: Structures, Algorithms and Applications
  Andreas Krause · Pradeep Ravikumar · Jeffrey A Bilmes · Stefanie Jegelka