Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem through the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the "persistent homology dimension" (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD at the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error.
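To make the PHD idea more concrete, here is a minimal sketch (not the paper's exact procedure) of a 0-dimensional persistent-homology dimension estimator for a point cloud, such as a set of optimizer iterates. It relies on the standard fact that the total 0-dimensional persistence of a Vietoris-Rips filtration equals the total edge length of a Euclidean minimum spanning tree; the function names, subsample sizes, and the choice alpha = 1 are illustrative assumptions.

```python
# Hedged sketch of a PH0-based intrinsic dimension estimate.
# Assumption: total 0-dim persistence of a point cloud equals the total
# Euclidean MST edge length, and E_alpha(n) ~ C * n^((d - alpha) / d),
# so a log-log regression over subsample sizes recovers d.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform


def total_persistence(points: np.ndarray, alpha: float = 1.0) -> float:
    """Sum of alpha-powers of 0-dim persistence lifetimes (= MST edge lengths)."""
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists).toarray()
    return float(np.sum(mst[mst > 0] ** alpha))


def ph_dim(points: np.ndarray, alpha: float = 1.0,
           sample_sizes=(200, 400, 800, 1600), seed: int = 0) -> float:
    """Fit log E_alpha(n) against log n; the PH-dim estimate is alpha / (1 - slope)."""
    rng = np.random.default_rng(seed)
    log_n, log_e = [], []
    for n in sample_sizes:
        n = min(n, len(points))
        idx = rng.choice(len(points), size=n, replace=False)
        log_n.append(np.log(n))
        log_e.append(np.log(total_persistence(points[idx], alpha)))
    slope, _ = np.polyfit(log_n, log_e, 1)
    return alpha / (1.0 - slope)


if __name__ == "__main__":
    # Points drawn from a 2-D subspace embedded in 100-D ambient space:
    # the estimate should come out close to 2, far below the ambient dimension.
    rng = np.random.default_rng(1)
    basis = rng.standard_normal((2, 100))
    cloud = rng.standard_normal((2000, 2)) @ basis
    print(f"estimated PH-dim: {ph_dim(cloud):.2f}")
```

The log-log regression is what keeps the estimate tractable in high ambient dimension: only pairwise distances and MST lengths of small subsamples are needed, never an explicit covering of the parameter space.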
Author Information
Tolga Birdal (Stanford University)
Aaron Lou (Cornell University)
Leonidas Guibas (Stanford University)
Umut Simsekli (Inria Paris / ENS)
More from the Same Authors
- 2021 Spotlight: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms »
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2022 : Breaking the Symmetry: Resolving Symmetry Ambiguities in Equivariant Neural Networks »
  Sidhika Balachandar · Adrien Poulenard · Congyue Deng · Leonidas Guibas
- 2022 Poster: NeuForm: Adaptive Overfitting for Neural Shape Editing »
  Connor Lin · Niloy Mitra · Gordon Wetzstein · Leonidas Guibas · Paul Guerrero
- 2022 Poster: Object Scene Representation Transformer »
  Mehdi S. M. Sajjadi · Daniel Duckworth · Aravindh Mahendran · Sjoerd van Steenkiste · Filip Pavetic · Mario Lucic · Leonidas Guibas · Klaus Greff · Thomas Kipf
- 2021 Poster: Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks »
  Melih Barsbey · Milad Sefidgaran · Murat Erdogdu · Gaël Richard · Umut Simsekli
- 2021 Poster: Leveraging SE(3) Equivariance for Self-supervised Category-Level Object Pose Estimation from Point Clouds »
  Xiaolong Li · Yijia Weng · Li Yi · Leonidas Guibas · A. Abbott · Shuran Song · He Wang
- 2021 Poster: Equivariant Manifold Flows »
  Isay Katsman · Aaron Lou · Derek Lim · Qingxuan Jiang · Ser Nam Lim · Christopher De Sa
- 2021 Poster: Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance »
  Hongjian Wang · Mert Gurbuzbalaban · Lingjiong Zhu · Umut Simsekli · Murat Erdogdu
- 2021 Poster: SketchGen: Generating Constrained CAD Sketches »
  Wamiq Para · Shariq Bhat · Paul Guerrero · Tom Kelly · Niloy Mitra · Leonidas Guibas · Peter Wonka
- 2021 Poster: Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections »
  Kimia Nadjahi · Alain Durmus · Pierre E Jacob · Roland Badeau · Umut Simsekli
- 2021 Poster: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms »
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2020 : QA: Leonidas J. Guibas »
  Leonidas Guibas
- 2020 : Invited Talk: Leonidas J. Guibas »
  Leonidas Guibas
- 2020 : Deep Riemannian Manifold Learning »
  Aaron Lou · Maximilian Nickel · Brandon Amos
- 2020 Poster: Generative 3D Part Assembly via Dynamic Graph Learning »
  jialei huang · Guanqi Zhan · Qingnan Fan · Kaichun Mo · Lin Shao · Baoquan Chen · Leonidas Guibas · Hao Dong
- 2020 Poster: CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations »
  Davis Rempe · Tolga Birdal · Yongheng Zhao · Zan Gojcic · Srinath Sridhar · Leonidas Guibas
- 2020 Poster: Neural Manifold Ordinary Differential Equations »
  Aaron Lou · Derek Lim · Isay Katsman · Leo Huang · Qingxuan Jiang · Ser Nam Lim · Christopher De Sa
- 2020 Poster: ShapeFlow: Learnable Deformation Flows Among 3D Shapes »
  Chiyu Jiang · Jingwei Huang · Andrea Tagliasacchi · Leonidas Guibas
- 2020 Spotlight: ShapeFlow: Learnable Deformation Flows Among 3D Shapes »
  Chiyu Jiang · Jingwei Huang · Andrea Tagliasacchi · Leonidas Guibas
- 2020 Spotlight: CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations »
  Davis Rempe · Tolga Birdal · Yongheng Zhao · Zan Gojcic · Srinath Sridhar · Leonidas Guibas
- 2019 Poster: Multiview Aggregation for Learning Category-Specific Shape Reconstruction »
  Srinath Sridhar · Davis Rempe · Julien Valentin · Bouaziz Sofien · Leonidas Guibas
- 2019 Poster: A Condition Number for Joint Optimization of Cycle-Consistent Networks »
  Leonidas Guibas · Qixing Huang · Zhenxiao Liang
- 2019 Spotlight: A Condition Number for Joint Optimization of Cycle-Consistent Networks »
  Leonidas Guibas · Qixing Huang · Zhenxiao Liang
- 2018 Poster: Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions »
  Minhyuk Sung · Hao Su · Ronald Yu · Leonidas Guibas
- 2017 Poster: Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding »
  Mainak Jas · Tom Dupré la Tour · Umut Simsekli · Alexandre Gramfort
- 2017 Poster: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space »
  Charles Ruizhongtai Qi · Li Yi · Hao Su · Leonidas Guibas
- 2016 Poster: FPNN: Field Probing Neural Networks for 3D Data »
  Yangyan Li · Soeren Pirk · Hao Su · Charles R Qi · Leonidas Guibas
- 2015 Poster: Deep Knowledge Tracing »
  Chris Piech · Jonathan Bassen · Jonathan Huang · Surya Ganguli · Mehran Sahami · Leonidas Guibas · Jascha Sohl-Dickstein
- 2013 Poster: Wavelets on Graphs via Deep Learning »
  Raif Rustamov · Leonidas Guibas
- 2013 Demonstration: Codewebs: a Pedagogical Search Engine for Code Submissions to a MOOC »
  Jonathan Huang · Chris Piech · Andy Nguyen · Leonidas Guibas
- 2007 Oral: Efficient Inference for Distributions on Permutations »
  Jonathan Huang · Carlos Guestrin · Leonidas Guibas
- 2007 Poster: Efficient Inference for Distributions on Permutations »
  Jonathan Huang · Carlos Guestrin · Leonidas Guibas