Poster
Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization
Simone Bombari · Mohammad Hossein Amani · Marco Mondelli
The Neural Tangent Kernel (NTK) has emerged as a powerful tool to provide memorization, optimization and generalization guarantees in deep neural networks. A line of work has studied the NTK spectrum for two-layer and deep networks with at least one layer of $\Omega(N)$ neurons, $N$ being the number of training samples. Furthermore, there is increasing evidence suggesting that deep networks with sub-linear layer widths are powerful memorizers and optimizers, as long as the number of parameters exceeds the number of samples. Thus, a natural open question is whether the NTK is well conditioned in such a challenging sub-linear setup. In this paper, we answer this question in the affirmative. Our key technical contribution is a lower bound on the smallest NTK eigenvalue for deep networks with the minimum possible over-parameterization: up to logarithmic factors, the number of parameters is $\Omega(N)$ and, hence, the number of neurons is as few as $\Omega(\sqrt{N})$. To showcase the applicability of our NTK bounds, we provide two results concerning memorization capacity and optimization guarantees for gradient descent training.
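The quantity studied in the abstract can be checked empirically at toy scale. The sketch below (a minimal illustration, not the paper's construction; the network, sizes, and scaling are assumptions) builds the Jacobian of a two-layer ReLU network with respect to all parameters, forms the empirical NTK Gram matrix $K = J J^\top$, and reports its smallest eigenvalue, which is generically positive once the parameter count exceeds the sample count $N$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 20, 10, 30            # samples, input dim, hidden width (toy sizes)
X = rng.normal(size=(N, d)) / np.sqrt(d)   # roughly unit-norm inputs
W = rng.normal(size=(m, d))     # first-layer weights
v = rng.normal(size=m)          # second-layer weights

# f(x) = v^T relu(W x) / sqrt(m); Jacobian w.r.t. the parameters (W, v)
pre = X @ W.T                   # (N, m) pre-activations
act = np.maximum(pre, 0.0)      # ReLU activations
ind = (pre > 0).astype(float)   # ReLU derivative

# d f / d W_{jk} = v_j * 1{pre_j > 0} * x_k / sqrt(m)
J_W = (v * ind)[:, :, None] * X[:, None, :] / np.sqrt(m)   # (N, m, d)
# d f / d v_j = relu(pre_j) / sqrt(m)
J_v = act / np.sqrt(m)                                     # (N, m)
J = np.concatenate([J_W.reshape(N, -1), J_v], axis=1)      # (N, m*d + m)

K = J @ J.T                     # empirical NTK Gram matrix, (N, N)
lam_min = np.linalg.eigvalsh(K)[0]
print(f"params = {J.shape[1]}, N = {N}, lambda_min(K) = {lam_min:.4f}")
```

Here $m d + m = 330 > N = 20$, so the Gram matrix is full rank for generic data; the paper's contribution is a quantitative lower bound on this eigenvalue in the much harder regime where the parameter count is only $\Omega(N)$ up to logarithmic factors.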
Author Information
Simone Bombari (IST Austria)
Mohammad Hossein Amani (Institute of Science and Technology Austria)
Marco Mondelli (IST Austria)
More from the Same Authors
- 2022 : Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence
  Diyuan Wu · Vyacheslav Kungurtsev · Marco Mondelli
- 2023 Poster: Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model
  Peter Súkeník · Marco Mondelli · Christoph Lampert
- 2022 : Poster Session 1
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2022 Poster: The price of ignorance: how much does it cost to forget noise structure in low-rank matrix estimation?
  Jean Barbier · TianQi Hou · Marco Mondelli · Manuel Saenz
- 2021 Poster: When Are Solutions Connected in Deep Networks?
  Quynh Nguyen · Pierre Bréchet · Marco Mondelli
- 2021 Poster: PCA Initialization for Approximate Message Passing in Rotationally Invariant Models
  Marco Mondelli · Ramji Venkataramanan
- 2020 Poster: Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology
  Quynh Nguyen · Marco Mondelli