

Poster

Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension

Kedar Karhadkar · Michael Murray · Guido Montufar

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Bounds on the smallest eigenvalue of the neural tangent kernel (NTK) are a key ingredient in the analysis of neural network optimization and memorization. However, existing results are limited to a high-dimensional data setting where the input dimension $d_0$ scales at least logarithmically in the number of samples $n$. In this work we provide results that relax this requirement, allowing us to handle even the classical statistical setting in which $d_0$ is held constant relative to $n$. In particular, we show for a randomly initialized ReLU network that with high probability the smallest eigenvalue of the corresponding NTK will be $\tilde{\Omega}(n^{-c/(d_0-1)})$ for a constant $c > 0$ that depends on whether the network is deep or shallow. Furthermore, we show for shallow ReLU networks that $d_0 = \Omega(\log(n))$ is both sufficient and necessary for the smallest eigenvalue of the NTK to be $\tilde{\Omega}(1)$ with high probability. We prove our results through a novel application of the hemisphere transform.
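To make the quantity being bounded concrete, the following minimal sketch (not part of the paper) forms the infinite-width NTK Gram matrix of a two-layer ReLU network on $n$ points drawn uniformly from the sphere $S^{d_0-1}$ and reports its smallest eigenvalue. It uses the standard arc-cosine closed form for the shallow ReLU NTK; the normalization and parametrization here are assumptions for illustration and may differ from the paper's exact setting.

```python
import numpy as np

def shallow_relu_ntk(X):
    """Infinite-width NTK Gram matrix of a two-layer ReLU network
    evaluated on the rows of X (assumed unit-norm, i.e. on the sphere)."""
    G = np.clip(X @ X.T, -1.0, 1.0)        # pairwise cosines u = <x_i, x_j>
    theta = np.arccos(G)
    kappa0 = (np.pi - theta) / (2 * np.pi)                            # E[relu'(w.x) relu'(w.x')]
    kappa1 = (np.sqrt(1.0 - G**2) + G * (np.pi - theta)) / (2 * np.pi)  # E[relu(w.x) relu(w.x')]
    return G * kappa0 + kappa1

rng = np.random.default_rng(0)
n, d0 = 200, 3                                  # low-dimensional regime: d0 fixed while n grows
X = rng.standard_normal((n, d0))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # project the data onto the unit sphere S^{d0-1}

K = shallow_relu_ntk(X)
lam_min = np.linalg.eigvalsh(K)[0]              # eigvalsh returns eigenvalues in ascending order
print(f"smallest NTK eigenvalue with n={n}, d0={d0}: {lam_min:.3e}")
```

Rerunning the sketch with larger $n$ at fixed $d_0$ gives a rough numerical sense of the polynomial decay $\tilde{\Omega}(n^{-c/(d_0-1)})$ discussed in the abstract, though the sketch itself proves nothing.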
