Poster
A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks
El Mehdi Achour · Armand Foucault · Sébastien Gerchinovitz · François Malgouyres
We study the fundamental limits to the expressive power of neural networks. Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$. The lower bound depends on the packing number of $F$, the range of $F$, and the fat-shattering dimension of $G$. We then instantiate this bound to the case where $G$ corresponds to a piecewise-polynomial feedforward neural network, and describe in detail the application to two sets $F$: Hölder balls and multivariate monotonic functions. Besides matching (known or new) upper bounds up to log factors, our lower bounds shed light on the similarities and differences between approximation in $L^p$ norm and in sup norm, resolving an open question raised by DeVore et al. (2021). Our proof strategy differs from the sup norm case and uses a key probability result of Mendelson (2002).
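For context, the two complexity measures appearing in the lower bound are standard; the following are their usual textbook definitions, not statements taken from the paper itself. The $\varepsilon$-packing number of $F$ in $L^p(\mu)$ is the largest number $M$ of functions $f_1, \dots, f_M \in F$ that are pairwise $\varepsilon$-separated,
\[
\|f_i - f_j\|_{L^p(\mu)} = \Big( \int |f_i - f_j|^p \, d\mu \Big)^{1/p} > \varepsilon \quad \text{for all } i \neq j,
\]
and the fat-shattering dimension $\mathrm{fat}_\gamma(G)$ at scale $\gamma > 0$ is the largest $n$ for which there exist points $x_1, \dots, x_n$ and thresholds $r_1, \dots, r_n \in \mathbb{R}$ such that, for every sign pattern $(\sigma_1, \dots, \sigma_n) \in \{-1, +1\}^n$, some $g \in G$ satisfies $\sigma_i \big( g(x_i) - r_i \big) \geq \gamma$ for all $i$. Informally, a bound of this type says that if $F$ contains many well-separated functions while $G$ can only shatter few points at every scale $\gamma$, then some $f \in F$ must remain far in $L^p(\mu)$ norm from every $g \in G$.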
Author Information
El Mehdi Achour (Université Paul Sabatier Toulouse III)
Armand Foucault (Université de Toulouse)
Sébastien Gerchinovitz (IRT Saint Exupéry)
François Malgouyres (Université Toulouse Paul Sabatier, Institut de Mathématiques de Toulouse)
More from the Same Authors
- 2022 Poster: Local Identifiability of Deep ReLU Neural Networks: the Theory
  Joachim Bona-Pellissier · François Malgouyres · Francois Bachoc
- 2021 Poster: Instance-Dependent Bounds for Zeroth-order Lipschitz Optimization with Error Certificates
  Francois Bachoc · Tom Cesari · Sébastien Gerchinovitz
- 2021 Poster: Numerical influence of ReLU’(0) on backpropagation
  David Bertoin · Jérôme Bolte · Sébastien Gerchinovitz · Edouard Pauwels
- 2016 Poster: Refined Lower Bounds for Adversarial Bandits
  Sébastien Gerchinovitz · Tor Lattimore