

Poster

On the Complexity of Learning Neural Networks

Le Song · Santosh Vempala · John Wilmes · Bo Xie

Pacific Ballroom #206

Keywords: [ Computational Complexity ] [ Efficient Training Methods ] [ Hardness of Learning and Approximations ] [ Regression ] [ Representation Learning ] [ Deep Learning ] [ Learning Theory ]


Abstract:

The stunning empirical successes of neural networks currently lack rigorous theoretical explanation. What form would such an explanation take, in the face of existing complexity-theoretic lower bounds? A first step might be to show that data generated by neural networks with a single hidden layer, smooth activation functions and benign input distributions can be learned efficiently. We demonstrate here a comprehensive lower bound ruling out this possibility: for a wide class of activation functions (including all those currently used in practice), and inputs drawn from any logconcave distribution, there is a family of one-hidden-layer functions, each with a sum gate as output, that are hard to learn in a precise sense: any statistical query algorithm (which includes all known variants of stochastic gradient descent with any loss function) needs an exponential number of queries, even using tolerance inversely proportional to the input dimensionality. Moreover, this hard family of functions is realizable with a small (sublinear in dimension) number of activation units in the single hidden layer. The lower bound is also robust to small perturbations of the true weights. Systematic experiments illustrate a phase transition in the training error as predicted by the analysis.
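To make the setting concrete, the following is a minimal sketch, assuming a generic teacher network rather than the paper's specific hard family: a one-hidden-layer teacher with sigmoid activation units and a sum-gate output, inputs drawn from a Gaussian (hence logconcave) distribution, and a student of the same architecture trained by plain SGD on squared loss. All dimensions, learning rates, and names here are illustrative choices, not parameters from the paper, and the phase-transition experiment itself is not reproduced.

```python
# Sketch of the learning setup described in the abstract (illustrative only;
# not the paper's hard construction). Teacher: f(x) = sum_i sigma(w_i . x),
# i.e. k smooth activation units feeding a sum gate. Inputs: standard Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, k = 50, 8                              # input dimension; k hidden units (sublinear in d)
W_teacher = rng.normal(size=(k, d)) / np.sqrt(d)

def teacher(X):
    # Sum gate over k sigmoid activation units.
    return sigmoid(X @ W_teacher.T).sum(axis=1)

# Student: same architecture, random initialization, trained with SGD on squared loss.
W = rng.normal(size=(k, d)) / np.sqrt(d)
lr, batch, steps = 0.1, 64, 5000

for _ in range(steps):
    X = rng.normal(size=(batch, d))       # logconcave (Gaussian) input distribution
    y = teacher(X)
    H = sigmoid(X @ W.T)                  # hidden activations, shape (batch, k)
    err = H.sum(axis=1) - y               # d/dpred of 0.5 * (pred - y)^2
    # Backpropagate through the sum gate and the sigmoid units.
    grad_W = ((err[:, None] * H * (1 - H)).T @ X) / batch
    W -= lr * grad_W

X_test = rng.normal(size=(2000, d))
mse = np.mean((sigmoid(X_test @ W.T).sum(axis=1) - teacher(X_test)) ** 2)
print(f"test MSE after training: {mse:.4f}")
```

For a random teacher as above, SGD typically drives the error down; the paper's result is that for its specially constructed weight family, any such statistical-query-style training procedure requires exponentially many queries regardless of the loss function used.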
