Spotlight
ResNet with one-neuron hidden layers is a Universal Approximator
Hongzhou Lin · Stefanie Jegelka

Thu Dec 06 01:45 PM -- 01:50 PM (PST) @ Room 220 E

We demonstrate that a very deep ResNet with stacked modules that have one neuron per hidden layer and ReLU activation functions can uniformly approximate any Lebesgue integrable function in $d$ dimensions, i.e. $\ell_1(\mathbb{R}^d)$. Due to the identity mapping inherent to ResNets, our network has alternating layers of dimension one and $d$. This stands in sharp contrast to fully connected networks, which are not universal approximators if their width is the input dimension $d$ [21, 11]. Hence, our result implies an increase in representational power for narrow deep networks by the ResNet architecture.
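For concreteness, the architecture described in the abstract can be sketched as a stack of residual modules whose hidden layer has a single ReLU neuron, with the skip connection preserving the input dimension d. The PyTorch sketch below is not the authors' code: the block form x + v * relu(u^T x + b), the final scalar readout, and all names (OneNeuronResBlock, OneNeuronResNet) are illustrative assumptions consistent with the abstract's description.

import torch
import torch.nn as nn

class OneNeuronResBlock(nn.Module):
    # One residual module: x -> x + v * relu(u^T x + b).
    # The hidden layer (the output of `down`) has dimension one, while
    # the skip connection keeps the representation at dimension d, so
    # layer widths alternate between 1 and d as in the abstract.
    def __init__(self, d):
        super().__init__()
        self.down = nn.Linear(d, 1)             # d -> 1: the single hidden neuron
        self.up = nn.Linear(1, d, bias=False)   # 1 -> d: back to the input width

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class OneNeuronResNet(nn.Module):
    # A deep stack of one-neuron residual modules; the universality
    # result concerns the regime of many such blocks. The final linear
    # readout to a scalar (an assumption here) matches targets in
    # ell_1(R^d), i.e. functions R^d -> R.
    def __init__(self, d, depth):
        super().__init__()
        self.blocks = nn.Sequential(*[OneNeuronResBlock(d) for _ in range(depth)])
        self.readout = nn.Linear(d, 1)

    def forward(self, x):
        return self.readout(self.blocks(x))

# Usage: a 50-block network on R^2.
net = OneNeuronResNet(d=2, depth=50)
y = net(torch.randn(8, 2))   # batch of 8 points in R^2 -> output shape (8, 1)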

Author Information

Hongzhou Lin (MIT)
Stefanie Jegelka (MIT)

Stefanie Jegelka is an X-Consortium Career Development Assistant Professor in the Department of EECS at MIT. She is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Statistics, and an affiliate of the Institute for Data, Systems, and Society and the Operations Research Center. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and she obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, and a Best Paper Award at the International Conference on Machine Learning (ICML). Her research interests span the theory and practice of algorithmic machine learning.
