Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation functions of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance, even in the presence of noise and perturbations. This convergence property of SNNs makes it possible to (1) train deep networks with many layers, (2) employ strong regularization, and (3) make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance; thus, vanishing and exploding gradients are impossible. We compared SNNs with standard FNNs and other machine learning methods such as random forests and support vector machines on (a) 121 tasks from the UCI machine learning repository, (b) drug discovery benchmarks, and (c) astronomy tasks. For FNNs we considered (i) ReLU networks without normalization, (ii) batch normalization, (iii) layer normalization, (iv) weight normalization, (v) highway networks, and (vi) residual networks. SNNs significantly outperformed all competing FNN methods on the 121 UCI tasks, outperformed all competing methods on the Tox21 dataset, and set a new record on an astronomy data set. The winning SNN architectures are often very deep.
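The self-normalizing behavior described in the abstract hinges on the SELU activation. Below is a minimal NumPy sketch (not the authors' reference implementation) that defines SELU with the fixed-point constants alpha ≈ 1.6733 and lambda ≈ 1.0507 from the paper, and empirically checks that activations propagated through many randomly initialized layers stay near zero mean and unit variance. The batch size, layer width, and depth are illustrative choices; weights are drawn with mean 0 and variance 1/n, the initialization the paper pairs with SELUs.

```python
# Minimal sketch: SELU activation and an empirical check of self-normalization.
# Constants are the fixed-point values (alpha_01, lambda_01) from the paper.
import numpy as np

ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    # lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise
    return LAMBDA * np.where(x > 0.0, x, ALPHA * np.expm1(x))

rng = np.random.default_rng(0)
n_units, n_layers, batch = 1000, 32, 512      # illustrative sizes, not from the paper
x = rng.standard_normal((batch, n_units))     # inputs with zero mean, unit variance

for _ in range(n_layers):
    # weights with mean 0 and variance 1/n, as recommended for SNNs
    W = rng.normal(0.0, np.sqrt(1.0 / n_units), size=(n_units, n_units))
    x = selu(x @ W)

# Both statistics should remain close to 0 and 1 despite the depth of the stack.
print(f"mean = {x.mean():+.3f}, variance = {x.var():.3f}")
```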
Author Information
Günter Klambauer (LIT AI Lab / University Linz)
Thomas Unterthiner (LIT AI Lab / University Linz)
Andreas Mayr (LIT AI Lab / University Linz)
Sepp Hochreiter (LIT AI Lab / University Linz)
Head of the LIT AI Lab and Professor of bioinformatics at the University of Linz. First to identify and analyze the vanishing gradient problem, the fundamental deep learning problem, in 1991. First author of the main paper on the now widely used LSTM RNNs. He implemented 'learning how to learn' (meta-learning) networks via LSTM RNNs and applied Deep Learning and RNNs to self-driving cars, sentiment analysis, reinforcement learning, bioinformatics, and medicine.
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Self-Normalizing Neural Networks
  Thu. Dec 7th 02:30 -- 06:30 AM, Room Pacific Ballroom #134
More from the Same Authors
- 2021: Modern Hopfield Networks for Return Decomposition for Delayed Rewards
  Michael Widrich · Markus Hofmarcher · Vihang Patil · Angela Bitto · Sepp Hochreiter
- 2021: Understanding the Effects of Dataset Composition on Offline Reinforcement Learning
  Kajetan Schweighofer · Markus Hofmarcher · Marius-Constantin Dinu · Philipp Renz · Angela Bitto · Vihang Patil · Sepp Hochreiter
- 2020 Poster: Modern Hopfield Networks and Attention for Immune Repertoire Classification
  Michael Widrich · Bernhard Schäfl · Milena Pavlović · Hubert Ramsauer · Lukas Gruber · Markus Holzleitner · Johannes Brandstetter · Geir Kjetil Sandve · Victor Greiff · Sepp Hochreiter · Günter Klambauer
- 2020 Spotlight: Modern Hopfield Networks and Attention for Immune Repertoire Classification
  Michael Widrich · Bernhard Schäfl · Milena Pavlović · Hubert Ramsauer · Lukas Gruber · Markus Holzleitner · Johannes Brandstetter · Geir Kjetil Sandve · Victor Greiff · Sepp Hochreiter · Günter Klambauer
- 2017: Self-Normalizing Neural Networks
  Thomas Unterthiner
- 2017: Invited Talk 3
  Sepp Hochreiter
- 2017: Panel: Machine learning and audio signal processing: State of the art and future perspectives
  Sepp Hochreiter · Bo Li · Karen Livescu · Arindam Mandal · Oriol Nieto · Malcolm Slaney · Hendrik Purwins
- 2017: Poster session 1
  Van-Doan Nguyen · Stephan Eismann · Haozhen Wu · Garrett Goh · Kristina Preuer · Thomas Unterthiner · Matthew Ragoza · Tien-Lam PHAM · Günter Klambauer · Andrea Rocchetto · Maxwell Hutchinson · Qian Yang · Rafael Gomez-Bombarelli · Sheshera Mysore · Brooke Husic · Ryan-Rhys Griffiths · Masashi Tsubaki · Emma Strubell · Philippe Schwaller · Théophile Gaudin · Michael Brenner · Li Li
- 2017 Poster: GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
  Martin Heusel · Hubert Ramsauer · Thomas Unterthiner · Bernhard Nessler · Sepp Hochreiter
- 2016 Symposium: Recurrent Neural Networks and Other Machines that Learn Algorithms
  Jürgen Schmidhuber · Sepp Hochreiter · Alex Graves · Rupesh K Srivastava
- 2015 Poster: Rectified Factor Networks
  Djork-Arné Clevert · Andreas Mayr · Thomas Unterthiner · Sepp Hochreiter
- 2014 Workshop: Second Workshop on Transfer and Multi-Task Learning: Theory meets Practice
  Urun Dogan · Tatiana Tommasi · Yoshua Bengio · Francesco Orabona · Marius Kloft · Andres Munoz · Gunnar Rätsch · Hal Daumé III · Mehryar Mohri · Xuezhi Wang · Daniel Hernández-lobato · Song Liu · Thomas Unterthiner · Pascal Germain · Vinay P Namboodiri · Michael Goetz · Christopher Berlind · Sigurd Spieckermann · Marta Soare · Yujia Li · Vitaly Kuznetsov · Wenzhao Lian · Daniele Calandriello · Emilie Morvant
- 2014 Workshop: Representation and Learning Methods for Complex Outputs
  Richard Zemel · Dale Schuurmans · Kilian Q Weinberger · Yuhong Guo · Jia Deng · Francesco Dinuzzo · Hal Daumé III · Honglak Lee · Noah A Smith · Richard Sutton · Jiaqian YU · Vitaly Kuznetsov · Luke Vilnis · Hanchen Xiong · Calvin Murdock · Thomas Unterthiner · Jean-Francis Roy · Martin Renqiang Min · Hichem SAHBI · Fabio Massimo Zanzotto