We identify and study two common failure modes for early training in deep ReLU nets. For each, we give a rigorous proof of when it occurs and how to avoid it, for fully connected, convolutional, and residual architectures. We show that the first failure mode, exploding or vanishing mean activation length, can be avoided by initializing weights from a symmetric distribution with variance 2/fan-in and, for ResNets, by correctly scaling the residual modules. We prove that the second failure mode, exponentially large variance of activation length, never occurs in residual nets once the first failure mode is avoided. In contrast, for fully connected nets, we prove that this failure mode can happen and is avoided by keeping constant the sum of the reciprocals of layer widths. We demonstrate empirically the effectiveness of our theoretical results in predicting when networks are able to start training. In particular, we note that many popular initializations fail our criteria, whereas correct initialization and architecture allow much deeper networks to be trained.
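As a minimal illustration of the first criterion (a sketch, not code from the paper), the following NumPy snippet initializes a fully connected ReLU net from a symmetric distribution with variance 2/fan-in and checks that the per-neuron mean squared activation is preserved across layers; the layer widths and function name are hypothetical choices for the example.

import numpy as np

def he_init(fan_in, fan_out, rng):
    # Symmetric (Gaussian) weights with variance 2/fan-in, the criterion
    # from the abstract for avoiding exploding/vanishing mean activation
    # length in ReLU nets (He et al.-style initialization).
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

rng = np.random.default_rng(0)
# Hypothetical widths; the second failure mode is controlled by keeping
# the sum of reciprocals 1/400 + 1/400 + ... of hidden widths small.
widths = [784, 400, 400, 400, 10]

x = rng.normal(size=(1000, widths[0]))
print(np.mean(x**2))  # per-neuron mean squared activation at the input
for fan_in, fan_out in zip(widths[:-1], widths[1:]):
    W = he_init(fan_in, fan_out, rng)
    x = np.maximum(x @ W, 0.0)  # ReLU
print(np.mean(x**2))  # stays close to the input value under 2/fan-in init

Replacing the 2/fan-in variance with, say, 1/fan-in makes the final printed value shrink exponentially with depth, matching the first failure mode described above.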
Boris Hanin (Texas A&M)
David Rolnick (University of Pennsylvania)