Poster

On the impact of activation and normalization in obtaining isometric embeddings at initialization

Amir Joudaki · Hadi Daneshmand · Francis Bach

Great Hall & Hall B1+B2 (level 1) #908
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

In this paper, we explore the structure of the penultimate Gram matrix in deep neural networks, which contains the pairwise inner products of the outputs corresponding to a batch of inputs. In several architectures, this Gram matrix has been observed to degenerate with depth at initialization, which dramatically slows training. Normalization layers, such as batch or layer normalization, play a pivotal role in preventing this rank collapse. Despite promising advances, existing theoretical results do not extend to layer normalization, which is widely used in transformers, and cannot quantitatively characterize the role of non-linear activations. To bridge this gap, we prove that layer normalization, in conjunction with activation layers, biases the Gram matrix of a multilayer perceptron towards the identity matrix at an exponential rate in depth at initialization. We quantify this rate using the Hermite expansion of the activation function.
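
The following is a minimal numerical sketch (not the authors' code) of the phenomenon the abstract describes: it tracks the Gram matrix of a batch of inputs through a randomly initialized multilayer perceptron, with and without layer normalization after each activation. The width, depth, batch size, tanh activation, and i.i.d. Gaussian weights scaled by 1/sqrt(width) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, batch = 512, 50, 8


def layer_norm(H):
    # Normalize each row (one sample) to zero mean and unit variance.
    H = H - H.mean(axis=1, keepdims=True)
    return H / H.std(axis=1, keepdims=True)


def gram(H):
    # Pairwise inner products of the batch, scaled by the layer width.
    return H @ H.T / H.shape[1]


def distance_to_identity(use_layer_norm):
    H = rng.standard_normal((batch, width))
    for _ in range(depth):
        W = rng.standard_normal((width, width)) / np.sqrt(width)
        H = np.tanh(H @ W)          # activation layer (tanh as an example)
        if use_layer_norm:
            H = layer_norm(H)
    G = gram(H)
    G = G / np.diag(G).mean()       # rescale so the diagonal is ~1
    return np.linalg.norm(G - np.eye(batch))


print("no normalization: ||G - I|| =", distance_to_identity(False))
print("layer norm:       ||G - I|| =", distance_to_identity(True))
```

In the unnormalized network the off-diagonal entries of the Gram matrix drift toward the diagonal with depth (rank collapse), while with layer normalization the Gram matrix stays close to the identity, consistent with the isometry result stated in the abstract.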
