Poster
in
Workshop: Heavy Tails in ML: Structure, Stability, Dynamics

Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility

Hoil Lee · Fadhel Ayed · Paul Jung · Juho Lee · Hongseok Yang · Francois Caron

Keywords: [ regular variation ] [ infinite-width ] [ infinite divisibility ] [ triangular arrays ] [ compressibility ] [ Gaussian process ] [ deep neural network ] [ sparsity ] [ pruning ]


Abstract:

This work studies the infinite-width limit of deep feedforward neural networks whose weights are dependent, and modelled via a mixture of Gaussian distributions. Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals. If the scalar parameters are strictly positive and the Lévy measures are trivial at all hidden layers, then one recovers the classical Gaussian process (GP) limit, obtained with iid Gaussian weights. More interestingly, if the Lévy measure of at least one layer is non-trivial, we obtain a mixture of Gaussian processes (MoGP) in the large-width limit. The behaviour of the neural network in this regime is very different from the GP regime. One obtains correlated outputs with non-Gaussian distributions, possibly with heavy tails. We illustrate some of the benefits of the MoGP regime over the GP regime in terms of representation learning and compressibility on simulated, MNIST and Fashion MNIST datasets.
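The sketch below is a minimal, illustrative NumPy simulation of the kind of dependent-weight model the abstract describes: weights that are conditionally Gaussian given a shared per-layer random scale, i.e. a Gaussian scale mixture. The particular mixing distribution (an inverse-gamma variance), the network widths, and the kurtosis comparison are assumptions chosen purely for illustration; the paper's actual construction works with triangular arrays and Lévy measures in the infinite-width limit.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, widths, scale_sampler):
    """Forward pass of a ReLU network whose weights are N(0, s^2 / fan_in),
    with a layer-wide random scale s drawn from scale_sampler().
    Sharing s across a layer makes the weights dependent (a Gaussian mixture)."""
    h = x
    n_layers = len(widths) - 1
    for l, (n_in, n_out) in enumerate(zip(widths[:-1], widths[1:])):
        s = scale_sampler()  # one scale per layer -> conditionally iid Gaussian weights
        W = rng.normal(0.0, s / np.sqrt(n_in), size=(n_in, n_out))
        h = W.T @ h
        if l < n_layers - 1:          # ReLU on hidden layers, linear readout
            h = np.maximum(h, 0.0)
    return h[0]

widths = [2, 500, 500, 1]             # wide hidden layers mimic the infinite-width limit
x = np.array([1.0, -0.5])
n_draws = 500

# GP-like regime: deterministic unit scale (trivial mixing distribution).
gp = np.array([forward(x, widths, lambda: 1.0) for _ in range(n_draws)])

# MoGP-like regime: heavy-tailed random scale (here an inverse-gamma variance,
# an illustrative assumption; the paper allows general Lévy measures).
def heavy_scale():
    return np.sqrt(1.0 / rng.gamma(shape=1.5, scale=1.0))

mogp = np.array([forward(x, widths, heavy_scale) for _ in range(n_draws)])

def excess_kurtosis(z):
    z = z - z.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2 - 3.0

print("excess kurtosis, iid Gaussian weights :", excess_kurtosis(gp))
print("excess kurtosis, scale-mixture weights:", excess_kurtosis(mogp))
```

Under this toy setup, the output distribution at a fixed input stays approximately Gaussian when the scales are deterministic, while a heavy-tailed random scale produces markedly positive excess kurtosis, mirroring the GP versus MoGP distinction drawn in the abstract.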
