Poster
Powerpropagation: A sparsity inducing weight reparameterisation
Jonathan Schwarz · Siddhant M Jayakumar · Razvan Pascanu · Peter E Latham · Yee Teh

Thu Dec 09 12:30 AM -- 02:00 AM (PST)

The training of sparse neural networks is becoming an increasingly important tool for reducing the computational footprint of models at training and evaluation time, as well as for enabling the effective scaling up of models. Whereas much work over the years has been dedicated to specialised pruning techniques, little attention has been paid to the inherent effect of gradient-based training on model sparsity. In this work, we introduce Powerpropagation, a new weight-parameterisation for neural networks that leads to inherently sparse models. Exploiting the behaviour of gradient descent, our method gives rise to weight updates exhibiting a “rich get richer” dynamic, leaving low-magnitude parameters largely unaffected by learning. Models trained in this manner exhibit similar performance, but have a weight distribution with markedly higher density at zero, allowing more parameters to be pruned safely. Powerpropagation is general, intuitive, cheap and straightforward to implement, and can readily be combined with various other techniques. To highlight its versatility, we explore it in two very different settings: Firstly, following a recent line of work, we investigate its effect on sparse training for resource-constrained settings. Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark. Secondly, we advocate the use of sparsity in overcoming catastrophic forgetting, where compressed representations allow accommodating a large number of tasks at fixed model capacity. In all cases, our reparameterisation considerably increases the efficacy of the off-the-shelf methods.
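To make the reparameterisation idea concrete, below is a minimal sketch of a Powerpropagation-style linear layer, assuming the sign-preserving power form w = θ·|θ|^(α−1) with α ≥ 1 that the paper builds on. This is illustrative only, not the authors' reference implementation: the class name, the default α = 2.0, and the initialisation are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PowerpropLinear(nn.Module):
    """Linear layer whose effective weight is w = theta * |theta|**(alpha - 1).

    By the chain rule, dL/dtheta = dL/dw * alpha * |theta|**(alpha - 1),
    so for alpha > 1 low-magnitude parameters receive proportionally
    smaller updates ("rich get richer"), concentrating the weight
    distribution's mass at zero. alpha = 1 recovers a standard layer.
    """

    def __init__(self, in_features: int, out_features: int, alpha: float = 2.0):
        super().__init__()
        self.alpha = alpha  # illustrative default; the paper treats alpha as a hyperparameter
        self.theta = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.theta)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sign-preserving power reparameterisation of the weights.
        w = self.theta * self.theta.abs().pow(self.alpha - 1)
        return F.linear(x, w, self.bias)
```

After training such a layer, magnitude pruning would be applied to the effective weights w rather than to θ; since more of the mass of w sits at zero, more parameters can be removed at a given performance level.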

Author Information

Jonathan Schwarz (DeepMind & Gatsby Unit, UCL)
Sid M Jayakumar (Google DeepMind)
Razvan Pascanu (Google DeepMind)
Peter E Latham (Gatsby Unit, UCL)
Yee Teh (DeepMind)
