

Poster

Understanding and Minimising Outlier Features in Neural Network Training

Bobby He · Lorenzo Noci · Daniele Paliotta · Imanol Schlag · Thomas Hofmann

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Outlier Features (OF) are neurons whose activation magnitudes significantly exceed the average over a neural network's (NN) width. They are well known to emerge during standard transformer training and have the undesirable effect of hindering quantisation in afflicted models. Despite their practical importance, little is known about why OFs emerge during training, nor how one can minimise them. Our work focuses on the above questions, first identifying several quantitative metrics, such as the kurtosis over neuron activation norms, to measure OFs. With these metrics, we study how architectural and optimisation choices influence OFs, and provide practical insights to minimise OFs during training. As highlights, we emphasise the importance of controlling signal propagation throughout training, and propose the Outlier Protected transformer block, which removes standard Pre-Norm layers to mitigate OFs, without loss of convergence speed or training stability. Overall, our findings shed new light on our understanding of, our ability to prevent, and the complexity of this important facet of NN training dynamics.
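To illustrate the kind of metric the abstract mentions, here is a minimal sketch of a kurtosis-over-neuron-activation-norms measurement. It assumes the metric is the standard (fourth-moment) kurtosis of per-neuron RMS activations taken across the width dimension; the paper's exact definition and normalisation may differ, and the tensor shapes and function name are illustrative only.

```python
import torch


def activation_norm_kurtosis(acts: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Kurtosis of per-neuron activation norms across a layer's width.

    acts: activations of one layer, flattened to shape (num_tokens, width).
    Returns a scalar; large values suggest a few neurons (outlier features)
    dominate the activation magnitudes. This is an illustrative sketch, not
    the paper's exact metric.
    """
    # RMS activation magnitude of each neuron, aggregated over tokens -> (width,)
    neuron_norms = acts.pow(2).mean(dim=0).sqrt()

    # Standard kurtosis of these norms across the width dimension.
    centred = neuron_norms - neuron_norms.mean()
    var = centred.pow(2).mean()
    kurtosis = centred.pow(4).mean() / (var.pow(2) + eps)
    return kurtosis  # roughly 3 for Gaussian-like norms; much larger with outliers


# Usage example on random activations with one artificially inflated neuron:
x = torch.randn(4096, 768)
x[:, 0] *= 50.0  # simulate an outlier feature
print(activation_norm_kurtosis(x))
```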
