Poster
A simplicity bias in the learning dynamics of transformers
Riccardo Rende · Federica Gerace · Alessandro Laio · Sebastian Goldt
East Exhibit Hall A-C #2208
The remarkable capability of over-parameterised neural networks to generalise effectively has been explained by invoking a "simplicity bias": neural networks prevent overfitting by initially learning simple classifiers before progressing to more complex, non-linear functions. While simplicity biases have been described theoretically and experimentally in feed-forward networks for supervised learning, the extent to which they also explain the remarkable success of transformers trained with self-supervised techniques remains unclear. In our study, we demonstrate that BERT-style transformers, trained using Masked Language Modelling on natural language data, also display a simplicity bias. Specifically, they sequentially learn many-body interactions among input tokens, reaching a saturation point in the prediction error for low-degree interactions while continuing to learn high-degree interactions. To conduct this analysis, we develop a procedure to generate "clones" of a given natural language data set, which capture the interactions between tokens up to a specified order. This approach opens up the possibility of studying how interactions of different orders in the data affect learning, in natural language processing and beyond.
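The abstract does not spell out how the clone data sets are constructed. As a minimal illustration of the underlying idea, the sketch below generates synthetic sequences whose token statistics match the original corpus only up to a fixed context order k, using a simple k-th order Markov (n-gram) model; the paper's actual procedure is more sophisticated, and all function names here are hypothetical.

```python
from collections import defaultdict, Counter
import random

def fit_ngram(sequences, k):
    """Count order-k context -> next-token statistics from a corpus.

    Only interactions between a token and its k predecessors are
    captured; higher-order structure in the data is discarded.
    """
    counts = defaultdict(Counter)
    for seq in sequences:
        padded = ["<s>"] * k + list(seq)  # pad so every token has k predecessors
        for i in range(k, len(padded)):
            ctx = tuple(padded[i - k:i])
            counts[ctx][padded[i]] += 1
    return counts

def sample_clone(counts, k, length, rng):
    """Sample a synthetic ("clone") sequence from the fitted statistics."""
    seq = ["<s>"] * k
    for _ in range(length):
        ctx = tuple(seq[-k:])
        tokens, freqs = zip(*counts[ctx].items())
        seq.append(rng.choices(tokens, weights=freqs)[0])
    return seq[k:]  # drop the padding

# Toy corpus: three short token sequences.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
rng = random.Random(0)
model = fit_ngram(corpus, k=2)
clone = sample_clone(model, k=2, length=3, rng=rng)
print(clone)
```

A family of such clones, one per order k, would let one measure a model's prediction error against data containing interactions only up to each order, in the spirit of the analysis described above.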