

Poster in Workshop: Memory in Artificial and Real Intelligence (MemARI)

Training language models for deeper understanding improves brain alignment

Khai Loong Aw · Mariya Toneva

Keywords: [ interpretability ] [ language ] [ NLP ] [ fMRI ] [ neuroscience ]


Abstract:

Building systems that understand information across long contexts is an important goal in natural language processing (NLP). A common approach in recent work is to scale up model architectures to accept very long inputs and then train them on datasets designed to teach the extraction of critical information from long input texts. However, it remains an open question whether these models are simply learning heuristics to solve the tasks or truly learning to understand information across long contexts. This work investigates the question by turning to the one system with truly long-range and deep language understanding: the human brain. We show that training language models for long-range narrative understanding yields richer representations with improved alignment to human brain activity, suggesting that these models do indeed gain understanding across long contexts. However, although the models can take in thousands of input words, their brain alignment peaks after only 500 words, pointing to possible limitations in either model training or architecture. Overall, our findings have consequences both for cognitive neuroscience, by revealing some of the significant factors behind brain-NLP alignment, and for NLP, by highlighting limitations of existing approaches to longer-range understanding.
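The abstract does not specify how "brain alignment" is computed; in this line of work it is commonly estimated with a linear encoding model that predicts fMRI responses from language-model representations and scores the predictions on held-out data. The sketch below illustrates that general recipe only; the function name, the cross-validation setup, and the ridge regularization grid are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold


def brain_alignment(lm_features, fmri_responses, n_folds=5):
    """Hypothetical brain-alignment score via a cross-validated encoding model.

    lm_features:    (n_time_points, n_model_dims) language-model representations
                    of the narrative, aligned to the fMRI acquisition times.
    fmri_responses: (n_time_points, n_voxels) BOLD responses to the same narrative.

    Returns the mean Pearson correlation between predicted and actual held-out
    voxel responses, averaged over folds and voxels.
    """
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_folds).split(lm_features):
        # Ridge regression from model features to all voxels jointly,
        # with the regularization strength chosen by internal cross-validation.
        model = RidgeCV(alphas=np.logspace(-1, 4, 10))
        model.fit(lm_features[train_idx], fmri_responses[train_idx])

        pred = model.predict(lm_features[test_idx])
        true = fmri_responses[test_idx]

        # Per-voxel Pearson correlation, computed as the mean product of z-scores.
        pred_z = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
        true_z = (true - true.mean(0)) / (true.std(0) + 1e-8)
        scores.append((pred_z * true_z).mean(0))

    return float(np.mean(scores))
```

Under this kind of setup, the paper's observation that alignment "peaks after only 500 words" would correspond to comparing such scores across feature sets extracted with different context lengths.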
