

Incorporating Context into Language Encoding Models for fMRI

Shailee Jain · Alexander Huth

Room 210 #99

Keywords: [ Natural Language Processing ] [ Neuroscience ] [ Brain Imaging ] [ Brain Mapping ] [ Language for Cognitive Science ]


Language encoding models help explain language processing in the human brain by learning functions that predict brain responses from the language stimuli that elicited them. Current word-embedding-based approaches treat each stimulus word independently and thus ignore the influence of context on language understanding. In this work, we instead build encoding models using rich contextual representations derived from an LSTM language model. Our models show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area. By varying the amount of context used in the models and providing the models with distorted context, we show that this improvement is due to a combination of better word embeddings learned by the LSTM language model and contextual information. We are also able to use our models to map context sensitivity across the cortex. These results suggest that LSTM language models learn high-level representations that are related to representations in the human brain.
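The encoding-model idea described above — learn a function mapping stimulus features to brain responses, then score it by prediction accuracy — can be sketched as a regularized linear regression from contextual features to per-voxel fMRI responses. The sketch below is a minimal illustration with synthetic data: the shapes, the ridge penalty, and the random stand-in for LSTM hidden states are all assumptions, not the authors' actual pipeline (which would extract features from a language model run over the stimulus text and align them to the fMRI acquisition times).

```python
import numpy as np

# Hypothetical dimensions: T fMRI timepoints, D feature dims, V voxels.
rng = np.random.default_rng(0)
T, D, V = 300, 64, 10

# Stand-in for contextual LSTM features aligned to each fMRI timepoint;
# in practice these would come from a language model run over the stimulus.
X = rng.standard_normal((T, D))
true_W = rng.standard_normal((D, V))       # synthetic "ground-truth" mapping
Y = X @ true_W + 0.1 * rng.standard_normal((T, V))  # responses + noise

# Ridge-regression encoding model: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(D), X.T @ Y)

# Encoding performance: per-voxel correlation of predicted vs. actual response.
pred = X @ W
r = np.array([np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(V)])
print(W.shape, round(float(r.mean()), 3))
```

In a real experiment the model would be fit on held-out training stories and evaluated on unseen test data per voxel, so that the correlation map reflects generalization rather than fit; comparing such maps across feature spaces (e.g., context-free embeddings vs. LSTM states) is what lets one localize context sensitivity across cortex.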
