Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data
Armin Thomas · Christopher Ré · Russell Poldrack

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #215

Self-supervised learning techniques have achieved immense success in natural language processing (NLP) by enabling models to learn from broad language data at unprecedented scales. Here, we aim to leverage the success of these techniques for mental state decoding, where researchers aim to identify specific mental states (e.g., the experience of anger or joy) from brain activity. To this end, we devise a set of novel self-supervised learning frameworks for neuroimaging data inspired by prominent learning frameworks in NLP. At their core, these frameworks learn the dynamics of brain activity by modeling sequences of activity akin to how sequences of text are modeled in NLP. We evaluate the frameworks by pre-training models on a broad neuroimaging dataset spanning functional Magnetic Resonance Imaging data from 11,980 experimental runs of 1,726 individuals across 34 datasets, and subsequently adapting the pre-trained models to benchmark mental state decoding datasets. The pre-trained models transfer well, generally outperforming baseline models trained from scratch, while models trained in a learning framework based on causal language modeling clearly outperform the others.
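The causal modeling objective described above can be made concrete with a small sketch: treat the vector of brain-parcel activities at each time point as a "token" and train a model to predict the next time step from past context only. The paper uses deep sequence models pre-trained on large fMRI corpora; the linear autoregressive map, parcel count, and synthetic data below are illustrative assumptions only, chosen to keep the example self-contained.

```python
import numpy as np

# Hypothetical sketch of a causal (next-step) prediction objective over
# fMRI activity sequences. Each "token" is a vector of parcel activities
# at one time point; the model predicts step t+1 from step t, never
# looking at future time points (analogous to causal language modeling).

rng = np.random.default_rng(0)

n_parcels = 8   # parcels per time point (assumed, for illustration)
n_steps = 200   # time points in one synthetic "run"

# Synthetic activity with simple temporal structure: X[t] ≈ 0.9 * X[t-1].
true_W = 0.9 * np.eye(n_parcels)
X = np.zeros((n_steps, n_parcels))
X[0] = rng.standard_normal(n_parcels)
for t in range(1, n_steps):
    X[t] = X[t - 1] @ true_W + 0.1 * rng.standard_normal(n_parcels)

# Causal objective: fit W so that X[t] @ W ≈ X[t + 1]. A closed-form
# least-squares fit stands in for the gradient-based training of a
# deep sequence model.
inputs, targets = X[:-1], X[1:]
W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)

mse = float(np.mean((inputs @ W - targets) ** 2))
print(f"next-step prediction MSE: {mse:.4f}")
```

In the paper's setting, the same next-step objective is applied during pre-training across many runs and datasets, after which the pre-trained model is adapted to downstream mental state decoding.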

Author Information

Armin Thomas (Stanford University)

I am a Ram and Vijay Shriram Data Science Fellow at Stanford University, where I work with machine learning, data science, and psychology researchers on state-of-the-art AI tools that can help understand human behaviour and brain activity, with a focus on high-dimensional time series.

Christopher Ré (Stanford)
Russell Poldrack (Stanford University)