Poster

Neural Taskonomy: Inferring the Similarity of Task-Derived Representations from Brain Activity

Aria Wang · Michael Tarr · Leila Wehbe

East Exhibition Hall B + C #154

Keywords: [ Brain Mapping ] [ Visual Perception ] [ Neuroscience and Cognitive Science ]


Abstract:

Convolutional neural networks (CNNs) trained for object classification have been widely used to account for visually driven neural responses in both human and primate brains. However, because of the generality and complexity of object classification, it is difficult to draw specific inferences about neural information processing from CNN-derived representations, despite the effectiveness of CNNs in predicting brain activity. To address this problem, we used learned representations drawn from 21 computer vision tasks to construct encoding models for predicting brain responses from BOLD5000, a large-scale dataset comprising fMRI scans collected while observers viewed over 5000 naturalistic scene and object images. Encoding models based on task features predict activity in different regions across the whole brain. Features from 3D tasks such as keypoint/edge detection explain greater variance than features from 2D tasks, a pattern observed across the whole brain. Using results across all 21 task representations, we constructed a "task graph" based on the spatial layout of the brain areas that each task predicts well. A comparison of this brain-derived task structure to the task structure derived from transfer learning accuracy demonstrates that tasks with higher transferability make similar predictions for brain responses in different regions. These results, arising out of state-of-the-art computer vision methods, help reveal the task-specific architecture of the human visual system.
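At its core, the encoding-model procedure described in the abstract amounts to fitting a regularized linear map from task-derived image features to per-voxel fMRI responses and scoring it by held-out prediction accuracy. The following is a minimal sketch of that pipeline, not the authors' released code: the array shapes, the use of scikit-learn's RidgeCV, and the cross-validation setup are illustrative assumptions.

```python
# A minimal sketch of a voxelwise encoding model, assuming features and
# responses are already extracted; NOT the authors' released code.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def fit_encoding_model(features, bold, n_folds=5):
    """Ridge regression from image features to per-voxel BOLD responses.

    features : (n_images, n_features) task-derived representations
    bold     : (n_images, n_voxels) image-evoked fMRI responses
    Returns  : (n_voxels,) cross-validated R^2 per voxel
    """
    r2 = np.zeros(bold.shape[1])
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    for train, test in kf.split(features):
        # Regularization strengths searched by RidgeCV (assumed range)
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        model.fit(features[train], bold[train])
        pred = model.predict(features[test])
        # Proportion of held-out variance explained, per voxel
        ss_res = ((bold[test] - pred) ** 2).sum(axis=0)
        ss_tot = ((bold[test] - bold[test].mean(axis=0)) ** 2).sum(axis=0)
        r2 += 1.0 - ss_res / np.maximum(ss_tot, 1e-12)
    return r2 / n_folds
```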
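The "task graph" comparison can be sketched in the same spirit: represent each task by its whole-brain map of per-voxel prediction accuracy, compute pairwise similarities between those maps, and correlate the resulting brain-derived structure with a transfer-learning affinity matrix such as the one from Taskonomy. The specific similarity measures below (Pearson between maps, Spearman between matrices) are assumptions for illustration, not necessarily the paper's exact choices.

```python
# A sketch of the "task graph" comparison; similarity measures are assumptions.
import numpy as np
from scipy.stats import spearmanr

def brain_task_similarity(r2_maps):
    """r2_maps: (n_tasks, n_voxels) per-task prediction-accuracy maps.
    Returns an (n_tasks, n_tasks) matrix of Pearson correlations between maps."""
    return np.corrcoef(r2_maps)

def compare_task_structures(brain_sim, transfer_affinity):
    """Spearman correlation between the brain-derived task structure and a
    transfer-learning affinity matrix, over off-diagonal entries only."""
    iu = np.triu_indices_from(brain_sim, k=1)
    return spearmanr(brain_sim[iu], transfer_affinity[iu])
```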
