Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM)

The emergence of visual simulation in task-optimized recurrent neural networks

Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Drew Linsley · David Sheinberg · Thomas Serre


Primates display remarkable prowess at making rapid visual inferences even when sensory inputs are impoverished. One hypothesis is that they accomplish this through a process called visual simulation, in which they imagine future states of their environment using a constructed mental model. Although a growing body of behavioral findings in both humans and non-human primates lends credence to this hypothesis, the computational mechanisms underlying this ability remain poorly understood. In this study, we probe the capability of feedforward and recurrent neural network models to solve the Planko task, parameterized to systematically control task variability. We demonstrate that visual simulation emerges as the optimal computational strategy in deep neural networks only when task variability is high. Moreover, we provide some of the first evidence that information about imagined future states can be decoded from the models' latent representations, despite no explicit supervision. Taken together, our work suggests that the optimality of visual simulation is task-specific and provides a framework for testing its mechanistic basis.
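To make the decoding claim concrete: one common way to test whether latent representations carry information about a variable the network was never supervised on is to fit a simple read-out (e.g., a logistic-regression decoder) on the hidden states and compare its accuracy to chance. The sketch below uses synthetic stand-in data, not the authors' models or task; the latent matrix, label definition, and injected signal are all hypothetical and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for recurrent-model hidden states: trials x hidden units.
n_trials, n_hidden = 500, 64
# Hypothetical "imagined future state" label (e.g., 3 possible outcomes).
labels = rng.integers(0, 3, size=n_trials)

# Synthetic latents: mostly noise, with a weak label-dependent signal injected
# into one dimension so that the decoder has something to find.
latents = rng.normal(size=(n_trials, n_hidden))
latents[:, 0] += labels

# Fit a linear decoder on held-out splits and measure accuracy vs. chance (1/3).
X_train, X_test, y_train, y_test = train_test_split(
    latents, labels, test_size=0.25, random_state=0
)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = decoder.score(X_test, y_test)
print(f"decoding accuracy: {accuracy:.2f} (chance ~ 0.33)")
```

Above-chance held-out accuracy is the usual criterion for saying the variable is "decodable" from the representation; with real models, the latents would instead be hidden-state snapshots recorded while the network performs the task.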