Primates display remarkable prowess in making rapid visual inferences even when sensory inputs are impoverished. One hypothesis about how they accomplish this is through a process called visual simulation, in which they imagine future states of their environment using a constructed mental model. Although a growing body of behavioral findings, in both humans and non-human primates, lends credence to this hypothesis, the computational mechanisms underlying this ability remain poorly understood. In this study, we probe the ability of feedforward and recurrent neural network models to solve the Planko task, parameterized to systematically control task variability. We demonstrate that visual simulation emerges as the optimal computational strategy in deep neural networks only when task variability is high. Moreover, we provide some of the first evidence that information about imagined future states can be decoded from the models' latent representations, despite no explicit supervision. Taken together, our work suggests that the optimality of visual simulation is task-specific and provides a framework for testing its mechanistic basis.
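To illustrate the kind of decoding analysis the abstract describes, here is a minimal sketch of fitting a linear probe to a model's frozen latent representations to read out a future-state variable. All names and the synthetic stand-in data below are hypothetical placeholders for illustration only, not the authors' actual code, models, or results.

```python
# Hypothetical sketch: test whether future-state information is linearly
# decodable from a trained model's latent representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_probe(latents, future_state_labels):
    """Fit a linear decoder on frozen latent features; return held-out accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        latents, future_state_labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Synthetic stand-in data: 1000 trials, 256-d latent vectors, and a binary
# label standing in for a future state (e.g., which basket the ball lands in).
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 256))
labels = rng.integers(0, 2, size=1000)
print(f"Held-out decoding accuracy: {linear_probe(latents, labels):.2f}")
```

With real model activations in place of the random features, above-chance held-out accuracy would indicate that the latent representations carry information about the imagined future state even without explicit supervision.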
Author Information
Alekh Karkada Ashok (Brown University)
Lakshmi Narasimhan Govindarajan (Brown University)
Drew Linsley (Brown University)
We need artificial vision to create intelligent machines that can reason about the world, but existing artificial vision systems cannot solve many of the visual challenges that we encounter and routinely solve in our daily lives. I look to biological vision to inspire new solutions to challenges faced by artificial vision. I do this by testing complementary hypotheses that connect computational theory with systems- and cognitive-neuroscience-level experimental research:
1. Computational challenges for artificial vision can be identified through systematic comparisons with biological vision, and solved with algorithms inspired by those of biological vision.
2. Improved algorithms for artificial vision will lead to better methods for gleaning insight from large-scale experimental data, and better models for understanding the relationship between neural computation and perception.
David Sheinberg (Brown University)
Thomas Serre (Brown University)
More from the Same Authors
- 2023 Poster: Break It Down: Evidence for Structural Compositionality in Neural Networks »
  Michael Lepori · Thomas Serre · Ellie Pavlick
- 2023 Poster: Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex »
  Drew Linsley · Ivan F Rodriguez Rodriguez · Thomas FEL · Michael Arcaro · Saloni Sharma · Margaret Livingstone · Thomas Serre
- 2023 Poster: A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation »
  Thomas FEL · Victor Boutin · Louis Béthune · Remi Cadene · Mazda Moayeri · Léo Andéol · Mathieu Chalvidal · Thomas Serre
- 2023 Poster: Unlocking Feature Visualization for Deep Network with MAgnitude Constrained Optimization »
  Thomas FEL · Thibaut Boissin · Victor Boutin · Agustin PICARD · Paul Novello · Julien Colin · Drew Linsley · Tom ROUSSEAU · Remi Cadene · Laurent Gardes · Thomas Serre
- 2023 Poster: Learning Functional Transduction »
  Mathieu Chalvidal · Thomas Serre · Rufin VanRullen
- 2023 Poster: Computing a human-like reaction time metric from stable recurrent vision models »
  Lore Goetschalckx · Lakshmi Narasimhan Govindarajan · Alekh Karkada Ashok · Thomas Serre
- 2022 Poster: Meta-Reinforcement Learning with Self-Modifying Networks »
  Mathieu Chalvidal · Thomas Serre · Rufin VanRullen
- 2022 Poster: A Benchmark for Compositional Visual Reasoning »
  Aimen Zerroug · Mohit Vaishnav · Julien Colin · Sebastian Musslick · Thomas Serre
- 2022 Poster: Diversity vs. Recognizability: Human-like generalization in one-shot generative models »
  Victor Boutin · Lakshya Singhal · Xavier Thomas · Thomas Serre
- 2022 Poster: Harmonizing the object recognition strategies of deep neural networks with humans »
  Thomas FEL · Ivan F Rodriguez Rodriguez · Drew Linsley · Thomas Serre
- 2022 Poster: What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods »
  Julien Colin · Thomas FEL · Remi Cadene · Thomas Serre
- 2021 Poster: Tracking Without Re-recognition in Humans and Machines »
  Drew Linsley · Girik Malik · Junkyung Kim · Lakshmi Narasimhan Govindarajan · Ennio Mingolla · Thomas Serre
- 2021 Poster: Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis »
  Thomas FEL · Remi Cadene · Mathieu Chalvidal · Matthieu Cord · David Vigouroux · Thomas Serre
- 2020 Poster: Stable and expressive recurrent vision models »
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Spotlight: Stable and expressive recurrent vision models »
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Session: Orals & Spotlights Track 29: Neuroscience »
  Aasa Feragen · Thomas Serre
- 2018 Poster: Learning long-range spatial dependencies with horizontal gated recurrent units »
  Drew Linsley · Junkyung Kim · Vijay Veerabadran · Charles Windolf · Thomas Serre
- 2016 Poster: How Deep is the Feature Analysis underlying Rapid Visual Categorization? »
  Sven Eberhardt · Jonah G Cader · Thomas Serre
- 2013 Poster: Neural representation of action sequences: how far can a simple snippet-matching model take us? »
  Cheston Tan · Jedediah M Singer · Thomas Serre · David Sheinberg · Tomaso Poggio