A fundamental component of human vision is our ability to parse complex visual scenes and judge the relations between their constituent objects. AI benchmarks for visual reasoning have driven rapid progress in recent years, with state-of-the-art systems now reaching human accuracy on some of these benchmarks. Yet there remains a major gap between humans and AI systems in the sample efficiency with which they learn new visual reasoning tasks. Humans' remarkable learning efficiency has been at least partially attributed to their ability to harness compositionality -- allowing them to efficiently reuse previously gained knowledge when learning new tasks. Here, we introduce a novel visual reasoning benchmark, Compositional Visual Relations (CVR), to drive progress towards the development of more data-efficient learning algorithms. We take inspiration from fluid intelligence and non-verbal reasoning tests and describe a novel method for creating compositions of abstract rules and generating image datasets corresponding to these rules at scale. Our proposed benchmark includes measures of sample efficiency, generalization, compositionality, and transfer across task rules. We systematically evaluate modern neural architectures and find that convolutional architectures surpass transformer-based architectures across all performance measures in most data regimes. However, all computational models are much less data efficient than humans, even after learning informative visual representations using self-supervision. Overall, we hope our challenge will spur interest in developing neural architectures that can learn to harness compositionality for more efficient learning.
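The sample-efficiency measure described above amounts to training a learner at several training-set sizes and recording test accuracy at each size, yielding a learning curve. A minimal sketch of that protocol follows; the function names and the toy accuracy model are hypothetical illustrations, not the paper's actual code.

```python
import math

def sample_efficiency_curve(train_eval, sizes):
    """Evaluate a learner at each training-set size; return one accuracy per size."""
    return [train_eval(n) for n in sizes]

def toy_train_eval(n_samples):
    # Hypothetical stand-in for "train a model on n_samples examples and
    # report test accuracy": here accuracy grows with log(dataset size).
    return min(1.0, 0.5 + 0.1 * math.log10(n_samples))

sizes = [50, 100, 500, 1000, 10000]
curve = sample_efficiency_curve(toy_train_eval, sizes)
# A single-number summary across sizes: a more sample-efficient learner
# reaches high accuracy at smaller sizes, raising this mean.
mean_accuracy = sum(curve) / len(curve)
```

Comparing such curves (or their means) between models, or between models and human participants, is one way to quantify the human-machine efficiency gap the abstract describes.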
Author Information
Aimen Zerroug (ANITI - Brown University)
Mohit Vaishnav (ANITI)
Julien Colin (Brown University, ELLIS Alicante)
Sebastian Musslick
Thomas Serre (Brown University)
More from the Same Authors
- 2022: The emergence of visual simulation in task-optimized recurrent neural networks
  Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Drew Linsley · David Sheinberg · Thomas Serre
- 2023 Poster: Break It Down: Evidence for Structural Compositionality in Neural Networks
  Michael Lepori · Thomas Serre · Ellie Pavlick
- 2023 Poster: Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex
  Drew Linsley · Ivan F Rodriguez Rodriguez · Thomas FEL · Michael Arcaro · Saloni Sharma · Margaret Livingstone · Thomas Serre
- 2023 Poster: A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
  Thomas FEL · Victor Boutin · Louis Béthune · Remi Cadene · Mazda Moayeri · Léo Andéol · Mathieu Chalvidal · Thomas Serre
- 2023 Poster: Unlocking Feature Visualization for Deep Network with MAgnitude Constrained Optimization
  Thomas FEL · Thibaut Boissin · Victor Boutin · Agustin PICARD · Paul Novello · Julien Colin · Drew Linsley · Tom ROUSSEAU · Remi Cadene · Laurent Gardes · Thomas Serre
- 2023 Poster: Learning Functional Transduction
  Mathieu Chalvidal · Thomas Serre · Rufin VanRullen
- 2023 Poster: Computing a human-like reaction time metric from stable recurrent vision models
  Lore Goetschalckx · Lakshmi Narasimhan Govindarajan · Alekh Karkada Ashok · Thomas Serre
- 2022 Poster: Meta-Reinforcement Learning with Self-Modifying Networks
  Mathieu Chalvidal · Thomas Serre · Rufin VanRullen
- 2022 Poster: Diversity vs. Recognizability: Human-like generalization in one-shot generative models
  Victor Boutin · Lakshya Singhal · Xavier Thomas · Thomas Serre
- 2022 Poster: Harmonizing the object recognition strategies of deep neural networks with humans
  Thomas FEL · Ivan F Rodriguez Rodriguez · Drew Linsley · Thomas Serre
- 2022 Poster: What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
  Julien Colin · Thomas FEL · Remi Cadene · Thomas Serre
- 2021 Poster: Tracking Without Re-recognition in Humans and Machines
  Drew Linsley · Girik Malik · Junkyung Kim · Lakshmi Narasimhan Govindarajan · Ennio Mingolla · Thomas Serre
- 2021 Poster: Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
  Thomas FEL · Remi Cadene · Mathieu Chalvidal · Matthieu Cord · David Vigouroux · Thomas Serre
- 2020 Poster: Stable and expressive recurrent vision models
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Spotlight: Stable and expressive recurrent vision models
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Session: Orals & Spotlights Track 29: Neuroscience
  Aasa Feragen · Thomas Serre
- 2018 Poster: Learning long-range spatial dependencies with horizontal gated recurrent units
  Drew Linsley · Junkyung Kim · Vijay Veerabadran · Charles Windolf · Thomas Serre
- 2017 Poster: A graph-theoretic approach to multitasking
  Noga Alon · Daniel Reichman · Igor Shinkar · Tal Wagner · Sebastian Musslick · Jonathan D Cohen · Tom Griffiths · Biswadip dey · Kayhan Ozcimder
- 2017 Oral: A graph-theoretic approach to multitasking
  Noga Alon · Daniel Reichman · Igor Shinkar · Tal Wagner · Sebastian Musslick · Jonathan D Cohen · Tom Griffiths · Biswadip dey · Kayhan Ozcimder
- 2016 Poster: How Deep is the Feature Analysis underlying Rapid Visual Categorization?
  Sven Eberhardt · Jonah G Cader · Thomas Serre
- 2013 Poster: Neural representation of action sequences: how far can a simple snippet-matching model take us?
  Cheston Tan · Jedediah M Singer · Thomas Serre · David Sheinberg · Tomaso Poggio