Imagine trying to track one particular fruitfly in a swarm of hundreds. Higher biological visual systems have evolved to track moving objects by relying on both their appearance and their motion trajectories. We investigate if state-of-the-art spatiotemporal deep neural networks are capable of the same. For this, we introduce PathTracker, a synthetic visual challenge that asks human observers and machines to track a target object in the midst of identical-looking "distractor" objects. While humans effortlessly learn PathTracker and generalize to systematic variations in task design, deep networks struggle. To address this limitation, we identify and model circuit mechanisms in biological brains that are implicated in tracking objects based on motion cues. When instantiated as a recurrent network, our circuit model learns to solve PathTracker with a robust visual strategy that rivals human performance and explains a significant proportion of their decision-making on the challenge. We also show that the success of this circuit model extends to object tracking in natural videos. Adding it to a transformer-based architecture for object tracking builds tolerance to visual nuisances that affect object appearance, establishing the new state of the art on the large-scale TrackingNet challenge. Our work highlights the importance of understanding human vision to improve computer vision.
Author Information
Drew Linsley (Brown University)
We need artificial vision to create intelligent machines that can reason about the world, but existing artificial vision systems cannot solve many of the visual challenges that we encounter and routinely solve in our daily lives. I look to biological vision to inspire new solutions to challenges faced by artificial vision. I do this by testing complementary hypotheses that connect computational theory with systems- and cognitive-neuroscience-level experimental research:
- Computational challenges for artificial vision can be identified through systematic comparisons with biological vision, and solved with algorithms inspired by those of biological vision.
- Improved algorithms for artificial vision will lead to better methods for gleaning insight from large-scale experimental data, and better models for understanding the relationship between neural computation and perception.
Girik Malik (Northeastern University)
Junkyung Kim (DeepMind)
Lakshmi Narasimhan Govindarajan (Brown University)
Ennio Mingolla (Northeastern University)
Thomas Serre (Brown University)
More from the Same Authors
- 2022 : Transformers generalize differently from information stored in context vs in weights »
  Stephanie Chan · Ishita Dasgupta · Junkyung Kim · Dharshan Kumaran · Andrew Lampinen · Felix Hill
- 2022 : The emergence of visual simulation in task-optimized recurrent neural networks »
  Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Drew Linsley · David Sheinberg · Thomas Serre
- 2022 Poster: Meta-Reinforcement Learning with Self-Modifying Networks »
  Mathieu Chalvidal · Thomas Serre · Rufin VanRullen
- 2022 Poster: Explainability Via Causal Self-Talk »
  Nicholas Roy · Junkyung Kim · Neil Rabinowitz
- 2022 Poster: A Benchmark for Compositional Visual Reasoning »
  Aimen Zerroug · Mohit Vaishnav · Julien Colin · Sebastian Musslick · Thomas Serre
- 2022 Poster: Diversity vs. Recognizability: Human-like generalization in one-shot generative models »
  Victor Boutin · Lakshya Singhal · Xavier Thomas · Thomas Serre
- 2022 Poster: Harmonizing the object recognition strategies of deep neural networks with humans »
  Thomas FEL · Ivan F Rodriguez Rodriguez · Drew Linsley · Thomas Serre
- 2022 Poster: What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods »
  Julien Colin · Thomas FEL · Remi Cadene · Thomas Serre
- 2021 Poster: Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis »
  Thomas FEL · Remi Cadene · Mathieu Chalvidal · Matthieu Cord · David Vigouroux · Thomas Serre
- 2020 Poster: Stable and expressive recurrent vision models »
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Spotlight: Stable and expressive recurrent vision models »
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Session: Orals & Spotlights Track 29: Neuroscience »
  Aasa Feragen · Thomas Serre
- 2018 Poster: Learning long-range spatial dependencies with horizontal gated recurrent units »
  Drew Linsley · Junkyung Kim · Vijay Veerabadran · Charles Windolf · Thomas Serre
- 2016 Poster: How Deep is the Feature Analysis underlying Rapid Visual Categorization? »
  Sven Eberhardt · Jonah G Cader · Thomas Serre
- 2013 Poster: Neural representation of action sequences: how far can a simple snippet-matching model take us? »
  Cheston Tan · Jedediah M Singer · Thomas Serre · David Sheinberg · Tomaso Poggio