Humans learn from visual inputs at multiple timescales, both rapidly and flexibly acquiring visual knowledge over short periods, and robustly accumulating online learning progress over longer periods. Modeling these powerful learning capabilities is an important problem for computational visual cognitive science, and models that could replicate them would be of substantial utility in real-world computer vision settings. In this work, we establish benchmarks for both real-time and life-long continual visual learning. Our real-time learning benchmark measures a model's ability to match the rapid visual behavior changes of real humans over the course of minutes and hours, given a stream of visual inputs. Our life-long learning benchmark evaluates the performance of models in a purely online learning curriculum obtained directly from child visual experience over the course of years of development. We evaluate a spectrum of recent deep self-supervised visual learning algorithms on both benchmarks, finding that none of them perfectly match human performance, though some algorithms perform substantially better than others. Interestingly, algorithms embodying recent trends in self-supervised learning -- including BYOL, SwAV and MAE -- are substantially worse on our benchmarks than an earlier generation of self-supervised algorithms such as SimCLR and MoCo-v2. We present analysis indicating that the failure of these newer algorithms is primarily due to their inability to handle the kind of sparse low-diversity datastreams that naturally arise in the real world, and that actively leveraging memory through negative sampling -- a mechanism eschewed by these newer algorithms -- appears useful for facilitating learning in such low-diversity environments. 
We also illustrate a complementarity between the short and long timescales in the two benchmarks, showing how requiring a single learning algorithm to be locally context-sensitive enough to match real-time learning changes while stable enough to avoid catastrophic forgetting over the long term induces a trade-off that human-like algorithms may have to straddle. Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanism by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning abilities, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.
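The "actively leveraging memory through negative sampling" mechanism the abstract credits (as in MoCo-v2) can be illustrated with an InfoNCE loss computed against a queue of previously stored embeddings. The following is a minimal NumPy sketch under stated assumptions, not the benchmark code: the function name, queue size, and temperature are illustrative, and a real implementation would operate on encoder outputs with a momentum-updated key network.

```python
import numpy as np

def info_nce_with_queue(q, k_pos, queue, temperature=0.07):
    """InfoNCE loss: the query embedding q should match its positive key
    k_pos and mismatch every embedding in the memory queue (negatives)."""
    # L2-normalize so dot products are cosine similarities
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    queue = queue / np.linalg.norm(queue, axis=1, keepdims=True)

    l_pos = q @ k_pos        # similarity to the positive key
    l_neg = queue @ q        # similarities to the queued negatives
    logits = np.concatenate(([l_pos], l_neg)) / temperature
    # cross-entropy with the positive at index 0 (log-sum-exp form)
    logits -= logits.max()   # numerical stability
    return -logits[0] + np.log(np.exp(logits).sum())

# Illustrative usage with random embeddings standing in for encoder outputs
rng = np.random.default_rng(0)
dim, queue_size = 128, 1024
queue = rng.standard_normal((queue_size, dim))   # memory of past embeddings
q = rng.standard_normal(dim)
loss_random = info_nce_with_queue(q, rng.standard_normal(dim), queue)
loss_aligned = info_nce_with_queue(q, q.copy(), queue)  # perfectly aligned positive
```

A well-aligned positive yields a lower loss than a random one, which is the gradient signal such memory-based contrastive methods exploit; BYOL, SwAV, and MAE omit this explicit comparison against stored negatives.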
Author Information
Chengxu Zhuang (MIT)
Ziyu Xiang (Stanford University)
Yoon Bai (Massachusetts Institute of Technology)
Xiaoxuan Jia (Tsinghua University)
Nicholas Turk-Browne (Yale University)
Kenneth Norman (Princeton University)
James J DiCarlo (Massachusetts Institute of Technology)
Prof. DiCarlo received his Ph.D. in biomedical engineering and his M.D. from Johns Hopkins in 1998, and did his postdoctoral training in primate visual neurophysiology at Baylor College of Medicine. He joined the MIT faculty in 2002. He is a Sloan Fellow, a Pew Scholar, and a McKnight Scholar. His lab’s research goal is a computational understanding of the brain mechanisms that underlie object recognition. They use large-scale neurophysiology, brain imaging, optogenetic methods, and high-throughput computational simulations to understand how the primate ventral visual stream is able to untangle object identity from other latent image variables such as object position, scale, and pose. They have shown that populations of neurons at the highest cortical visual processing stage (IT) rapidly convey explicit representations of object identity, and that this ability is reshaped by natural visual experience. They have also shown how visual recognition tests can be used to discover new, high-performing bio-inspired algorithms. This understanding may inspire new machine vision systems, new neural prosthetics, and a foundation for understanding how high-level visual representation is altered in conditions such as agnosia, autism and dyslexia.
Dan Yamins (Stanford University)
More from the Same Authors
2021 : ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation »
Chuang Gan · Jeremy Schwartz · Seth Alter · Damian Mrowca · Martin Schrimpf · James Traer · Julian De Freitas · Jonas Kubilius · Abhishek Bhandwaldar · Nick Haber · Megumi Sano · Kuno Kim · Elias Wang · Michael Lingelbach · Aidan Curtis · Kevin Feigelis · Daniel Bear · Dan Gutfreund · David Cox · Antonio Torralba · James J DiCarlo · Josh Tenenbaum · Josh McDermott · Dan Yamins -
2021 : Physion: Evaluating Physical Prediction from Vision in Humans and Machines »
Daniel Bear · Elias Wang · Damian Mrowca · Felix Binder · Hsiao-Yu Tung · Pramod RT · Cameron Holdaway · Sirui Tao · Kevin Smith · Fan-Yun Sun · Fei-Fei Li · Nancy Kanwisher · Josh Tenenbaum · Dan Yamins · Judith Fan -
2021 Spotlight: Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks »
Aran Nayebi · Alexander Attinger · Malcolm Campbell · Kiah Hardcastle · Isabel Low · Caitlin S Mallory · Gabriel Mel · Ben Sorscher · Alex H Williams · Surya Ganguli · Lisa Giocomo · Dan Yamins -
2022 : Measuring the Alignment of ANNs and Primate V1 on Luminance and Contrast Response Characteristics »
Stephanie Olaiya · Tiago Marques · James J DiCarlo -
2022 : Implementing Divisive Normalization in CNNs Improves Robustness to Common Image Corruptions »
Andrew Cirincione · Reginald Verrier · Artiom Bic · Stephanie Olaiya · James J DiCarlo · Lawrence Udeigwe · Tiago Marques -
2022 : Primate Inferotemporal Cortex Neurons Generalize Better to Novel Image Distributions Than Analogous Deep Neural Networks Units »
Marliawaty I Gusti Bagus · Tiago Marques · Sachi Sanghavi · James J DiCarlo · Martin Schrimpf -
2022 : Topographic DCNNs trained on a single self-supervised task capture the functional organization of cortex into visual processing streams »
Dawn Finzi · Eshed Margalit · Kendrick Kay · Dan Yamins · Kalanit Grill-Spector -
2023 Poster: Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors »
Paul Scotti · Atmadeep Banerjee · Jimmie Goode · Stepan Shabalin · Alex Nguyen · ethan cohen · Aidan Dempster · Nathalie Verlinde · Elad Yundler · David Weisberg · Tanishq Abraham · Kenneth Norman -
2023 Poster: 3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive Physics under Challenging Scenes »
Haotian Xue · Antonio Torralba · Josh Tenenbaum · Dan Yamins · Yunzhu Li · Hsiao-Yu Tung -
2023 Poster: Robustified ANNs Reveal Wormholes Between Human Category Percepts »
Guy Gaziv · Michael Lee · James J DiCarlo -
2023 Poster: Platonic Distance: Intrinsic Object-Centric Image Similarity »
Klemen Kotar · Stephen Tian · Hong-Xing Yu · Dan Yamins · Jiajun Wu -
2023 Poster: Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties »
Hsiao-Yu Tung · Mingyu Ding · Zhenfang Chen · Daniel Bear · Chuang Gan · Josh Tenenbaum · Dan Yamins · Judith Fan · Kevin Smith -
2022 : A report on recent experimental tests of two predictions of contemporary computable models of the biological deep neural network underlying primate visual intelligence »
James J DiCarlo -
2022 : Panel Discussion: Opportunities and Challenges »
Kenneth Norman · Janice Chen · Samuel J Gershman · Albert Gu · Sepp Hochreiter · Ida Momennejad · Hava Siegelmann · Sainbayar Sukhbaatar -
2022 Workshop: Memory in Artificial and Real Intelligence (MemARI) »
Mariya Toneva · Javier Turek · Vy Vo · Shailee Jain · Kenneth Norman · Alexander Huth · Uri Hasson · Mihai Capotă -
2021 : Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs »
Avinash Baidya · Joel Dapello · James J DiCarlo · Tiago Marques -
2021 Poster: Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks »
Aran Nayebi · Alexander Attinger · Malcolm Campbell · Kiah Hardcastle · Isabel Low · Caitlin S Mallory · Gabriel Mel · Ben Sorscher · Alex H Williams · Surya Ganguli · Lisa Giocomo · Dan Yamins -
2021 Poster: Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception »
Joel Dapello · Jenelle Feather · Hang Le · Tiago Marques · David Cox · Josh McDermott · James J DiCarlo · Sueyeon Chung -
2020 Poster: Uncovering the Topology of Time-Varying fMRI Data using Cubical Persistence »
Bastian Rieck · Tristan Yates · Christian Bock · Karsten Borgwardt · Guy Wolf · Nicholas Turk-Browne · Smita Krishnaswamy -
2020 Spotlight: Uncovering the Topology of Time-Varying fMRI Data using Cubical Persistence »
Bastian Rieck · Tristan Yates · Christian Bock · Karsten Borgwardt · Guy Wolf · Nicholas Turk-Browne · Smita Krishnaswamy -
2020 Poster: Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations »
Joel Dapello · Tiago Marques · Martin Schrimpf · Franziska Geiger · David Cox · James J DiCarlo -
2020 Spotlight: Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations »
Joel Dapello · Tiago Marques · Martin Schrimpf · Franziska Geiger · David Cox · James J DiCarlo -
2019 Poster: Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs »
Jonas Kubilius · Martin Schrimpf · Kohitij Kar · Rishi Rajalingham · Ha Hong · Najib Majaj · Elias Issa · Pouya Bashivan · Jonathan Prescott-Roy · Kailyn Schmidt · Aran Nayebi · Daniel Bear · Daniel Yamins · James J DiCarlo -
2019 Oral: Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs »
Jonas Kubilius · Martin Schrimpf · Ha Hong · Najib Majaj · Rishi Rajalingham · Elias Issa · Kohitij Kar · Pouya Bashivan · Jonathan Prescott-Roy · Kailyn Schmidt · Aran Nayebi · Daniel Bear · Daniel Yamins · James J DiCarlo -
2018 Poster: Task-Driven Convolutional Recurrent Models of the Visual System »
Aran Nayebi · Daniel Bear · Jonas Kubilius · Kohitij Kar · Surya Ganguli · David Sussillo · James J DiCarlo · Daniel Yamins -
2018 Poster: Flexible neural representation for physics prediction »
Damian Mrowca · Chengxu Zhuang · Elias Wang · Nick Haber · Li Fei-Fei · Josh Tenenbaum · Daniel Yamins -
2017 : Panel on "What neural systems can teach us about building better machine learning systems" »
Timothy Lillicrap · James J DiCarlo · Christopher Rozell · Viren Jain · Nathan Kutz · William Gray Roncal · Bingni Brunton -
2017 : Can brain data be used to reverse engineer the algorithms of human perception? »
James J DiCarlo -
2017 Oral: Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System »
Chengxu Zhuang · Jonas Kubilius · Mitra JZ Hartmann · Daniel Yamins -
2017 Poster: Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System »
Chengxu Zhuang · Jonas Kubilius · Mitra JZ Hartmann · Daniel Yamins -
2013 Poster: Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream »
Daniel L Yamins · Ha Hong · Charles Cadieu · James J DiCarlo -
2013 Tutorial: Mechanisms Underlying Visual Object Recognition: Humans vs. Neurons vs. Machines »
James J DiCarlo -
2009 Poster: A Bayesian Analysis of Dynamics in Free Recall »
Richard Socher · Samuel J Gershman · Adler Perotte · Per Sederberg · David Blei · Kenneth Norman