Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system. However, these CNNs can be fooled by imperceptibly small, explicitly crafted perturbations, and struggle to recognize objects in corrupted images that are easily recognized by humans. Here, by making comparisons with primate neural data, we first observed that CNN models with a neural hidden layer that better matches primate primary visual cortex (V1) are also more robust to adversarial attacks. Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models. Each VOneNet contains a fixed weight neural network front-end that simulates primate V1, called the VOneBlock, followed by a neural network back-end adapted from current CNN vision models. The VOneBlock is based on a classical neuroscientific model of V1: the linear-nonlinear-Poisson model, consisting of a biologically-constrained Gabor filter bank, simple and complex cell nonlinearities, and a V1 neuronal stochasticity generator. After training, VOneNets retain high ImageNet performance, but each is substantially more robust, outperforming the base CNNs and state-of-the-art methods by 18% and 3%, respectively, on a conglomerate benchmark of perturbations comprised of white box adversarial attacks and common image corruptions. Finally, we show that all components of the VOneBlock work in synergy to improve robustness. While current CNN architectures are arguably brain-inspired, the results presented here demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in ImageNet-level computer vision applications.
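For orientation, the block below sketches in PyTorch the pipeline the abstract describes: a fixed-weight VOneBlock (Gabor filter bank, simple- and complex-cell nonlinearities, Poisson-like stochasticity) feeding a conventional CNN back-end. The channel counts, Gabor parameterization, noise model, ResNet-18 back-end, and helper names (gabor_kernel, VOneBlockSketch, VOneNetSketch) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the VOneNet idea described above (assumptions noted inline).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


def gabor_kernel(size, sigma, theta, wavelength, phase):
    """Build one 2D Gabor kernel (illustrative parameterization)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_r = xs * math.cos(theta) + ys * math.sin(theta)
    y_r = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_r ** 2 + y_r ** 2) / (2 * sigma ** 2))
    carrier = torch.cos(2 * math.pi * x_r / wavelength + phase)
    return envelope * carrier


class VOneBlockSketch(nn.Module):
    """Fixed (untrained) V1-like front-end: Gabors + nonlinearities + noise."""

    def __init__(self, n_simple=32, n_complex=32, ksize=25):
        super().__init__()
        self.n_simple, self.n_complex = n_simple, n_complex
        kernels = []
        for i in range(n_simple + n_complex):
            theta = math.pi * i / (n_simple + n_complex)   # evenly spaced orientations
            wavelength = 6.0 + 2.0 * (i % 4)               # a few spatial frequencies
            sigma = 0.4 * wavelength
            if i < n_simple:                               # simple cell: single phase
                kernels.append(gabor_kernel(ksize, sigma, theta, wavelength, 0.0))
            else:                                          # complex cell: quadrature pair
                kernels.append(gabor_kernel(ksize, sigma, theta, wavelength, 0.0))
                kernels.append(gabor_kernel(ksize, sigma, theta, wavelength, math.pi / 2))
        weight = torch.stack(kernels).unsqueeze(1)         # (filters, 1, k, k), luminance input
        self.register_buffer("weight", weight)             # fixed weights: never trained
        self.noise_scale = 1.0

    def forward(self, x):
        gray = x.mean(dim=1, keepdim=True)                 # collapse RGB to luminance
        resp = F.conv2d(gray, self.weight, stride=2, padding=self.weight.shape[-1] // 2)
        simple = F.relu(resp[:, : self.n_simple])          # simple cells: half-wave rectification
        quad = resp[:, self.n_simple:]
        complex_ = torch.sqrt(quad[:, 0::2] ** 2 + quad[:, 1::2] ** 2 + 1e-6)  # energy model
        rates = torch.cat([simple, complex_], dim=1)
        # Poisson-like stochasticity: noise variance grows with the mean response.
        noise = self.noise_scale * torch.sqrt(rates.clamp(min=0.0)) * torch.randn_like(rates)
        return rates + noise


class VOneNetSketch(nn.Module):
    """VOneBlock front-end followed by a conventional CNN back-end."""

    def __init__(self, n_classes=1000):
        super().__init__()
        self.vone = VOneBlockSketch()
        backbone = models.resnet18(weights=None, num_classes=n_classes)
        # Replace the back-end's stem so it accepts the 64 V1-like channels.
        backbone.conv1 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(self.vone(x))


if __name__ == "__main__":
    model = VOneNetSketch()
    logits = model(torch.rand(2, 3, 224, 224))             # two random RGB images
    print(logits.shape)                                     # torch.Size([2, 1000])
```

Note that the front-end weights are registered as buffers rather than trainable parameters, reflecting the fixed-weight front-end the abstract describes; in the actual VOneBlock the Gabor bank is biologically constrained by primate V1 response properties rather than set by the simple heuristics used here.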
Author Information
Joel Dapello (Harvard University)
Tiago Marques (MIT)
Martin Schrimpf (MIT)
Franziska Geiger (MIT)
David Cox (MIT-IBM Watson AI Lab)
James J DiCarlo (Massachusetts Institute of Technology)
Prof. DiCarlo received his Ph.D. in biomedical engineering and his M.D. from Johns Hopkins in 1998, and did his postdoctoral training in primate visual neurophysiology at Baylor College of Medicine. He joined the MIT faculty in 2002. He is a Sloan Fellow, a Pew Scholar, and a McKnight Scholar. His lab’s research goal is a computational understanding of the brain mechanisms that underlie object recognition. They use large-scale neurophysiology, brain imaging, optogenetic methods, and high-throughput computational simulations to understand how the primate ventral visual stream is able to untangle object identity from other latent image variables such as object position, scale, and pose. They have shown that populations of neurons at the highest cortical visual processing stage (IT) rapidly convey explicit representations of object identity, and that this ability is reshaped by natural visual experience. They have also shown how visual recognition tests can be used to discover new, high-performing bio-inspired algorithms. This understanding may inspire new machine vision systems, new neural prosthetics, and a foundation for understanding how high-level visual representation is altered in conditions such as agnosia, autism and dyslexia.
Related Events (a corresponding poster, oral, or spotlight)
-
2020 Poster: Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations »
Tue. Dec 8th, 05:00 -- 07:00 AM, Poster Session 0 #113
More from the Same Authors
-
2021 : ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation »
Chuang Gan · Jeremy Schwartz · Seth Alter · Damian Mrowca · Martin Schrimpf · James Traer · Julian De Freitas · Jonas Kubilius · Abhishek Bhandwaldar · Nick Haber · Megumi Sano · Kuno Kim · Elias Wang · Michael Lingelbach · Aidan Curtis · Kevin Feigelis · Daniel Bear · Dan Gutfreund · David Cox · Antonio Torralba · James J DiCarlo · Josh Tenenbaum · Josh McDermott · Dan Yamins
-
2022 : Measuring the Alignment of ANNs and Primate V1 on Luminance and Contrast Response Characteristics »
Stephanie Olaiya · Tiago Marques · James J DiCarlo
-
2022 : Implementing Divisive Normalization in CNNs Improves Robustness to Common Image Corruptions »
Andrew Cirincione · Reginald Verrier · Artiom Bic · Stephanie Olaiya · James J DiCarlo · Lawrence Udeigwe · Tiago Marques
-
2022 : Primate Inferotemporal Cortex Neurons Generalize Better to Novel Image Distributions Than Analogous Deep Neural Networks Units »
Marliawaty I Gusti Bagus · Tiago Marques · Sachi Sanghavi · James J DiCarlo · Martin Schrimpf
-
2022 : A report on recent experimental tests of two predictions of contemporary computable models of the biological deep neural network underlying primate visual intelligence »
James J DiCarlo
-
2022 Poster: How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning? »
Chengxu Zhuang · Ziyu Xiang · Yoon Bai · Xiaoxuan Jia · Nicholas Turk-Browne · Kenneth Norman · James J DiCarlo · Dan Yamins
-
2021 : Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs »
Avinash Baidya · Joel Dapello · James J DiCarlo · Tiago Marques
-
2021 Poster: Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception »
Joel Dapello · Jenelle Feather · Hang Le · Tiago Marques · David Cox · Josh McDermott · James J DiCarlo · Sueyeon Chung
-
2020 : Closing Remarks »
David Cox · Alexander Gray
-
2020 Expo Workshop: Perspectives on Neurosymbolic Artificial Intelligence Research »
Alexander Gray · David Cox · Luis Lastras
-
2020 : Opening Remarks »
David Cox
-
2019 Poster: More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation »
Quanfu Fan · Chun-Fu (Richard) Chen · Hilde Kuehne · Marco Pistoia · David Cox
-
2019 Poster: Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs »
Jonas Kubilius · Martin Schrimpf · Kohitij Kar · Rishi Rajalingham · Ha Hong · Najib Majaj · Elias Issa · Pouya Bashivan · Jonathan Prescott-Roy · Kailyn Schmidt · Aran Nayebi · Daniel Bear · Daniel Yamins · James J DiCarlo
-
2019 Oral: Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs »
Jonas Kubilius · Martin Schrimpf · Ha Hong · Najib Majaj · Rishi Rajalingham · Elias Issa · Kohitij Kar · Pouya Bashivan · Jonathan Prescott-Roy · Kailyn Schmidt · Aran Nayebi · Daniel Bear · Daniel Yamins · James J DiCarlo
-
2019 Poster: ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization »
Xiangyi Chen · Sijia Liu · Kaidi Xu · Xingguo Li · Xue Lin · Mingyi Hong · David Cox
-
2018 : Lunch & Posters »
Haytham Fayek · German Parisi · Brian Xu · Pramod Kaushik Mudrakarta · Sophie Cerf · Sarah Wassermann · Davit Soselia · Rahaf Aljundi · Mohamed Elhoseiny · Frantzeska Lavda · Kevin J Liang · Arslan Chaudhry · Sanmit Narvekar · Vincenzo Lomonaco · Wesley Chung · Michael Chang · Ying Zhao · Zsolt Kira · Pouya Bashivan · Banafsheh Rafiee · Oleksiy Ostapenko · Andrew Jones · Christos Kaplanis · Sinan Kalkan · Dan Teng · Xu He · Vincent Liu · Somjit Nath · Sungsoo Ahn · Ting Chen · Shenyang Huang · Yash Chandak · Nathan Sprague · Martin Schrimpf · Tony Kendall · Jonathan Richard Schwarz · Michael Li · Yunshu Du · Yen-Chang Hsu · Samira Abnar · Bo Wang
-
2018 Poster: Task-Driven Convolutional Recurrent Models of the Visual System »
Aran Nayebi · Daniel Bear · Jonas Kubilius · Kohitij Kar · Surya Ganguli · David Sussillo · James J DiCarlo · Daniel Yamins
-
2017 : Panel on "What neural systems can teach us about building better machine learning systems" »
Timothy Lillicrap · James J DiCarlo · Christopher Rozell · Viren Jain · Nathan Kutz · William Gray Roncal · Bingni Brunton
-
2017 : Can brain data be used to reverse engineer the algorithms of human perception? »
James J DiCarlo
-
2013 Poster: Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream »
Daniel L Yamins · Ha Hong · Charles Cadieu · James J DiCarlo
-
2013 Tutorial: Mechanisms Underlying Visual Object Recognition: Humans vs. Neurons vs. Machines »
James J DiCarlo