Towards robust vision by multi-task learning on monkey visual cortex
Shahd Safarani · Arne Nix · Konstantin Willeke · Santiago Cadena · Kelli Restivo · George Denfield · Andreas Tolias · Fabian Sinz

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Deep neural networks set the state of the art across many tasks in computer vision, but their ability to generalize to simple image distortions is surprisingly fragile. In contrast, the mammalian visual system is robust to a wide range of perturbations. Recent work suggests that this generalization ability can be explained by useful inductive biases encoded in the representations of visual stimuli throughout the visual cortex. Here, we successfully leveraged these inductive biases with a multi-task learning approach: we jointly trained a deep network to perform image classification and to predict neural activity in macaque primary visual cortex (V1) in response to the same natural stimuli. We measured the out-of-distribution generalization abilities of our resulting network by testing its robustness to common image distortions. We found that co-training on monkey V1 data indeed leads to increased robustness despite the absence of those distortions during training. Additionally, we showed that our network's robustness is often very close to that of an Oracle network where parts of the architecture are directly trained on noisy images. Our results also demonstrated that the network's representations become more brain-like as their robustness improves. Using a novel constrained reconstruction analysis, we investigated what makes our brain-regularized network more robust. We found that our monkey co-trained network is more sensitive to content than noise when compared to a Baseline network that we trained for image classification alone. Using DeepGaze-predicted saliency maps for ImageNet images, we found that the monkey co-trained network tends to be more sensitive to salient regions in a scene, reminiscent of existing theories on the role of V1 in the detection of object borders and bottom-up saliency.
Overall, our work expands the promising research avenue of transferring inductive biases from biological to artificial neural networks on the representational level, and provides a novel analysis of the effects of our transfer.
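The co-training scheme described in the abstract can be illustrated with a minimal sketch: a shared feature stage feeds two heads, one for image classification (cross-entropy loss) and one for predicting recorded V1 responses (here a mean-squared-error loss), and the two losses are combined with a trade-off weight. All layer sizes, the loss weighting, and the use of random stand-in data are illustrative assumptions, not the authors' actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "core" (stand-in for the convolutional backbone) and two heads.
# Sizes are hypothetical: 64-d inputs, 16-d shared features,
# 10 image classes, 50 recorded V1 neurons.
W_core = rng.standard_normal((16, 64)) * 0.1
W_cls = rng.standard_normal((10, 16)) * 0.1   # classification head
W_v1 = rng.standard_normal((50, 16)) * 0.1    # neural-prediction head

x = rng.standard_normal((8, 64))              # batch of image features
y = rng.integers(0, 10, size=8)               # class labels
r = rng.random((8, 50))                       # stand-in for measured V1 activity

h = np.maximum(x @ W_core.T, 0.0)             # shared representation (ReLU)
logits = h @ W_cls.T                          # task 1: classification
pred_r = h @ W_v1.T                           # task 2: neural prediction

# Cross-entropy for the classification task (log-softmax, numerically stable)
logp = logits - logits.max(axis=1, keepdims=True)
logp = logp - np.log(np.exp(logp).sum(axis=1, keepdims=True))
ce = -logp[np.arange(len(y)), y].mean()

# MSE for the neural-prediction task, weighted by a trade-off
# hyperparameter lam (the relative strength of the brain regularizer)
lam = 1.0
mse = ((pred_r - r) ** 2).mean()

loss = ce + lam * mse  # single objective optimized for both tasks
```

Because both heads read from the same shared features, gradients from the neural-prediction loss shape the representation used for classification, which is the mechanism by which the V1 data can act as a regularizer.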

Author Information

Shahd Safarani (University of Tübingen)
Arne Nix (University of Tübingen)
Konstantin Willeke (University of Tübingen)
Santiago Cadena (University of Tübingen)
Kelli Restivo (Baylor College of Medicine)
George Denfield (Baylor College of Medicine)
Andreas Tolias (Baylor College of Medicine)
Fabian Sinz (University of Tübingen)

More from the Same Authors

  • 2021 Spotlight: A flow-based latent state generative model of neural population responses to natural images »
    Mohammad Bashiri · Edgar Walker · Konstantin-Klemens Lurz · Akshay Jagadish · Taliah Muhammad · Zhiwei Ding · Zhuokun Ding · Andreas Tolias · Fabian Sinz
  • 2021 Poster: A flow-based latent state generative model of neural population responses to natural images »
    Mohammad Bashiri · Edgar Walker · Konstantin-Klemens Lurz · Akshay Jagadish · Taliah Muhammad · Zhiwei Ding · Zhuokun Ding · Andreas Tolias · Fabian Sinz
  • 2020 Poster: Factorized Neural Processes for Neural Processes: K-Shot Prediction of Neural Responses »
    Ronald (James) Cotton · Fabian Sinz · Andreas Tolias
  • 2019 : Poster Session »
    Pravish Sainath · Mohamed Akrout · Charles Delahunt · Nathan Kutz · Guangyu Robert Yang · Joseph Marino · L F Abbott · Nicolas Vecoven · Damien Ernst · andrew warrington · Michael Kagan · Kyunghyun Cho · Kameron Harris · Leopold Grinberg · John J. Hopfield · Dmitry Krotov · Taliah Muhammad · Erick Cobos · Edgar Walker · Jacob Reimer · Andreas Tolias · Alexander Ecker · Janaki Sheth · Yu Zhang · Maciej Wołczyk · Jacek Tabor · Szymon Maszke · Roman Pogodin · Dane Corneil · Wulfram Gerstner · Baihan Lin · Guillermo Cecchi · Jenna M Reinen · Irina Rish · Guillaume Bellec · Darjan Salaj · Anand Subramoney · Wolfgang Maass · Yueqi Wang · Ari Pakman · Jin Hyung Lee · Liam Paninski · Bryan Tripp · Colin Graber · Alex Schwing · Luke Prince · Gabriel Ocker · Michael Buice · Benjamin Lansdell · Konrad Kording · Jack Lindsey · Terrence Sejnowski · Matthew Farrell · Eric Shea-Brown · Nicolas Farrugia · Victor Nepveu · Jiwoong Im · Kristin Branson · Brian Hu · Ramakrishnan Iyer · Stefan Mihalas · Sneha Aenugu · Hananel Hazan · Sihui Dai · Tan Nguyen · Doris Tsao · Richard Baraniuk · Anima Anandkumar · Hidenori Tanaka · Aran Nayebi · Stephen Baccus · Surya Ganguli · Dean Pospisil · Eilif Muller · Jeffrey S Cheng · Gaël Varoquaux · Kamalaker Dadi · Dimitrios C Gklezakos · Rajesh PN Rao · Anand Louis · Christos Papadimitriou · Santosh Vempala · Naganand Yadati · Daniel Zdeblick · Daniela M Witten · Nicholas Roberts · Vinay Prabhu · Pierre Bellec · Poornima Ramesh · Jakob H Macke · Santiago Cadena · Guillaume Bellec · Franz Scherr · Owen Marschall · Robert Kim · Hannes Rapp · Marcio Fonseca · Oliver Armitage · Jiwoong Im · Thomas Hardcastle · Abhishek Sharma · Wyeth Bair · Adrian Valente · Shane Shang · Merav Stern · Rutuja Patil · Peter Wang · Sruthi Gorantla · Peter Stratton · Tristan Edwards · Jialin Lu · Martin Ester · Yurii Vlasov · Siavash Golkar
  • 2019 Poster: Learning from brains how to regularize machines »
    Zhe Li · Wieland Brendel · Edgar Walker · Erick Cobos · Taliah Muhammad · Jacob Reimer · Matthias Bethge · Fabian Sinz · Xaq Pitkow · Andreas Tolias
  • 2018 Poster: Stimulus domain transfer in recurrent models for large scale cortical population prediction on video »
    Fabian Sinz · Alexander Ecker · Paul Fahey · Edgar Walker · Erick M Cobos · Emmanouil Froudarakis · Dimitri Yatsenko · Xaq Pitkow · Jacob Reimer · Andreas Tolias
  • 2016 : From Brains to Bits and Back Again »
    Yoshua Bengio · Terrence Sejnowski · Christos H Papadimitriou · Jakob H Macke · Demis Hassabis · Alyson Fletcher · Andreas Tolias · Jascha Sohl-Dickstein · Konrad P Koerding
  • 2015 : Methods overview: Studying the function and structure of microcircuits »
    Andreas Tolias