Oral
Passive attention in artificial neural networks predicts human visual selectivity
Thomas Langlois · Haicheng Zhao · Erin Grant · Ishita Dasgupta · Tom Griffiths · Nori Jacoby

Fri Dec 10 12:00 AM -- 12:15 AM (PST)

Developments in machine learning interpretability techniques over the past decade have provided new tools for observing the image regions that are most informative for classification and localization in artificial neural networks (ANNs). Are the same regions similarly informative to human observers? Using data from 79 new experiments and 7,810 participants, we show that passive attention techniques reveal significant overlap with human visual selectivity estimates derived from 6 distinct behavioral tasks, including visual discrimination, spatial localization, recognizability, free-viewing, cued-object search, and saliency search fixations. We find that input visualizations derived from relatively simple ANN architectures probed using guided backpropagation methods are the best predictors of a shared component in the joint variability of the human measures. We validate these correlational results with causal manipulations using recognition experiments. In a speeded recognition experiment, images masked with ANN attention maps were easier for humans to classify than images masked with control masks. Similarly, recognition performance in the same ANN models was influenced by masking input images with human visual selectivity maps. This work contributes a new approach to evaluating the biological and psychological validity of leading ANNs as models of human vision: examining how their visual selectivity to the information contained in images converges with, and diverges from, that of human observers.
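To make the pipeline concrete, below is a minimal sketch of how guided-backpropagation attention maps can be extracted from a classifier and then used to mask inputs, in the spirit of the causal experiments described in the abstract. It assumes PyTorch with a recent torchvision; the model choice (VGG16), the channel-collapsing step, and the mask_image helper are illustrative assumptions, not the authors' exact pipeline.

import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

# Guided backpropagation: during the backward pass, let only positive
# gradients flow through each ReLU (on top of ReLU's usual gating).
def _guided_relu_hook(module, grad_input, grad_output):
    return (torch.clamp(grad_input[0], min=0.0),)

for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False  # in-place ops interfere with backward hooks
        m.register_full_backward_hook(_guided_relu_hook)

def attention_map(image, target_class):
    """Per-pixel selectivity map for one normalized (1, 3, H, W) image."""
    image = image.clone().requires_grad_(True)
    logits = model(image)
    logits[0, target_class].backward()
    # Collapse color channels into a single 2-D map scaled to [0, 1].
    amap = image.grad.detach().abs().amax(dim=1).squeeze(0)
    return amap / (amap.max() + 1e-8)

def mask_image(image, amap, keep=0.3):
    """Keep only the top `keep` fraction of pixels under the attention
    map, graying out the rest (a simplifying choice for illustration)."""
    thresh = torch.quantile(amap.flatten(), 1.0 - keep)
    mask = (amap >= thresh).float()            # (H, W) binary mask
    return image * mask + 0.5 * (1.0 - mask)   # gray background elsewhere

The reverse manipulation on the ANN side follows the same pattern: apply mask_image with a human-derived selectivity map in place of amap and measure how classification accuracy changes.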

Author Information

Thomas Langlois (Princeton University)
Haicheng Zhao (Princeton University)
Erin Grant (UC Berkeley)
Ishita Dasgupta (Harvard University)
Tom Griffiths (Princeton University)
Nori Jacoby (Max Planck Institute for Empirical Aesthetics)

I'm interested in exploring the role of culture in auditory perception, using iterated learning alongside classical psychophysical methods to characterize perceptual biases in music and speech rhythms in populations around the world. Other work has focused on the mathematical modeling of sensorimotor synchronization in the form of tapping experiments, as well as the application of machine-learning techniques to model aspects of musical syntax, including tonal harmony, birdsong, and the perception of musical form. I am currently a Research Group Leader at the Max Planck Institute for Empirical Aesthetics in Frankfurt, where I direct the "Computational Auditory Perception" group. Previously, I was a Presidential Scholar in Society and Neuroscience at Columbia University, a postdoc at the McDermott Computational Audition Lab at MIT, and a visiting postdoctoral researcher in Tom Griffiths's Computational Cognitive Science Lab at Berkeley. I completed my Ph.D. at the Edmond and Lily Safra Center for Brain Sciences (ELSC) at the Hebrew University of Jerusalem under the supervision of Naftali Tishby and Merav Ahissar, and hold an M.A. in mathematics from the same institution. My research has been published in journals including Current Biology, Science, Nature, Nature Scientific Reports, Philosophical Transactions B, Journal of Neuroscience, Journal of Vision, and Psychonomic Bulletin and Review.
