
Harmonizing the object recognition strategies of deep neural networks with humans
Thomas FEL · Ivan F Rodriguez Rodriguez · Drew Linsley · Thomas Serre

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #525

The many successes of deep neural networks (DNNs) over the past decade have largely been driven by computational scale rather than insights from biological intelligence. Here, we explore if these trends have also carried concomitant improvements in explaining the visual strategies humans rely on for object recognition. We do this by comparing two related but distinct properties of visual strategies in humans and DNNs: where they believe important visual features are in images and how they use those features to categorize objects. Across 84 different DNNs trained on ImageNet and three independent datasets measuring the where and the how of human visual strategies for object recognition on those images, we find a systematic trade-off between DNN categorization accuracy and alignment with human visual strategies for object recognition. State-of-the-art DNNs are progressively becoming less aligned with humans as their accuracy improves. We rectify this growing issue with our neural harmonizer: a general-purpose training routine that both aligns DNN and human visual strategies and improves categorization accuracy. Our work represents the first demonstration that the scaling laws that are guiding the design of DNNs today have also produced worse models of human vision. We release our code and data at https://serre-lab.github.io/Harmonization to help the field build more human-like DNNs.
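To make the idea concrete, here is a minimal sketch of a harmonization-style training loss, written as an assumption from the abstract rather than the authors' released implementation (their code at the URL above is the reference): the usual cross-entropy objective is combined with a term that pulls the model's input-gradient saliency toward human feature-importance maps. The function names (`saliency`, `harmonized_loss`) and the plain MSE alignment term are illustrative choices, not the paper's exact formulation.

```python
# Hypothetical sketch of a harmonization-style loss (assumption based on the
# abstract, not the authors' exact training routine).
import torch
import torch.nn.functional as F

def saliency(model, images, labels):
    """Input-gradient saliency: |d logit_y / d image|, summed over channels."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    # Score of the true class for each example, summed so one backward pass
    # yields per-example input gradients.
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, images, create_graph=True)
    return grad.abs().sum(dim=1)  # shape (N, H, W)

def harmonized_loss(model, images, labels, human_maps, lam=1.0):
    """Cross-entropy plus alignment between model and human importance maps."""
    ce = F.cross_entropy(model(images), labels)
    sal = saliency(model, images, labels)
    # Normalize both maps to unit max so the MSE compares spatial patterns,
    # not absolute scales.
    sal = sal / (sal.amax(dim=(1, 2), keepdim=True) + 1e-8)
    hum = human_maps / (human_maps.amax(dim=(1, 2), keepdim=True) + 1e-8)
    align = F.mse_loss(sal, hum)
    return ce + lam * align
```

In this sketch `lam` trades off task accuracy against alignment with the human maps; the point of the paper is that adding such a term need not cost accuracy, and can improve it.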

Author Information

Thomas FEL (Brown University)
Ivan F Rodriguez Rodriguez (Brown University)
Drew Linsley (Brown University)

We need artificial vision to create intelligent machines that can reason about the world, but existing artificial vision systems cannot solve many of the visual challenges that we encounter and routinely solve in our daily lives. I look to biological vision to inspire new solutions to challenges faced by artificial vision. I do this by testing complementary hypotheses that connect computational theory with systems- and cognitive-neuroscience level experimental research:
1. Computational challenges for artificial vision can be identified through systematic comparisons with biological vision, and solved with algorithms inspired by those of biological vision.
2. Improved algorithms for artificial vision will lead to better methods for gleaning insight from large-scale experimental data, and better models for understanding the relationship between neural computation and perception.

Thomas Serre (Brown University)
