

Poster in Workshop: Attributing Model Behavior at Scale (ATTRIB)

Object Detection in Deep Neural Networks Differs from Humans in the Periphery

Anne Harrington · Vasha DuTell · Mark Hamilton · Ayush Tewari · Simon Stent · Bill Freeman · Ruth Rosenholtz


Abstract:

To understand how the strategies used by object detection models compare to those of human vision, we simulate peripheral vision in object detection models at the input stage. We collect human data on object change detection in the periphery and compare it to detection models with a simulated periphery. We find that, unlike humans, models are highly sensitive to the texture-like transformation of peripheral vision. Not only do models underperform compared to humans, but they also fail to show the same clutter effects as humans, even when the model task is fixed to closely mimic the human one. Training on peripheral input boosts performance on the change detection task, but appears to aid object localization in the periphery far more than object identification. This suggests that human-like performance is not attributable to input data alone, and that fully addressing the differences between human and model detection may require changes farther downstream. In the future, improving the alignment between object detection models and human representations could help us build models with more human-explainable detection strategies.
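
To illustrate the idea of simulating peripheral vision "at the input stage," here is a minimal, hypothetical sketch that degrades an image as a function of eccentricity before it is passed to a detector. It uses eccentricity-dependent Gaussian blur as a simplified stand-in for the texture-like peripheral transformation referenced in the abstract; the fixation point, ring boundaries, and blur strengths are illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of input-stage peripheral degradation.
# Eccentricity-dependent Gaussian blur is a simplified stand-in for the
# texture-like peripheral transformation described in the abstract; the
# fixation point, ring edges, and sigmas are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter


def simulate_periphery(image, fixation,
                       ring_edges=(0.1, 0.25, 0.5),
                       sigmas=(0.0, 2.0, 4.0, 8.0)):
    """Blur an H x W x C float image more strongly with distance from fixation.

    fixation: (row, col) in pixels.
    ring_edges: eccentricity thresholds as fractions of the image diagonal.
    sigmas: blur strength per ring (one more entry than ring_edges).
    """
    h, w = image.shape[:2]
    rows, cols = np.mgrid[0:h, 0:w]
    # Eccentricity of every pixel, normalized by the image diagonal.
    ecc = np.hypot(rows - fixation[0], cols - fixation[1]) / np.hypot(h, w)

    # Pre-blur the whole image once per ring, then composite by eccentricity.
    blurred = [image if s == 0 else
               np.stack([gaussian_filter(image[..., c], s)
                         for c in range(image.shape[2])], axis=-1)
               for s in sigmas]

    out = np.empty_like(image)
    edges = (0.0,) + tuple(ring_edges) + (np.inf,)
    for i in range(len(sigmas)):
        mask = (ecc >= edges[i]) & (ecc < edges[i + 1])
        out[mask] = blurred[i][mask]
    return out


# Example: degrade an image fixated at its center before running a detector on it.
img = np.random.rand(480, 640, 3).astype(np.float32)
peripheral_img = simulate_periphery(img, fixation=(240, 320))
```

The same transform could be applied to training images to approximate the "training on peripheral input" condition, with the detector itself left unchanged.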
