

Poster in Workshop: Gaze Meets ML

Exploring Foveation and Saccade for Improved Weakly-Supervised Localization

Timur Ibrayev · Manish Nagaraj · Amitangshu Mukherjee · Kaushik Roy

Keywords: [ object detection ] [ object localization ] [ weakly supervised learning ] [ saccades ] [ neuro-inspired algorithms ] [ optical illusions ] [ active vision ] [ foveation ] [ deep learning ]

Sat 16 Dec 9:45 a.m. PST — 11:30 a.m. PST

Abstract:

Deep neural networks have become the de facto choice as feature extraction engines, ubiquitously used for computer vision tasks. The standard approach is to process every input at uniform resolution in a one-shot manner and make all predictions at once. Human vision, in contrast, is an "active" process: it not only switches from one focus point to another within the visual field, but also applies spatially varying attention centered at each focus point. To bridge this gap, we propose incorporating the biologically plausible mechanisms of foveation and saccades to build an active object localization framework. Foveation enables the framework to process different regions of the input at varying levels of detail, while saccades allow it to shift the focus point of those foveated regions. Our experiments show that these mechanisms improve the quality of predicted bounding boxes by capturing all the essential object parts while minimizing unnecessary background clutter. They also make the method more resilient, enabling it to detect multiple objects despite being trained only on images containing a single object each. Finally, using the well-known "duck-rabbit" optical illusion, we show that our method exhibits human-like behavior.
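The abstract does not specify how foveation or saccade selection are implemented, so the following is only a minimal illustrative sketch of the two mechanisms it names: foveation approximated as eccentricity-dependent blur around a fixation point, and saccades as picking the next fixation from a saliency map with inhibition of return. All function names, parameters, and values here are hypothetical, not the authors' method.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def foveate(image, fixation, fovea_radius=8, levels=3):
    """Crude foveation: keep full detail near the fixation point and
    apply progressively stronger box blur to rings of increasing
    eccentricity. `fovea_radius` and `levels` are illustrative choices."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fixation[0], xs - fixation[1])  # distance from fixation
    out = image.astype(float).copy()
    for level in range(1, levels + 1):
        k = 2 * level + 1  # blur kernel grows with eccentricity band
        blurred = uniform_filter(image.astype(float), size=k)
        ring = ecc >= fovea_radius * level  # pixels at this eccentricity or beyond
        out[ring] = blurred[ring]  # outer bands end up with the strongest blur
    return out


def next_saccade(saliency, visited, inhibit_radius=5):
    """Pick the next fixation as the saliency maximum, suppressing
    already-visited locations (inhibition of return)."""
    sal = saliency.astype(float).copy()
    ys, xs = np.mgrid[0:sal.shape[0], 0:sal.shape[1]]
    for fy, fx in visited:
        sal[np.hypot(ys - fy, xs - fx) < inhibit_radius] = -np.inf
    return np.unravel_index(np.argmax(sal), sal.shape)


# Toy usage: foveate around one fixation, then saccade to the next peak.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
fov = foveate(img, fixation=(32, 32))

sal = np.zeros((64, 64))
sal[10, 10] = 2.0  # current (visited) peak
sal[50, 50] = 1.0  # next most salient location
fix2 = next_saccade(sal, visited=[(10, 10)])
```

In a full pipeline one would re-run the feature extractor on each foveated view and refine the bounding box across fixations; that loop is omitted here since the abstract gives no details of it.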
