

Poster in Workshop: Gaze Meets ML

Leveraging Multi-Modal Saliency and Fusion for Gaze Target Detection

Athul Mathew · Arshad Ali Khan · Thariq Khalid · Faroq AL-Tam · Riad Souissi

Keywords: [ depth map ] [ 3D gaze ] [ gaze target detection ] [ fusion ] [ multi-modal ] [ saliency ] [ point cloud ] [ free-viewing ] [ gaze-following ] [ 3D projection ]

Sat 16 Dec 9:45 a.m. PST — 11:30 a.m. PST

Abstract: Gaze target detection (GTD) is the task of predicting where a person in an image is looking. It is challenging because it requires understanding the relationship between the person's head, body, and eyes, as well as the surrounding environment. In this paper, we propose a novel method for GTD that fuses multiple pieces of information extracted from an image. First, we project the 2D image into a 3D representation using monocular depth estimation. We then extract a depth-infused saliency map, which highlights the most salient ($\textit{attention-grabbing}$) regions in the image for the subject under consideration. We also extract face and depth modalities from the image, and finally fuse all the extracted modalities to identify the gaze target. We quantitatively evaluate our method, including ablation analyses, on three publicly available datasets (VideoAttentionTarget, GazeFollow, and GOO-Real) and show that it outperforms other state-of-the-art methods. These results suggest that our method is a promising new approach for GTD.
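The 2D-to-3D projection step described above rests on standard pinhole-camera geometry: given a per-pixel depth estimate and camera intrinsics, each pixel can be lifted into camera coordinates. The sketch below shows this generic backprojection; the function name and intrinsics values are illustrative assumptions, not details taken from the paper.

```python
# Standard pinhole-camera backprojection of a depth map into a 3D point
# cloud. The intrinsics (fx, fy, cx, cy) used here are illustrative
# values, not the paper's calibration.
import numpy as np


def depth_to_point_cloud(depth: np.ndarray,
                         fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Backproject an (H, W) depth map to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # pixel column -> camera X
    y = (v - cy) * z / fy  # pixel row    -> camera Y
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)


if __name__ == "__main__":
    depth = np.random.uniform(0.5, 5.0, size=(480, 640))  # dummy depth map
    pts = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
    print(pts.shape)  # (307200, 3)
```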
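Likewise, a minimal sketch of how the multi-modal fusion might be wired up: one convolutional branch per modality (depth-infused saliency, face crop, depth map) whose features are concatenated and decoded into a gaze-target heatmap. All module names, channel counts, and shapes here are hypothetical and do not reflect the authors' actual architecture.

```python
# Hypothetical fusion stage: one encoder branch per modality, features
# concatenated and decoded to a heatmap. Not the authors' implementation.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Small conv encoder used for each modality branch (illustrative)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )


class MultiModalGazeFusion(nn.Module):
    """Fuses saliency, face, and depth features into a gaze-target heatmap."""

    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.saliency_branch = conv_block(1, feat_ch)  # depth-infused saliency map
        self.face_branch = conv_block(3, feat_ch)      # face crop of the subject
        self.depth_branch = conv_block(1, feat_ch)     # monocular depth map
        # Fusion head: concatenate branch features, decode to a 1-channel heatmap.
        self.head = nn.Sequential(
            nn.Conv2d(3 * feat_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, kernel_size=1),
        )

    def forward(self, saliency, face, depth):
        fused = torch.cat(
            [self.saliency_branch(saliency),
             self.face_branch(face),
             self.depth_branch(depth)],
            dim=1,
        )
        return self.head(fused)  # (B, 1, H/2, W/2) gaze-target heatmap


if __name__ == "__main__":
    model = MultiModalGazeFusion()
    saliency = torch.rand(1, 1, 64, 64)
    face = torch.rand(1, 3, 64, 64)
    depth = torch.rand(1, 1, 64, 64)
    print(model(saliency, face, depth).shape)  # torch.Size([1, 1, 32, 32])
```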
