Workshop: Gaze meets ML
Generating Attention Maps from Eye-gaze for the Diagnosis of Alzheimer's Disease
Carlos Antunes · Margarida Silveira
Convolutional neural networks (CNNs) are currently the best computational methods for the diagnosis of Alzheimer’s disease (AD) from neuroimaging. CNNs are able to automatically learn a hierarchy of spatial features, but they are not optimized to incorporate domain knowledge. In this work we study the generation of attention maps from the eye-gaze of a human expert viewing the brain scans (domain knowledge) to guide the deep model to focus on the regions most relevant for AD diagnosis. Two strategies to generate the maps from eye-gaze were investigated: the use of average class maps, and supervising a network to generate the attention maps. These approaches were compared with masking (hard attention) using regions of interest (ROI) and with CNNs using traditional attention mechanisms. For our experiments, we used positron emission tomography (PET) scans from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. For the task of normal control (NC) vs. AD classification, the best-performing model was the one using ROI masking, which achieved 95.6% accuracy, 0.4% higher than the baseline CNN.
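To illustrate the distinction the abstract draws between soft gaze-based attention and hard ROI masking, here is a minimal NumPy sketch. All array names, shapes, and the normalization scheme are illustrative assumptions, not details from the paper:

```python
import numpy as np

def soft_attention(features, gaze_map):
    """Weight CNN feature maps by a gaze-derived attention map (soft attention).

    features: (channels, H, W) activations; gaze_map: (H, W) fixation density.
    The map is scaled to [0, 1], so heavily fixated regions pass through
    unchanged while ignored regions are attenuated, not removed.
    """
    peak = gaze_map.max()
    weights = gaze_map / peak if peak > 0 else gaze_map
    return features * weights[None, :, :]  # broadcast over channels

def hard_roi_mask(features, roi_mask):
    """Zero out features outside a binary region of interest (hard attention)."""
    return features * roi_mask[None, :, :].astype(features.dtype)

# Toy example: 4 feature channels over an 8x8 spatial grid.
feats = np.ones((4, 8, 8))
gaze = np.zeros((8, 8))
gaze[2:6, 2:6] = 1.0          # hypothetical fixation density
roi = gaze > 0                # binary ROI covering the same region

soft = soft_attention(feats, gaze)
hard = hard_roi_mask(feats, roi)
```

Hard masking discards everything outside the ROI, whereas the soft map merely down-weights it, which is one reason the two strategies can behave differently as guidance for a classifier.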