

Poster in Workshop: Gaze meets ML

Comparing radiologists' gaze and saliency maps generated by interpretability methods for chest x-rays

Ricardo Bigolin Lanfredi · Ambuj Arora · Trafton Drew · Joyce Schroeder · Tolga Tasdizen

Keywords: [ eye tracking ] [ saliency maps ] [ chest X-rays ] [ radiology ] [ gaze ]


Abstract:

We use a dataset of eye-tracking data from five radiologists to compare the image regions that deep learning models use for their decisions with heatmaps of where radiologists looked. We conduct a class-independent analysis of the saliency maps generated by two methods selected from the literature: Grad-CAM and attention maps from an attention-gated model. For the comparison, we use shuffled metrics, which avoid biases from fixation locations. In one metric, we achieve scores comparable to an interobserver baseline, highlighting the potential of Grad-CAM saliency maps to mimic a radiologist's attention over an image. We also divide the dataset into subsets to evaluate the cases in which the similarities are higher.
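For concreteness, here is a minimal sketch (not the authors' code) of one common shuffled metric, shuffled AUC (sAUC), which scores a saliency map against gaze fixations while discounting fixation-location bias by drawing negatives from fixations pooled over other images. The array shapes, function name, and negative-sampling scheme are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def shuffled_auc(saliency, fixations, other_fixations, rng=None):
    """Shuffled AUC (sAUC) sketch.

    Saliency values at this image's fixation points are positives;
    values at fixation points pooled from *other* images are negatives,
    which discounts biases shared across images (e.g., center bias).

    saliency:        2D array, model saliency map (e.g., a Grad-CAM map
                     upsampled to image size).
    fixations:       (N, 2) int array of (row, col) fixations on this image.
    other_fixations: (M, 2) int array of fixations pooled from other
                     images; assumed M >= N so we can sample without
                     replacement.
    """
    rng = np.random.default_rng(rng)
    pos = saliency[fixations[:, 0], fixations[:, 1]]
    # Sample as many negatives as positives from the shuffled pool.
    idx = rng.choice(len(other_fixations), size=len(pos), replace=False)
    neg = saliency[other_fixations[idx, 0], other_fixations[idx, 1]]
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    scores = np.concatenate([pos, neg])
    return roc_auc_score(labels, scores)
```

The same function can be applied to an interobserver baseline by scoring one radiologist's gaze heatmap against the fixations of the remaining radiologists, which gives a reference value for how well any saliency map could be expected to match.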
