Keynote in Workshop: Gaze meets ML

Use of Machine Learning and Gaze Tracking to Predict Radiologists’ Decisions in Breast Cancer Detection

Claudia Mello-Thoms


Abstract:

Breast cancer is the most common cancer in women worldwide. In 2020, GLOBOCAN estimated that 2,261,419 new breast cancer cases were diagnosed around the world, corresponding to 11.7% of all cancers diagnosed. Moreover, incidence of this disease has been rising slowly in the US, by about 0.5% per year since the mid-2000s. The most commonly used imaging modality to screen for breast cancer is digital mammography, but it has low sensitivity (particularly in dense breasts) and a relatively high number of False Positives. Perhaps because of this, there has historically been much interest in developing computer-assisted tools to aid radiologists in the task of detecting early cancerous lesions. In 2022, breast imaging is a major area of interest for developers of Artificial Intelligence (AI), and applications to detect breast cancer account for 14% of all AI applications on the medical imaging market.
In my research I have taken a different approach from the path traditionally followed by AI applications. Instead of looking at the image to detect cancer, I decided to analyze the radiologist who is reading the image, and to predict the radiologist's decisions both in areas where he/she marks the location of a cancerous lesion (True and False Positives) and in areas that are fixated but do not elicit a mark (True and False Negatives). To carry out these analyses, I used eye-position recording to determine which areas of the image attracted visual attention. I have shown that radiologists are consistent in the errors that they make (FPs and FNs), and that a machine learning classifier can predict these errors with good accuracy. Recently we have developed a system that not only predicts the radiologist's decisions but also offers feedback, seeking to help correct those errors.
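To make the idea concrete, the pipeline above can be sketched as a classifier over gaze-derived features of each fixated image region. The features (total dwell time, number of fixations), the synthetic distributions, and the nearest-centroid classifier here are all illustrative assumptions, not the author's actual feature set or model:

```python
import random

random.seed(0)

# Hypothetical gaze features per fixated image region:
# (total dwell time in ms, number of fixations).
# In this sketch, regions where the reader errs (FP/FN) get longer,
# more fragmented dwell than correctly decided regions (TP/TN).
# These distributions are invented for illustration only.
def sample_region(is_error):
    if is_error:
        return (random.gauss(1200, 200), random.gauss(6, 1.5))
    return (random.gauss(500, 150), random.gauss(2, 1.0))

train = [(sample_region(e), e) for e in [True, False] * 50]
test = [(sample_region(e), e) for e in [True, False] * 20]

# Nearest-centroid classifier: average the feature vectors of each
# class, then assign a new region to the closer class centroid.
def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(2))

c_err = centroid([x for x, e in train if e])
c_ok = centroid([x for x, e in train if not e])

def predict_error(x):
    d_err = sum((a - b) ** 2 for a, b in zip(x, c_err))
    d_ok = sum((a - b) ** 2 for a, b in zip(x, c_ok))
    return d_err < d_ok

accuracy = sum(predict_error(x) == e for x, e in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the published work would use richer gaze features and a stronger learner; the point of the sketch is only that per-region gaze statistics, not image pixels, are the classifier's input.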
