Poster in Workshop: Learning Meaningful Representations of Life

Multimodal deep transfer learning for the analysis of optical coherence tomography scans and retinal fundus photographs

Zoi Tsangalidou · Edwin Fong · Josefine Vilsbøll Sundgaard · Trine J Abrahamsen · Kajsa Kvist


Abstract:

Deep learning methods are increasingly applied to ophthalmologic scans to diagnose eye diseases and to predict cardiovascular and renal outcomes. In this work, we create a multimodal deep learning model that combines retinal fundus photographs and optical coherence tomography scans, and we evaluate it on predictive tasks, matching state-of-the-art performance with a smaller dataset. We use saliency maps to show which regions of the eye morphology influence the model's predictions, and we benchmark the multimodal model against algorithms that use only the individual modalities.
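The abstract does not specify the fusion architecture; one common pattern for combining two imaging modalities is late fusion, where each modality is encoded separately and the embeddings are concatenated before a shared prediction head. The sketch below illustrates that pattern with toy linear encoders and illustrative dimensions; none of the weights, sizes, or function names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy per-modality 'encoder': linear projection + ReLU.
    A real model would use a CNN per modality instead."""
    return np.maximum(x @ w, 0.0)

# Hypothetical feature sizes: 64-dim fundus features, 32-dim OCT features,
# each embedded into a 16-dim space before fusion.
w_fundus = rng.standard_normal((64, 16))
w_oct = rng.standard_normal((32, 16))
w_head = rng.standard_normal((32, 1))  # head over the concatenated embeddings

def predict(fundus, oct_scan):
    # Late fusion: concatenate modality embeddings, then classify.
    z = np.concatenate([encode(fundus, w_fundus),
                        encode(oct_scan, w_oct)], axis=1)
    logits = z @ w_head
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid probability

batch = 4
p = predict(rng.standard_normal((batch, 64)),
            rng.standard_normal((batch, 32)))
print(p.shape)  # one probability per example: (4, 1)
```

Dropping one encoder branch (feeding only fundus or only OCT features into its own head) yields the single-modality baselines the abstract benchmarks against.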