Explainable medical image analysis by leveraging human-interpretable features through mutual information minimization
Erick M Cobos · Thomas Kuestner · Bernhard Schölkopf · Sergios Gatidis

Deep learning models used as computer-assisted diagnosis systems in a medical context achieve high accuracy in numerous tasks; however, explaining their predictions remains challenging. In the medical domain in particular, we aspire to models that are accurate and can also provide explanations for their outcomes. In this work, we propose a deep learning-based framework for medical image analysis that is inherently explainable while maintaining high prediction accuracy. To this end, we introduce a hybrid approach that uses human-interpretable as well as machine-learned features while minimizing their mutual information. Using images of skin lesions, we empirically show that our approach achieves human-level performance while being intrinsically interpretable.
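The abstract's core idea, penalizing statistical dependence between human-interpretable features and machine-learned features, can be illustrated with a simple proxy. The sketch below (not the authors' implementation; the function name and data are illustrative) uses a squared cross-covariance penalty, which is a cheap differentiable stand-in for a mutual-information penalty and is exact for jointly Gaussian features, where zero cross-covariance implies zero mutual information:

```python
import numpy as np

def cross_covariance_penalty(h, z):
    """Sum of squared cross-covariances between interpretable
    features h (n, d_h) and learned features z (n, d_z).
    Serves as a simple proxy for a mutual-information penalty:
    driving it to zero decorrelates the two feature sets."""
    h = h - h.mean(axis=0)
    z = z - z.mean(axis=0)
    c = h.T @ z / (len(h) - 1)      # (d_h, d_z) cross-covariance matrix
    return float(np.sum(c ** 2))

rng = np.random.default_rng(0)
h = rng.normal(size=(1000, 4))                         # stand-in interpretable features
z_dep = h[:, :2] + 0.1 * rng.normal(size=(1000, 2))    # "learned" features leaking h
z_ind = rng.normal(size=(1000, 2))                     # "learned" features independent of h

# The penalty is large when the learned features duplicate
# interpretable information, and near zero when they do not.
assert cross_covariance_penalty(h, z_dep) > cross_covariance_penalty(h, z_ind)
```

In practice, such a term would be added to the task loss so the machine-learned branch is pushed toward capturing only information that the human-interpretable features do not already explain; the paper itself minimizes mutual information, for which estimators such as MINE are commonly used.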

Author Information

Erick M Cobos (Max Planck Institute for Intelligent Systems)
Thomas Kuestner (University Hospital of Tuebingen)
Bernhard Schölkopf (MPI for Intelligent Systems, Tübingen)

Bernhard Schölkopf received degrees in mathematics (London) and physics (Tübingen), and a doctorate in computer science from the Technical University Berlin. He has researched at AT&T Bell Labs, at GMD FIRST, Berlin, at the Australian National University, Canberra, and at Microsoft Research Cambridge (UK). In 2001, he was appointed scientific member of the Max Planck Society and director at the MPI for Biological Cybernetics; in 2010 he founded the Max Planck Institute for Intelligent Systems. For further information, see www.kyb.tuebingen.mpg.de/~bs.

Sergios Gatidis (University of Tübingen)
