

Poster in Workshop: All Things Attention: Bridging Different Perspectives on Attention

Bounded logit attention: Learning to explain image classifiers

Thomas Baumhauer · Djordje Slijepcevic · Matthias Zeppelzauer

Keywords: [ convolutional neural networks ] [ explainable artificial intelligence ] [ image classification ] [ feature selection ] [ beta activation function ] [ self-learned explainability ]


Abstract:

Explainable artificial intelligence is the attempt to elucidate the workings of systems too complex to be directly accessible to human cognition, by means of suitable side information referred to as “explanations”. We present a trainable explanation module for convolutional image classifiers that we call bounded logit attention (BLA). The BLA module learns to select a subset of the convolutional feature map for each input instance, which then serves as an explanation for the classifier’s prediction. BLA overcomes several limitations of the instance-wise feature selection method “learning to explain” (L2X) introduced by Chen et al. (2018): 1) BLA scales to real-world-sized image classification problems, and 2) BLA offers a canonical way to learn explanations of variable size. Due to its modularity, BLA lends itself to transfer learning setups and can also be employed as a post-hoc add-on to trained classifiers. Beyond explainability, BLA may serve as a general-purpose method for the differentiable approximation of subset selection. In a user study we find that BLA explanations are preferred over explanations generated by the popular (Grad-)CAM method (Zhou et al., 2016; Selvaraju et al., 2017).
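To make the selection mechanism concrete, below is a minimal PyTorch sketch of a BLA-style module. This is an illustrative assumption, not the authors' reference implementation: the class name `BoundedLogitAttention`, the 1x1-convolution logits, the clamp-based upper bound, and the spatial softmax gating are simplifications; the paper's exact formulation (including the beta activation named in the keywords and the rule for variable-size selection) differs in detail.

```python
import torch
import torch.nn as nn


class BoundedLogitAttention(nn.Module):
    """Hypothetical sketch of a BLA-style explanation module.

    Computes one selection logit per spatial location of a convolutional
    feature map, bounds the logits from above, and gates the feature map
    with the resulting attention weights. The surviving locations act as
    a differentiable, instance-wise "subset" of the feature map.
    """

    def __init__(self, in_channels: int, bound: float = 0.0):
        super().__init__()
        # 1x1 convolution producing one selection logit per spatial location.
        self.logit_conv = nn.Conv2d(in_channels, 1, kernel_size=1)
        # Assumed upper bound on the logits (the "bounded logit"): capping
        # prevents any single location from dominating the softmax.
        self.bound = bound

    def forward(self, features: torch.Tensor):
        # features: (batch, channels, height, width)
        logits = self.logit_conv(features)             # (B, 1, H, W)
        bounded = torch.clamp(logits, max=self.bound)  # cap logits at the bound
        # Softmax over all spatial locations -> soft, differentiable selection.
        b, _, h, w = bounded.shape
        weights = torch.softmax(bounded.view(b, -1), dim=1).view(b, 1, h, w)
        # Gate the feature map; a downstream classifier head then sees only
        # the (softly) selected locations, which serve as the explanation.
        return features * weights, weights


if __name__ == "__main__":
    bla = BoundedLogitAttention(in_channels=512)
    fmap = torch.randn(2, 512, 7, 7)   # e.g. a ResNet-style feature map
    gated, weights = bla(fmap)
    print(gated.shape, weights.shape)  # (2, 512, 7, 7) and (2, 1, 7, 7)
```

Because the module only consumes and produces feature maps, it can be dropped between a pretrained convolutional backbone and its classifier head, which is what makes the transfer-learning and post-hoc usage described in the abstract plausible.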
