

Poster in Affinity Workshop: Black in AI

Impact of Feedback Type on Explanatory Interactive Learning

Misgina Tsighe Hagos · Kathleen Curran · Brian Mac Namee

Keywords: [ Transparency in AI ]


Abstract:

Explanatory Interactive Learning (XIL) collects user feedback on model explanations to implement a Human-in-the-Loop (HITL) interactive learning scenario. Although XIL has been used to improve classification performance in multiple domains, the impact of different user feedback types on model performance and explanation accuracy is not well studied. To guide future XIL work, we compare the effectiveness of two user feedback types in image classification tasks: (1) instructing an algorithm to ignore certain spurious image features, and (2) instructing an algorithm to focus on certain valid image features. We show that identifying and annotating spurious image features that a model finds salient yields better classification and explanation accuracy than user feedback that tells a model to focus on valid image features.
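To make the two feedback types concrete, here is a minimal PyTorch-style sketch of how they are commonly operationalized in XIL as explanation-loss penalties on input gradients (an RRR-style formulation; the abstract does not specify the authors' exact loss, so the function names, the `mask` convention, and the weight `lam` are illustrative assumptions):

```python
# Hypothetical sketch: the two XIL feedback types as penalties on
# input-gradient saliency. This is an assumed RRR-style formulation,
# not necessarily the authors' implementation.
import torch
import torch.nn.functional as F

def explanation_loss(model, images, labels, mask, feedback="ignore"):
    """Penalize saliency according to user feedback.

    mask: binary tensor with 1 marking the annotated region
          (spurious features for "ignore", valid features for "focus").
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    # Gradient of the true-class logits with respect to the input pixels.
    class_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(class_scores, images, create_graph=True)
    saliency = grads.abs()
    if feedback == "ignore":
        # (1) Ignore feedback: suppress saliency on spurious regions.
        return (saliency * mask).pow(2).sum()
    else:
        # (2) Focus feedback: suppress saliency outside valid regions.
        return (saliency * (1.0 - mask)).pow(2).sum()

def xil_loss(model, images, labels, mask, feedback="ignore", lam=1.0):
    # Standard classification loss plus the weighted explanation penalty.
    ce = F.cross_entropy(model(images), labels)
    return ce + lam * explanation_loss(model, images, labels, mask, feedback)
```

Under this formulation, the comparison in the poster amounts to training the same classifier with `feedback="ignore"` versus `feedback="focus"` and measuring both classification accuracy and explanation accuracy.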
