Poster
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin · Thomas FEL · Remi Cadene · Thomas Serre
A multitude of explainability methods has been proposed to help users better understand how modern AI systems make decisions. However, most performance metrics developed to evaluate these methods have remained largely theoretical -- without much consideration for the human end-user. In particular, it is not yet clear (1) how useful current explainability methods are in real-world scenarios, and (2) whether current performance metrics accurately reflect the usefulness of explanation methods for the end user. To fill this gap, we conducted psychophysics experiments at scale ($n=1,150$) to evaluate the usefulness of representative attribution methods in three real-world scenarios. Our results demonstrate that the degree to which individual attribution methods help human participants better understand an AI system varies widely across these scenarios. This suggests the need to move beyond quantitative improvements of current attribution methods, towards the development of complementary approaches that provide qualitatively different sources of information to human end-users.
Author Information
Julien Colin (Brown University, ELLIS Alicante)
Thomas FEL (Brown University)
Remi Cadene (Sorbonne University - LIP6)
Thomas Serre (Brown University)
More from the Same Authors
- 2022 : The emergence of visual simulation in task-optimized recurrent neural networks
  Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Drew Linsley · David Sheinberg · Thomas Serre
- 2022 Poster: Meta-Reinforcement Learning with Self-Modifying Networks
  Mathieu Chalvidal · Thomas Serre · Rufin VanRullen
- 2022 Poster: A Benchmark for Compositional Visual Reasoning
  Aimen Zerroug · Mohit Vaishnav · Julien Colin · Sebastian Musslick · Thomas Serre
- 2022 Poster: Diversity vs. Recognizability: Human-like generalization in one-shot generative models
  Victor Boutin · Lakshya Singhal · Xavier Thomas · Thomas Serre
- 2022 Poster: Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
  Paul Novello · Thomas FEL · David Vigouroux
- 2022 Poster: Harmonizing the object recognition strategies of deep neural networks with humans
  Thomas FEL · Ivan F Rodriguez Rodriguez · Drew Linsley · Thomas Serre
- 2021 Poster: Tracking Without Re-recognition in Humans and Machines
  Drew Linsley · Girik Malik · Junkyung Kim · Lakshmi Narasimhan Govindarajan · Ennio Mingolla · Thomas Serre
- 2021 Poster: Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
  Thomas FEL · Remi Cadene · Mathieu Chalvidal · Matthieu Cord · David Vigouroux · Thomas Serre
- 2020 Poster: Stable and expressive recurrent vision models
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Spotlight: Stable and expressive recurrent vision models
  Drew Linsley · Alekh Karkada Ashok · Lakshmi Narasimhan Govindarajan · Rex Liu · Thomas Serre
- 2020 Session: Orals & Spotlights Track 29: Neuroscience
  Aasa Feragen · Thomas Serre
- 2019 Poster: RUBi: Reducing Unimodal Biases for Visual Question Answering
  Remi Cadene · Corentin Dancette · Hedi Ben younes · Matthieu Cord · Devi Parikh
- 2018 Poster: Learning long-range spatial dependencies with horizontal gated recurrent units
  Drew Linsley · Junkyung Kim · Vijay Veerabadran · Charles Windolf · Thomas Serre
- 2016 Poster: How Deep is the Feature Analysis underlying Rapid Visual Categorization?
  Sven Eberhardt · Jonah G Cader · Thomas Serre
- 2013 Poster: Neural representation of action sequences: how far can a simple snippet-matching model take us?
  Cheston Tan · Jedediah M Singer · Thomas Serre · David Sheinberg · Tomaso Poggio