Recent advances in eXplainable Artificial Intelligence have enabled Artificial Intelligence (AI) systems to describe their thought process to human users. Given the high performance of AI on i.i.d. test sets, it is also interesting to study whether such AIs can work alongside humans and improve the accuracy of user decisions. We conduct a user study on 320 lay and 11 expert users to understand the effectiveness of state-of-the-art attribution methods in assisting humans in ImageNet classification, Stanford Dogs fine-grained classification, and these same two tasks when the input images contain adversarial perturbations. We found that, overall, feature attribution is surprisingly not more effective than showing humans nearest training-set examples. On the hard task of fine-grained dog classification, presenting attribution maps to humans does not help, but instead hurts the performance of human-AI teams compared to AI alone. Our findings encourage the community to rigorously test their methods on downstream human-in-the-loop applications and to rethink the existing evaluation metrics.
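To make the two explanation styles compared above concrete, the sketch below contrasts a gradient-based attribution map with a nearest-training-example explanation. This is a minimal illustration, not the paper's exact pipeline: the model choice, preprocessing, and the toy stand-in tensors for the image and training set are assumptions for demonstration only.

```python
# Minimal sketch (illustrative assumptions, not the study's exact setup) of the two
# explanation styles discussed in the abstract: a gradient-based attribution map
# versus retrieving the nearest training-set example in feature space.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

# --- Attribution map: gradient of the top-class score w.r.t. input pixels ---
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a preprocessed query image
logits = model(image)
top_class = logits.argmax(dim=1)
logits[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values             # (1, 224, 224) heatmap over pixels

# --- Nearest training-set example: cosine similarity in feature space ---
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])  # drop classifier head
with torch.no_grad():
    query_feat = feature_extractor(image.detach()).flatten(1)           # (1, 2048)
    train_images = torch.rand(16, 3, 224, 224)                           # toy stand-in for a training set
    train_feats = feature_extractor(train_images).flatten(1)             # (16, 2048)
sims = torch.nn.functional.cosine_similarity(query_feat, train_feats)
nearest_idx = sims.argmax().item()   # index of the most similar training image to show the user
```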
Author Information
Giang Nguyen (KAIST, South Korea)
Anh Nguyen (Auburn University)
More from the Same Authors
- 2022 Poster: Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
  Mohammad Reza Taesiri · Giang Nguyen · Anh Nguyen
- 2021 Poster: The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
  Giang Nguyen · Daeyoung Kim · Anh Nguyen
- 2017: Invited Talk 4
  Anh Nguyen
- 2016 Demonstration: Adventures with Deep Generator Networks
  Jason Yosinski · Anh Nguyen · Jeff Clune · Douglas K Bemis
- 2016 Poster: Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
  Anh Nguyen · Alexey Dosovitskiy · Jason Yosinski · Thomas Brox · Jeff Clune