

Awarded Paper Presentation in Workshop: Progress and Challenges in Building Trustworthy Embodied AI

To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles

Yuan Shen · Shanduojiao Jiang · Yanlin Chen · Katherine Driggs-Campbell

Keywords: [ explanation AI ] [ advanced driver assistance system ] [ autonomous vehicle ] [ explanation necessity ]


Abstract:

Explainable AI, in the context of autonomous systems such as self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for an autonomous vehicle's actions has many benefits (e.g., increased trust and acceptance), but they place little emphasis on when an explanation is needed and how the content of the explanation changes with driving context. In this work, we investigate in which scenarios people need explanations and how the critical degree of explanation shifts with situations and driver types. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure its impact on their trust in self-driving cars in different contexts. Moreover, we present a self-driving explanation dataset with first-person explanations and associated measures of explanation necessity for 1,103 video clips, augmenting the Berkeley Deep Drive Attention dataset. Our research reveals that driver type and driving scenario dictate whether an explanation is necessary. In particular, people tend to agree on the necessity of explanations for near-crash events but hold different opinions on ordinary or anomalous driving situations.
