Contributed Talk
in
Workshop: Workshop on Ethical, Social and Governance Issues in AI
Explaining Explanations to Society
There is a disconnect between explanatory artificial intelligence (XAI) methods for deep neural networks and the types of explanations that are useful for, and demanded by, society (policy makers, government officials, etc.). The questions that artificial intelligence (AI) experts ask of opaque systems yield inside explanations, focused on debugging, reliability, and validation. These differ from the questions that society will ask of such systems in order to build trust and confidence in their decisions. Although explanatory AI systems can answer many of the questions that experts desire, they often do not explain why they made their decisions in a way that is both precise (true to the model) and understandable to humans. Such outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we explore the types of questions that explanatory deep neural network (DNN) systems can answer and discuss the challenges inherent in building explanatory systems that provide outside explanations of these systems for societal requirements and benefit.