There is growing frustration among researchers and developers in Explainable AI (XAI) over the lack of consensus on what is meant by 'explainability'. Do we need one definition of explainability to rule them all? In this paper, we argue why a singular definition of XAI is neither feasible nor desirable at this stage of XAI's development. We view XAI through the lens of the Social Construction of Technology (SCOT) to explicate how diverse stakeholders (relevant social groups) hold different interpretations (interpretive flexibility) that shape the meaning of XAI. Forcing standardization (closure) on these pluralistic interpretations too early can stifle innovation and lead to premature conclusions. We share how we can leverage this pluralism to make progress in XAI without having to wait for a definitional consensus.
Author Information
Upol Ehsan (Georgia Tech)
Upol Ehsan cares about people first, technology second. He is a doctoral candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining his expertise in AI and background in Philosophy, his work in Explainable AI (XAI) aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. Putting the human first and focusing on how our values shape the use and abuse of technology, his work has coined the term Human-centered Explainable AI (a sub-field of XAI) and charted its visions. Actively publishing in top peer-reviewed venues like CHI, his work has received multiple awards and been covered in major media outlets. Bridging industry and academia, he serves on multiple program committees in HCI and AI conferences (e.g., DIS, IUI, NeurIPS) and actively connects these communities (e.g., the widely attended HCXAI workshop at CHI). By promoting equity and ethics in AI, he wants to ensure stakeholders who aren't at the table do not end up on the menu. Outside research, he is also an advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor.
Mark Riedl (Georgia Institute of Technology)
More from the Same Authors
- 2020: Weird AI Yankovic: Generating Parody Lyrics (Mark Riedl)
- 2021: Modeling Worlds in Text (Prithviraj Ammanabrolu, Mark Riedl)
- 2022: Q & A (Cheng-Zhi Anna Huang, Negar Rostamzadeh, Mark Riedl)
- 2022 Tutorial: Creative Culture and Machine Learning (Negar Rostamzadeh, Cheng-Zhi Anna Huang, Mark Riedl)
- 2022: Tutorial part 1 (Negar Rostamzadeh, Mark Riedl, Cheng-Zhi Anna Huang)
- 2022 Poster: Inherently Explainable Reinforcement Learning in Natural Language (Xiangyu Peng, Mark Riedl, Prithviraj Ammanabrolu)
- 2021: Computers, Creativity, and Lovelace (Mark Riedl)
- 2021: XAI: Explainability Pitfalls: Beyond Dark Patterns in Explainable AI (Mark Riedl, Upol Ehsan)
- 2021 Poster: Learning Knowledge Graph-based World Models of Textual Environments (Prithviraj Ammanabrolu, Mark Riedl)
- 2020 Workshop: Wordplay: When Language Meets Games (Prithviraj Ammanabrolu, Matthew Hausknecht, Xingdi Yuan, Marc-Alexandre Côté, Adam Trischler, Kory Mathewson, John Urbanek, Jason Weston, Mark Riedl)