Workshop
eXplainable AI approaches for debugging and diagnosis
Roberto Capobianco · Biagio La Rosa · Leilani Gilpin · Wen Sun · Alice Xiang · Alexander Feldman

Tue Dec 14 05:00 AM -- 02:30 PM (PST)
Event URL: https://xai4debugging.github.io/

Recently, artificial intelligence (AI) has seen an explosion of deep learning (DL) models, which are able to reach super-human performance in several tasks. These improvements, however, come at a cost: DL models are "black boxes", where one feeds an input and obtains an output without understanding the motivations behind that prediction or decision. The eXplainable AI (XAI) field tries to address such problems by proposing methods that explain the behavior of these networks.
In this workshop, we narrow the XAI focus to the specific case in which developers or researchers need to debug their models and diagnose system behaviors. This type of user typically has substantial knowledge about the models themselves but needs to validate, debug, and improve them.

This is an important topic for several reasons. For example, domains like healthcare and justice require that experts be able to validate DL models before deployment. Despite this, the development of novel deep learning models is dominated by trial-and-error phases guided by aggregated metrics and old benchmarks that tell us very little about the skills and utility of these models. Moreover, the debugging phase remains a nightmare for practitioners.

Another community working on tracking and debugging machine learning models is the visual analytics community, which proposes systems that help users understand and interact with machine learning models. In recent years, methodologies that explain DL models have become central to these systems. As a result, the interaction between the XAI and visual analytics communities has become increasingly important.

The workshop aims to advance the discourse by collecting novel methods and discussing challenges, issues, and goals around the use of XAI approaches to debug and improve current deep learning models. To achieve this goal, the workshop brings together researchers and practitioners from both fields, strengthening their collaboration.

Join our Slack channel for live and offline Q&A with authors and presenters!

Author Information

Roberto Capobianco (Sapienza University of Rome & Sony AI)
Biagio La Rosa (Sapienza University of Rome)
Leilani Gilpin (UC Santa Cruz)

I'm an Assistant Professor in the Department of Computer Science and Engineering at UC Santa Cruz. My research focuses on the design and analysis of methods for autonomous systems to explain themselves. Before returning to academia, I worked at Sony AI on the GT Sophy Project. I received a Ph.D. in EECS from MIT, an M.S. in Computational and Mathematical Engineering from Stanford University, and a B.S. in Computer Science with Highest Honors, a B.S. in Mathematics with Honors, and a Music Minor from UC San Diego.

Wen Sun (Cornell University)
Alice Xiang (Sony AI)
Alexander Feldman (Xerox PARC)
