Workshop
eXplainable AI approaches for debugging and diagnosis
Roberto Capobianco · Biagio La Rosa · Leilani Gilpin · Wen Sun · Alice Xiang · Alexander Feldman

Tue Dec 14 05:00 AM -- 02:30 PM (PST)

Recently, artificial intelligence (AI) has seen an explosion of deep learning (DL) models, which are able to reach super-human performance in several tasks. These improvements, however, come at a cost: DL models are "black boxes", where one feeds an input and obtains an output without understanding the motivations behind that prediction or decision. The eXplainable AI (XAI) field tries to address such problems by proposing methods that explain the behavior of these networks.
In this workshop, we narrow the XAI focus to the specific case in which developers or researchers need to debug their models and diagnose system behaviors. This type of user typically has substantial knowledge about the models themselves but needs to validate, debug, and improve them.

This is an important topic for several reasons. For example, domains like healthcare and justice require that experts are able to validate DL models before deployment. Despite this, the development of novel deep learning models is dominated by trial-and-error phases guided by aggregated metrics and aging benchmarks, which reveal very little about the actual skills and utility of these models. Moreover, debugging remains a painful, largely manual process for practitioners.

Another community working on tracking and debugging machine learning models is visual analytics, which proposes systems that help users understand and interact with machine learning models. In recent years, methods that explain DL models have become central to these systems, and as a result the interaction between the XAI and visual analytics communities has grown increasingly important.

The workshop aims to advance the discourse by collecting novel methods and discussing challenges, issues, and goals around the use of XAI approaches to debug and improve current deep learning models. To achieve this goal, the workshop brings together researchers and practitioners from both fields, strengthening their collaboration.

Join our Slack channel for live and offline Q&A with authors and presenters!

#### Author Information

##### Leilani Gilpin (UC Santa Cruz)

I'm a PhD student in the Department of Electrical Engineering and Computer Science (EECS-Course 6) and the Artificial Intelligence Lab (CSAIL) at MIT, working under the supervision of Professor Gerald Jay Sussman. My research is in the area of Artificial Intelligence, where I am working to help autonomous vehicles (and other autonomous machines) explain themselves. Before returning to academia, I worked at Palo Alto Research Center as a Member of Technical Staff, where I worked on anomaly detection in healthcare. I received an M.S. in Computational and Mathematical Engineering from Stanford University, and a B.S. in Computer Science with Highest Honors, a B.S. in Mathematics with Honors, and a Music Minor from UC San Diego.