

Workshop
Tue Dec 14 05:00 AM -- 02:30 PM (PST)
eXplainable AI approaches for debugging and diagnosis
Roberto Capobianco · Biagio La Rosa · Leilani Gilpin · Wen Sun · Alice Xiang · Alexander Feldman





Workshop Home Page

Recently, artificial intelligence (AI) has seen an explosion of deep learning (DL) models, which can reach super-human performance on several tasks. These improvements, however, come at a cost: DL models are "black boxes", where one feeds in an input and obtains an output without understanding the motivation behind that prediction or decision. The eXplainable AI (XAI) field addresses this problem by proposing methods that explain the behavior of these networks.
In this workshop, we narrow the XAI focus to the specific case in which developers or researchers need to debug their models and diagnose system behaviors. This type of user typically has substantial knowledge about the models themselves but needs to validate, debug, and improve them.

This is an important topic for several reasons. Domains such as healthcare and justice, for example, require that experts be able to validate DL models before deployment. Yet the development of novel deep learning models is still dominated by trial-and-error phases guided by aggregated metrics and aging benchmarks that tell us very little about the actual skills and utility of these models. Moreover, the debugging phase remains a nightmare for practitioners.

Another community working on tracking and debugging machine learning models is visual analytics, which proposes systems that help users understand and interact with machine learning models. In recent years, methods that explain DL models have become central to these systems, and the interaction between the XAI and visual analytics communities has become increasingly important.

The workshop aims to advance this discourse by collecting novel methods and discussing challenges, issues, and goals around the use of XAI approaches for debugging and improving current deep learning models. To this end, the workshop brings together researchers and practitioners from both fields, strengthening their collaboration.

Join our Slack channel for Live and Offline Q/A with authors and presenters!

Welcome (Opening)
Speaker Introduction (Introduction)
[IT1] Visual Analytics for Explainable Machine Learning (Invited Talk)
Q/A Session (Live Q/A)
Speaker Introduction (Introduction)
[O1] Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation (Oral)
Q/A Session (Live Q/A)
Break (10min) (Break)
Speaker Introduction (Introduction)
[IT2] Explainability and robustness: Towards trustworthy AI (Invited Talk)
Q/A Session (Live Q/A)
Speaker Introduction (Introduction)
[O2] Not too close and not too far: enforcing monotonicity requires penalizing the right points (Oral)
Q/A Session (Live Q/A)
Break (10min) (Break)
Speaker Introduction (Introduction)
[G] Empowering Human Translators via Interpretable Interactive Neural Machine Translation (A glimpse of the future Track)
Q/A Session (Live Q/A)
Speaker Introduction (Introduction)
[O3] Reinforcement Explanation Learning (Oral)
Q/A Session (Live Q/A)
Spotlight Introduction (Introduction)
[S1] Interpreting BERT architecture predictions for peptide presentation by MHC class I proteins (Spotlight)
[S2] XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection (Spotlight)
[S3] Interpretability in Gated Modular Neural Networks (Spotlight)
[S4] A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines (Spotlight)
[S5] Debugging the Internals of Convolutional Networks (Spotlight)
[S6] Defuse: Training More Robust Models through Creation and Correction of Novel Model Errors (Spotlight)
[S7] DeDUCE: Generating Counterfactual Explanations At Scale (Spotlight)
Break (10min) (Break)
Speaker Introduction (Introduction)
[IT3] Towards Reliable and Robust Model Explanations (Invited Talk)
Q/A Session (Live Q/A)
Speaker Introduction (Introduction)
[O4] Are All Neurons Created Equal? Interpreting and Controlling BERT through Individual Neurons (Oral)
Q/A Session (Live Q/A)
Break (12min) (Break)
Speaker Introduction (Introduction)
[IT4] Detecting model reliance on spurious signals is challenging for post hoc explanation approaches (Invited Talk)
Q/A Session (Live Q/A)
Speaker Introduction (Introduction)
[O5] Do Feature Attribution Methods Correctly Attribute Features? (Oral)
Q/A Session (Live Q/A)
Break (15min) (Break)
Speaker Introduction (Introduction)
[O6] Explaining Information Flow Inside Vision Transformers Using Markov Chain (Oral)
Q/A Session (Live Q/A)
Speaker Introduction (Introduction)
[IT5] Natural language descriptions of deep features (Invited Talk)
Q/A Session (Live Q/A)
Spotlight Introduction (Introduction)
[S8] Fast TreeSHAP: Accelerating SHAP Value Computation for Trees (Spotlight)
[S9] Simulated User Studies for Explanation Evaluation (Spotlight)
[S10] Exploring XAI for the Arts: Explaining Latent Space in Generative Music (Spotlight)
[S11] Interpreting Language Models Through Knowledge Graph Extraction (Spotlight)
[S12] Efficient Decompositional Rule Extraction for Deep Neural Networks (Spotlight)
[S13] Revisiting Sanity Checks for Saliency Maps (Spotlight)
[S14] Towards Better Visual Explanations for Deep Image Classifiers (Spotlight)
Closing Remarks (Closing)
Poster Session (Link)
Fast TreeSHAP: Accelerating SHAP Value Computation for Trees (Poster)
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines (Poster)
Interpreting Language Models Through Knowledge Graph Extraction (Poster)
Simulated User Studies for Explanation Evaluation (Poster)
Efficient Decompositional Rule Extraction for Deep Neural Networks (Poster)
Revisiting Sanity Checks for Saliency Maps (Poster)
DeDUCE: Generating Counterfactual Explanations At Scale (Poster)
Debugging the Internals of Convolutional Networks (Poster)
Defuse: Training More Robust Models through Creation and Correction of Novel Model Errors (Poster)
Our Slack Channel for Q/A, socializing, and networking (Link)
XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection (Poster)
Towards Better Visual Explanations for Deep Image Classifiers (Poster)
Interpretability in Gated Modular Neural Networks (Poster)
Exploring XAI for the Arts: Explaining Latent Space in Generative Music (Poster)
Interpreting BERT architecture predictions for peptide presentation by MHC class I proteins (Poster)