Workshop
Thu Dec 08 11:00 PM -- 09:30 AM (PST) @ AC Barcelona, Sagrada Familia
Interpretable Machine Learning for Complex Systems
Andrew Wilson · Been Kim · William Herlands





Workshop Home Page

Complex machine learning models, such as deep neural networks, have recently achieved great predictive successes for visual object recognition, speech perception, language modelling, and information retrieval. These predictive successes are enabled by automatically learning expressive features from the data. Typically, these learned features are a priori unknown, difficult to engineer by hand, and hard to interpret. This workshop is about interpreting the structure and predictions of these complex models.

Interpreting the learned features and the outputs of complex systems allows us to more fundamentally understand our data and predictions, and to build more effective models. For example, we may build a complex model to predict long-range crime activity. By interpreting the learned structure of the model, we can gain new insights into the processes driving crime events, enabling us to develop more effective public policy. Moreover, if we learn, for example, that the model is making good predictions by discovering how the geometry of clusters of crime events affects future activity, we can use this knowledge to design even more successful predictive models.

This one-day workshop focuses on interpretable methods for machine learning, with an emphasis on the ability to learn structure that provides new fundamental insights into the data, in addition to accurate predictions. We will consider a wide range of topics, including deep learning, kernel methods, tensor methods, generalized additive models, rule-based models, symbolic regression, visual analytics, and causality. A poster session, coffee breaks, and a guided panel discussion will encourage interaction between attendees. We wish to carefully review and enumerate modern approaches to the challenges of interpretability, share insights into the underlying properties of popular machine learning algorithms, and discuss future directions.
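As a toy illustration (not part of the workshop program) of the kind of interpretable structure such methods expose, the short Python sketch below fits a shallow rule-based classifier and prints the learned decision rules; the dataset and depth limit are arbitrary choices made for readability.

# A minimal sketch: a shallow rule-based model whose learned structure
# can be read directly. Dataset and max_depth are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Keeping the tree shallow keeps the learned rules human-readable.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the decision rules, i.e. the model's learned structure.
print(export_text(model, feature_names=list(data.feature_names)))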

Opening Remarks
Honglak Lee (Invited Talk)
Why Interpretability: A Taxonomy of Interpretability and Implications for Principled Evaluation (Finale Doshi-Velez) (Invited Talk)
Best paper award talks (Contributed Talk)
Intelligible Machine Learning for HealthCare (Rich Caruana) (Invited Talk)
The Power of Monotonicity for Practical Machine Learning (Maya Gupta) (Invited Talk)
Finding interpretable sparse structure in fMRI data with dependent relevance determination priors (Jonathan Pillow) (Invited Talk)
Poster session (Posters)
Better Machine Learning Through Data (Saleema Amershi) (Invited Talk)
Future Directions in Interpretable Machine Learning (Panel Discussion)