Fri Dec 8th 08:00 AM -- 06:30 PM @ 204
Transparent and interpretable Machine Learning in Safety Critical Environments
Alessandra Tosi · Alfredo Vellido · Mauricio A. Álvarez


The use of machine learning has become pervasive in our society, from specialized scientific data analysis to industry intelligence and practical applications with a direct impact on the public domain. This impact raises a range of social issues, including privacy, ethics, liability, and accountability. This workshop aims to discuss the use of machine learning in safety-critical environments, with special emphasis on three main application domains:
- Healthcare
- Autonomous systems
- Complaints and liability in data-driven industries
We aim to answer questions such as: How do we make our models more comprehensible and transparent? Should we always trust our decision-making processes? How do we involve field experts in making machine learning pipelines more practically interpretable from the viewpoint of the application domain?

08:50 AM Opening remarks (Talk)
Alessandra Tosi, Alfredo Vellido, Mauricio A. Álvarez
09:10 AM Invited talk: Is interpretability and explainability enough for safe and reliable decision making? (Talk)
Suchi Saria
09:45 AM Invited talk: The Role of Explanation in Holding AIs Accountable (Talk)
Finale Doshi-Velez
10:20 AM Contributed talk: Beyond Sparsity: Tree-based Regularization of Deep Models for Interpretability (Talk)
Mike Wu, Sonali Parbhoo, Finale Doshi-Velez
10:30 AM Coffee break 1 (Break)
11:00 AM Invited talk: Challenges for Transparency (Talk)
Adrian Weller
11:20 AM Contributed talk: Safe Policy Search with Gaussian Process Models (Talk)
Kyriakos Polymenakos, Stephen J Roberts
11:30 AM Poster spotlights (Spotlight)
Hiroshi Kuwajima, Masayuki Tanaka, Qingkai Liang, Matthieu Komorowski, Fanyu Que, Thalita F Drumond, Aniruddh Raghu, Leo Anthony Celi, Christina Göpfert, Andrew Ross, Sarah Tan, Rich Caruana, Yin Lou, Devinder Kumar, Graham W Taylor, Forough Poursabzi-Sangdeh, Jenn Wortman Vaughan, Hanna Wallach
12:00 PM Poster session part I (Poster session)
12:30 PM Lunch break (Break)
02:00 PM Invited talk: When the classifier doesn't know: optimum reject options for classification. (Talk)
Barbara Hammer
02:35 PM Contributed talk: Predict Responsibly: Increasing Fairness by Learning to Defer (Talk)
David Madras, Richard Zemel, Toni Pitassi
02:45 PM Contributed talk: Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks (Talk)
Jack Lanchantin, Ritambhara Singh, Beilun Wang
02:55 PM Best paper prize announcement (Announcement)
03:00 PM Coffee break and Poster session part II (Break)
03:30 PM Invited talk: Robot Transparency as Optimal Control (Talk)
Anca Dragan
04:05 PM Invited talk 6 (Talk)
Dario Amodei
04:40 PM Panel discussion (Discussion panel)
05:20 PM Final remarks (Talk)
Alessandra Tosi, Alfredo Vellido, Mauricio A. Álvarez
05:30 PM End of workshop (Break)