The use of machine learning has become pervasive in our society, from specialized scientific data analysis to industry intelligence and practical applications with a direct impact on the public domain. This impact raises a range of social issues, including privacy, ethics, liability and accountability. This workshop aims to discuss the use of machine learning in safety-critical environments, with special emphasis on three main application domains:
- Healthcare
- Autonomous systems
- Complaints and liability in data-driven industries
We aim to address questions such as: How do we make our models more comprehensible and transparent? Should we always trust our decision-making processes? How do we involve field experts in making machine learning pipelines more practically interpretable from the viewpoint of the application domain?
Fri 8:50 a.m. - 9:10 a.m. | Opening remarks (Talk)
Opening remarks and introduction to the Workshop on Transparent and Interpretable Machine Learning in Safety Critical Environments.
Alessandra Tosi · Alfredo Vellido · Mauricio Álvarez
Fri 9:10 a.m. - 9:45 a.m. | Invited talk: Is interpretability and explainability enough for safe and reliable decision making? (Talk)
Suchi Saria
Fri 9:45 a.m. - 10:20 a.m. | Invited talk: The Role of Explanation in Holding AIs Accountable (Talk)
As AIs are used in more common and consequential situations, it is important that we find ways to take advantage of our computational capabilities while also holding the creators of these systems accountable. In this talk, I'll start out by sharing some of the challenges associated with deploying AIs in healthcare, and how interpretability or explanation is an essential tool in this domain. Then I'll speak more broadly about the role of explanation in holding AIs accountable under the law (especially in the context of current regulation around AIs). In doing so, I hope to spark discussions about how we, as a machine learning community, believe that our work should be regulated.
Finale Doshi-Velez
Fri 10:20 a.m. - 10:30 a.m. | Contributed talk: Beyond Sparsity: Tree-based Regularization of Deep Models for Interpretability (Talk)
The lack of interpretability remains a key barrier to the adoption of deep models in many healthcare applications. In this work, we explicitly regularize deep models so human users might step through the process behind their predictions in little time. Specifically, we train deep time-series models so their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. On two clinical decision-making tasks, we demonstrate that this new tree-based regularization is distinct from simpler L2 or L1 penalties, resulting in more interpretable models without sacrificing predictive power.
Mike Wu · Sonali Parbhoo · Finale Doshi-Velez
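To make the regularized quantity concrete, the following is a minimal sketch in Python (scikit-learn) of what is being controlled: how faithfully a small decision tree mimics a trained model and how long its decision paths are. The model, dataset, and tree size are illustrative stand-ins, and the differentiable surrogate that the full method would need at training time is omitted.

    # Illustrative sketch only: measure how simple a decision tree can be while
    # still mimicking a trained model. The talk proposes regularizing this kind
    # of tree complexity during training; here we only evaluate it after the fact.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    deep_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                               random_state=0).fit(X, y)

    # Fit a small tree to the model's predictions, not to the true labels.
    y_hat = deep_model.predict(X)
    tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0).fit(X, y_hat)

    fidelity = tree.score(X, y_hat)  # how closely the tree tracks the model
    # Complexity: average number of decision nodes a sample passes through.
    avg_path_length = np.asarray(tree.decision_path(X).sum(axis=1)).mean()
    print(f"fidelity: {fidelity:.3f}, average path length: {avg_path_length:.2f}")

A model trained with the proposed penalty would keep the fidelity high while keeping the average path length small, so that a human can step through the tree summary quickly.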
Fri 10:30 a.m. - 11:00 a.m. | Coffee break 1
Fri 11:00 a.m. - 11:20 a.m. | Invited talk: Challenges for Transparency (Talk)
Adrian Weller
Fri 11:20 a.m. - 11:30 a.m. | Contributed talk: Safe Policy Search with Gaussian Process Models (Talk)
We propose a method to optimise the parameters of a policy which will be used to safely perform a given task in a data-efficient manner. We train a Gaussian process model to capture the system dynamics, based on the PILCO framework. Our model has useful analytic properties, which allow closed-form computation of error gradients and estimation of the probability of violating given state-space constraints. During training, as well as operation, only policies that are deemed safe are implemented on the real system, minimising the risk of failure.
Kyriakos Polymenakos · Stephen J Roberts
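As a hedged illustration of one ingredient, the sketch below (Python, scikit-learn and SciPy) fits a Gaussian process to toy transition data and uses its Gaussian predictive distribution to estimate the probability that the next state violates a constraint. The PILCO-style analytic gradient propagation and policy optimisation are not implemented, and all quantities (dynamics, limit, threshold) are invented for the example.

    # Sketch: GP dynamics model + probability of violating a state constraint.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    # Toy transitions: next_state = f(state, action) + noise.
    states = rng.uniform(-1.0, 1.0, size=(200, 1))
    actions = rng.uniform(-1.0, 1.0, size=(200, 1))
    next_states = 0.9 * states + 0.3 * actions + 0.05 * rng.standard_normal((200, 1))

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(np.hstack([states, actions]), next_states.ravel())

    def violation_probability(state, action, limit=0.8):
        """P(next_state > limit) under the GP's Gaussian predictive distribution."""
        mean, std = gp.predict(np.array([[state, action]]), return_std=True)
        return float(norm.sf(limit, loc=mean[0], scale=std[0]))

    # A candidate policy would only be run on the real system if such
    # probabilities stay below a chosen safety threshold.
    print(violation_probability(state=0.7, action=0.5))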
Fri 11:30 a.m. - 12:00 p.m. | Poster spotlights (Spotlight)
[1] "Network Analysis for Explanation"
[2] "Using prototypes to improve convolutional networks interpretability"
[3] "Accelerated Primal-Dual Policy Optimization for Safe Reinforcement Learning"
[4] "Deep Reinforcement Learning for Sepsis Treatment"
[5] "Analyzing Feature Relevance for Linear Reject Option SVM using Relevance Intervals"
[6] "The Neural LASSO: Local Linear Sparsity for Interpretable Explanations"
[7] "Detecting Bias in Black-Box Models Using Transparent Model Distillation"
[8] "Data masking for privacy-sensitive learning"
[9] "CLEAR-DR: Interpretable Computer Aided Diagnosis of Diabetic Retinopathy"
[10] "Manipulating and Measuring Model Interpretability"
Hiroshi Kuwajima · Masayuki Tanaka · Qingkai Liang · Matthieu Komorowski · Fanyu Que · Thalita F Drumond · Aniruddh Raghu · Leo Anthony Celi · Christina Göpfert · Andrew Ross · Sarah Tan · Rich Caruana · Yin Lou · Devinder Kumar · Graham Taylor · Forough Poursabzi-Sangdeh · Jennifer Wortman Vaughan · Hanna Wallach
Fri 12:00 p.m. - 12:30 p.m. | Poster session part I (Poster session)
Fri 12:30 p.m. - 2:00 p.m. | Lunch break
Fri 2:00 p.m. - 2:35 p.m. | Invited talk: When the classifier doesn't know: optimum reject options for classification (Talk)
Barbara Hammer
Fri 2:35 p.m. - 2:45 p.m. | Contributed talk: Predict Responsibly: Increasing Fairness by Learning To Defer (Talk)
Machine learning systems, which are often used for high-stakes decisions, suffer from two mutually reinforcing problems: unfairness and opaqueness. Many popular models, though generally accurate, cannot express uncertainty about their predictions. Even in regimes where a model is inaccurate, users may trust the model's predictions too fully and allow its biases to reinforce their own. In this work, we explore models that learn to defer. In our scheme, a model learns to classify accurately and fairly, but also to defer if necessary, passing judgment to a downstream decision-maker such as a human user. We further propose a learning algorithm which accounts for potential biases held by decision-makers later in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. Even when operated by biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline.
David Madras · Richard Zemel · Toni Pitassi
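The sketch below (Python, scikit-learn) illustrates the overall deferral pipeline: a model predicts when it is confident and otherwise passes the case to a downstream decision-maker, here played by a second classifier. It uses a fixed confidence threshold rather than the learned, fairness-aware deferral rule proposed in the talk, and the dataset and threshold are illustrative.

    # Sketch of a prediction pipeline with deferral (fixed threshold, no
    # fairness objective); a stand-in for the learned defer rule in the talk.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=4000, n_features=20, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Stand-in for the downstream decision-maker (e.g. a human expert).
    decision_maker = LogisticRegression(max_iter=1000, C=0.01).fit(X_tr, y_tr)

    confidence = model.predict_proba(X_te).max(axis=1)
    defer = confidence < 0.75  # low-confidence cases are passed downstream
    preds = np.where(defer, decision_maker.predict(X_te), model.predict(X_te))

    print(f"deferred on {defer.mean():.1%} of cases, "
          f"pipeline accuracy {np.mean(preds == y_te):.3f}")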
Fri 2:45 p.m. - 3:00 p.m. | Contributed talk: Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks (Talk)
Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why transcription factors bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models using three visualization methods: saliency maps, temporal output scores, and class optimization. In addition to providing insights into how each model makes its predictions, the visualization techniques indicate that the CNN-RNN makes predictions by modeling both motifs and the dependencies among them.
Jack Lanchantin · Ritambhara Singh · Beilun Wang
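As a hedged illustration of one of the three visualization strategies, the PyTorch sketch below computes a saliency map: the gradient of a class score with respect to a one-hot encoded DNA sequence. The tiny untrained CNN and the sequence are placeholders, not the TFBS models analysed in the paper.

    # Sketch: saliency map for a sequence classifier via input gradients.
    import torch
    import torch.nn as nn

    ALPHABET = "ACGT"
    seq = "ACGTACGGTTCAAGGTACGT"

    # One-hot encode the sequence: shape (1, 4, sequence_length).
    x = torch.zeros(1, 4, len(seq))
    for i, base in enumerate(seq):
        x[0, ALPHABET.index(base), i] = 1.0
    x.requires_grad_(True)

    model = nn.Sequential(
        nn.Conv1d(4, 16, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveMaxPool1d(1),
        nn.Flatten(),
        nn.Linear(16, 2),  # binding vs. non-binding
    )

    score = model(x)[0, 1]  # score of the "binding" class
    score.backward()

    # Per-position saliency: gradient magnitude at the observed base.
    saliency = (x.grad * x.detach()).abs().sum(dim=1).squeeze(0)
    for base, s in zip(seq, saliency.tolist()):
        print(f"{base}: {s:.4f}")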
Fri 2:55 p.m. - 3:00 p.m. | Best paper prize announcement (Announcement)
Fri 3:00 p.m. - 3:30 p.m. | Coffee break and Poster session part II
Fri 3:30 p.m. - 4:05 p.m. | Invited talk: Robot Transparency as Optimal Control (Talk)
In this talk, we will formalize transparency as acting in a dynamical system or MDP in which we augment the physical state with the human's belief about the robot. We will characterize the dynamics model in this MDP, and show that approximate solutions lead to cars that drive in a way that is easier to anticipate, robots that come up with instructive demonstrations of their task knowledge, manipulator arms that clarify their intent, and navigation robots that clarify their future task plans. Lastly, we will briefly explore robots that express more interesting properties, such as their level of confidence in their task or the weight of an object they are carrying.
Anca Dragan
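A minimal sketch of the state augmentation described above, under an assumed noisily-rational observer model: alongside the physical state, the MDP tracks the human's belief over the robot's goal, and that belief updates by Bayes' rule after each observed robot action. The goals, actions, and likelihoods below are invented for illustration.

    # Sketch: the human-belief component of the augmented MDP state.
    import numpy as np

    goals = ["goal_A", "goal_B"]

    def action_likelihood(action, goal):
        """P(observed action | robot pursuing goal): actions that make progress
        toward the goal are assumed more probable (illustrative numbers)."""
        progress = {"goal_A": {"left": 0.8, "right": 0.2},
                    "goal_B": {"left": 0.3, "right": 0.7}}
        return progress[goal][action]

    def belief_update(belief, action):
        """One step of the belief dynamics: Bayes' rule over goals."""
        posterior = np.array([belief[i] * action_likelihood(action, g)
                              for i, g in enumerate(goals)])
        return posterior / posterior.sum()

    belief = np.array([0.5, 0.5])  # human initially unsure of the robot's goal
    for action in ["left", "left", "right"]:
        belief = belief_update(belief, action)
        print(dict(zip(goals, belief.round(3))))

Casting transparency as optimal control in this augmented MDP then amounts to choosing actions that, among other objectives, steer the belief component toward the robot's true goal or plan.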
Fri 4:05 p.m. - 4:40 p.m. | Invited talk 6 (Talk)
Dario Amodei
Fri 4:40 p.m. - 5:20 p.m. | Panel discussion (Discussion panel)
Fri 5:20 p.m. - 5:30 p.m. | Final remarks (Talk)
Alessandra Tosi · Alfredo Vellido · Mauricio Álvarez
Fri 5:30 p.m. | End of workshop
Author Information
Alessandra Tosi (Mind Foundry)
Alessandra Tosi is a machine learning research scientist at Mind Foundry, an Oxford University spin-out company. Her research interests lie in the area of probabilistic models, with a particular focus on Gaussian-process-based techniques and latent variable models. She is interested in the underlying geometry of probabilistic models, with special attention to the behaviour of metrics in Probabilistic Geometries. In her work, great attention is paid to data visualization and the interpretability of these models.
Alfredo Vellido (Universitat Politècnica de Catalunya, UPC BarcelonaTech)
Mauricio Álvarez (University of Sheffield)
More from the Same Authors
- 2022: Taking federated analytics from theory to practice (Graham Cormode · Alessandra Tosi)
- 2022: Lessons from the deployment of data science during the COVID-19 response in Africa (Morine Amutorine · Alessandra Tosi)
- 2022 Workshop: Challenges in Deploying and Monitoring Machine Learning Systems (Alessandra Tosi · Andrei Paleyes · Christian Cabrera · Fariba Yousefi · S Roberts)
- 2022: Opening Remarks (Alessandra Tosi · Andrei Paleyes)
- 2021 Poster: Modular Gaussian Processes for Transfer Learning (Pablo Moreno-Muñoz · Antonio Artes · Mauricio Álvarez)
- 2021 Poster: Learning Nonparametric Volterra Kernels with Gaussian Processes (Magnus Ross · Michael T Smith · Mauricio Álvarez)
- 2021 Poster: Compositional Modeling of Nonlinear Dynamical Systems with ODE-based Random Features (Thomas McDonald · Mauricio Álvarez)
- 2020 Poster: Multi-task Causal Learning with Gaussian Processes (Virginia Aglietti · Theodoros Damoulas · Mauricio Álvarez · Javier González)
- 2019 Poster: Multi-task Learning for Aggregated Data using Gaussian Processes (Fariba Yousefi · Michael T Smith · Mauricio Álvarez)
- 2018 Poster: Heterogeneous Multi-output Gaussian Process Prediction (Pablo Moreno-Muñoz · Antonio Artés · Mauricio Álvarez)
- 2018 Spotlight: Heterogeneous Multi-output Gaussian Process Prediction (Pablo Moreno-Muñoz · Antonio Artés · Mauricio Álvarez)
- 2017: Final remarks (Alessandra Tosi · Alfredo Vellido · Mauricio Álvarez)
- 2017: Opening remarks (Alessandra Tosi · Alfredo Vellido · Mauricio Álvarez)
- 2017 Poster: Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes (Zhenwen Dai · Mauricio Álvarez · Neil Lawrence)