Modern machine learning models are complicated. Most of them rely on convoluted latent representations of their input to issue a prediction. To achieve greater transparency than a black box that connects inputs to predictions, it is necessary to gain a deeper understanding of these latent representations. To that end, we propose SimplEx: a user-centred method that provides example-based explanations with reference to a freely selected set of examples, called the corpus. SimplEx uses the corpus to improve the user’s understanding of the latent space with post-hoc explanations answering two questions: (1) Which corpus examples explain the prediction issued for a given test example? (2) What features of these corpus examples are relevant for the model to relate them to the test example? SimplEx provides an answer by reconstructing the test latent representation as a mixture of corpus latent representations. Further, we propose a novel approach, the integrated Jacobian, that allows SimplEx to make explicit the contribution of each corpus feature in the mixture. Through experiments on tasks ranging from mortality prediction to image classification, we demonstrate that these decompositions are robust and accurate. With illustrative use cases in medicine, we show that SimplEx empowers the user by highlighting relevant patterns in the corpus that explain model representations. Moreover, we demonstrate how the freedom in choosing the corpus allows the user to obtain personalized explanations in terms of examples that are meaningful for them.
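The abstract describes two concrete computations: fitting mixture weights on the simplex so that corpus latent representations reconstruct the test latent representation, and attributing that mixture to corpus features via an integrated Jacobian. The minimal PyTorch sketch below illustrates both steps; it is not the authors' released implementation, and the helper names (fit_corpus_weights, integrated_jacobian_contributions, latent_fn) as well as the inner-product scalarisation of the latent output are assumptions made for this illustration.

import torch

def fit_corpus_weights(corpus_latents, test_latent, n_steps=1000, lr=0.1):
    # Fit non-negative weights that sum to one (a point on the simplex) so that
    # the test latent representation is approximated by a mixture of corpus latents.
    logits = torch.zeros(corpus_latents.shape[0], requires_grad=True)
    optimizer = torch.optim.Adam([logits], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        weights = torch.softmax(logits, dim=0)            # simplex constraint via softmax
        mixture = weights @ corpus_latents                # weighted mixture of corpus latents
        loss = torch.sum((mixture - test_latent) ** 2)    # reconstruction error in latent space
        loss.backward()
        optimizer.step()
    return torch.softmax(logits, dim=0).detach()

def integrated_jacobian_contributions(latent_fn, corpus_inputs, baseline, weights,
                                      test_latent, n_bins=50):
    # Approximate each corpus feature's contribution to the mixture by integrating the
    # Jacobian of the latent map along the straight path from the baseline to each corpus
    # example (Riemann sum); the latent output is scalarised here by projection onto the
    # test latent, one plausible choice for this sketch.
    contributions = []
    for x_c, w in zip(corpus_inputs, weights):
        shift = x_c - baseline
        grad_sum = torch.zeros_like(x_c)
        for t in torch.linspace(0.0, 1.0, n_bins):
            point = (baseline + t * shift).detach().requires_grad_(True)
            scalar = torch.dot(latent_fn(point), test_latent)
            scalar.backward()
            grad_sum += point.grad
        contributions.append(w * shift * grad_sum / n_bins)
    return torch.stack(contributions)                     # one attribution vector per corpus example

In practice, corpus_latents and test_latent would be obtained by passing the corpus and test inputs through the model's own encoder; the returned weights identify which corpus examples dominate the reconstruction, and the contribution vectors indicate which of their features the model uses to relate them to the test example.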
Author Information
Jonathan Crabbé (University of Cambridge)
Zhaozhi Qian (University of Cambridge)
Fergus Imrie (University of California, Los Angeles)
Mihaela van der Schaar (University of Cambridge)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Explaining Latent Representations with a Corpus of Examples
  Fri. Dec 10th 04:30 -- 06:00 PM
More from the Same Authors
- 2021 Spotlight: On Inductive Biases for Heterogeneous Treatment Effect Estimation
  Alicia Curth · Mihaela van der Schaar
- 2021: Really Doing Great at Estimating CATE? A Critical Look at ML Benchmarking Practices in Treatment Effect Estimation
  Alicia Curth · David Svensson · Jim Weatherall · Mihaela van der Schaar
- 2021: The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation
  Alex Chan · Ioana Bica · Alihan Hüyük · Daniel Jarrett · Mihaela van der Schaar
- 2022: D-CIPHER: Discovery of Closed-form Partial Differential Equations
  Krzysztof Kacprzyk · Zhaozhi Qian · Mihaela van der Schaar
- 2023 Poster: Can you rely on your model evaluation? Improving model evaluation with synthetic test data
  Nabeel Seedat · Boris van Breugel · Fergus Imrie · Mihaela van der Schaar
- 2023 Poster: D-CIPHER: Discovery of Closed-form Partial Differential Equations
  Krzysztof Kacprzyk · Zhaozhi Qian · Mihaela van der Schaar
- 2023 Poster: TRIAGE: Characterizing and auditing training data for improved regression
  Nabeel Seedat · Jonathan Crabbé · Zhaozhi Qian · Mihaela van der Schaar
- 2023 Poster: Joint Training of Deep Ensembles Fails Due to Learner Collusion
  Alan Jeffares · Tennison Liu · Jonathan Crabbé · Mihaela van der Schaar
- 2023 Poster: Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance
  Jonathan Crabbé · Mihaela van der Schaar
- 2023 Poster: Synthcity: a benchmark framework for diverse use cases of tabular synthetic data
  Zhaozhi Qian · Rob Davis · Mihaela van der Schaar
- 2023 Workshop: Synthetic Data Generation with Generative AI
  Sergul Aydore · Zhaozhi Qian · Mihaela van der Schaar
- 2022: Achievements and Challenges Part 2/2
  Zhaozhi Qian · Tucker Balch · Sergul Aydore
- 2022 Workshop: Synthetic Data for Empowering ML Research
  Mihaela van der Schaar · Zhaozhi Qian · Sergul Aydore · Dimitris Vlitas · Dino Oglic · Tucker Balch
- 2022 Poster: Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
  Jonathan Crabbé · Mihaela van der Schaar
- 2022 Poster: Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability
  Jonathan Crabbé · Alicia Curth · Ioana Bica · Mihaela van der Schaar
- 2022 Poster: Data-IQ: Characterizing subgroups with heterogeneous outcomes in tabular data
  Nabeel Seedat · Jonathan Crabbé · Ioana Bica · Mihaela van der Schaar
- 2022 Poster: Composite Feature Selection Using Deep Ensembles
  Fergus Imrie · Alexander Norcliffe · Pietro Liò · Mihaela van der Schaar
- 2021 Poster: Invariant Causal Imitation Learning for Generalizable Policies
  Ioana Bica · Daniel Jarrett · Mihaela van der Schaar
- 2021 Poster: Time-series Generation by Contrastive Imitation
  Daniel Jarrett · Ioana Bica · Mihaela van der Schaar
- 2021 Poster: Closing the loop in medical decision support by understanding clinical decision-making: A case study on organ transplantation
  Yuchao Qin · Fergus Imrie · Alihan Hüyük · Daniel Jarrett · Alexander Gimson · Mihaela van der Schaar
- 2021 Poster: DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks
  Boris van Breugel · Trent Kyono · Jeroen Berrevoets · Mihaela van der Schaar
- 2021 Poster: MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms
  Trent Kyono · Yao Zhang · Alexis Bellot · Mihaela van der Schaar
- 2021 Poster: Conformal Time-series Forecasting
  Kamile Stankeviciute · Ahmed Alaa · Mihaela van der Schaar
- 2021 Poster: Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression
  Zhaozhi Qian · William Zame · Lucas Fleuren · Paul Elbers · Mihaela van der Schaar
- 2021 Poster: SurvITE: Learning Heterogeneous Treatment Effects from Time-to-Event Data
  Alicia Curth · Changhee Lee · Mihaela van der Schaar
- 2021 Poster: On Inductive Biases for Heterogeneous Treatment Effect Estimation
  Alicia Curth · Mihaela van der Schaar
- 2021 Poster: SyncTwin: Treatment Effect Estimation with Longitudinal Outcomes
  Zhaozhi Qian · Yao Zhang · Ioana Bica · Angela Wood · Mihaela van der Schaar
- 2021 Poster: Estimating Multi-cause Treatment Effects via Single-cause Perturbation
  Zhaozhi Qian · Alicia Curth · Mihaela van der Schaar
- 2020 Poster: Learning outside the Black-Box: The pursuit of interpretable models
  Jonathan Crabbé · Yao Zhang · William Zame · Mihaela van der Schaar
- 2020: When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and Policy Assessment using Compartmental Gaussian Processes
  Zhaozhi Qian
- 2020 Poster: When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and Policy Assessment using Compartmental Gaussian Processes
  Zhaozhi Qian · Ahmed Alaa · Mihaela van der Schaar
- 2020 Oral: When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and Policy Assessment using Compartmental Gaussian Processes
  Zhaozhi Qian · Ahmed Alaa · Mihaela van der Schaar
- 2019 Poster: Time-series Generative Adversarial Networks
  Jinsung Yoon · Daniel Jarrett · Mihaela van der Schaar
- 2016 Poster: Balancing Suspense and Surprise: Timely Decision Making with Endogenous Information Acquisition
  Ahmed Alaa · Mihaela van der Schaar
- 2016 Poster: A Non-parametric Learning Method for Confidently Estimating Patient's Clinical State and Dynamics
  William Hoiles · Mihaela van der Schaar
- 2014 Poster: Discovering, Learning and Exploiting Relevance
  Cem Tekin · Mihaela van der Schaar