Explanations of time-series models are valuable for high-stakes applications such as healthcare, yet they have received little attention in the machine learning literature. We propose FIT, a framework that evaluates the importance of observations for a multivariate time-series black-box model by quantifying the shift in the predictive distribution over time. FIT defines the importance of an observation as its contribution to this distributional shift, measured by a KL divergence that contrasts the predictive distribution against a counterfactual in which the remaining features are unobserved. We also demonstrate the need to control for time-dependent distribution shifts. We compare against state-of-the-art baselines on simulated and real-world clinical data and show that our approach is superior at identifying important time points and observations throughout the time series.
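The scoring idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact estimator; it assumes a binary prediction task, Bernoulli predictive distributions summarized by their success probabilities, and hypothetical inputs `p_full`, `p_past`, and `p_counterfactual` standing in for model outputs under different observation sets:

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-8):
    """KL divergence KL(Ber(p) || Ber(q)) between two Bernoulli distributions."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def fit_style_importance(p_full, p_past, p_counterfactual):
    """Illustrative FIT-style importance of an observation at time t.

    p_full:           prediction given all observations up to time t
    p_past:           prediction given observations up to time t-1 only
    p_counterfactual: prediction when only the feature of interest is
                      newly observed at time t (the rest held unobserved)

    The score is the total distributional shift at t minus the part of
    the shift left unexplained after observing the feature of interest.
    """
    total_shift = kl_bernoulli(p_full, p_past)
    residual_shift = kl_bernoulli(p_full, p_counterfactual)
    return total_shift - residual_shift

# A feature that alone moves the prediction close to the full posterior
# receives a high score; an uninformative one scores near zero.
score = fit_style_importance(p_full=0.9, p_past=0.4, p_counterfactual=0.85)
```

In this toy setting the feature explains most of the shift from 0.4 to 0.9, so the score is large and positive; a counterfactual prediction that stayed near 0.4 would drive it toward zero.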
Author Information
Sana Tonekaboni (University of Toronto / Vector Institute)
Shalmali Joshi (Harvard University (SEAS))
Kieran Campbell (University of British Columbia)
David Duvenaud (University of Toronto)
David Duvenaud is an assistant professor in computer science at the University of Toronto. His research focuses on continuous-time models, latent-variable models, and deep learning. He completed his postdoc at Harvard University and his Ph.D. at the University of Cambridge. David also co-founded Invenia, an energy forecasting and trading company.
Anna Goldenberg (University of Toronto)
More from the Same Authors
- 2022 : Feature Restricted Group Dropout for Robust Electronic Health Record Predictions »
  Bret Nestor · Anna Goldenberg · Marzyeh Ghassemi
- 2022 : "Why did the Model Fail?": Attributing Model Performance Changes to Distribution Shifts »
  Haoran Zhang · Harvineet Singh · Marzyeh Ghassemi · Shalmali Joshi
- 2022 : Multi-objective Bayesian Optimization with Heuristic Objectives for Biomedical and Molecular Data Analysis Workflows »
  Alina Selega · Kieran Campbell
- 2022 : Volume-based Performance not Guaranteed by Promising Patch-based Results in Medical Imaging »
  Abhishek Moturu · Sayali Joshi · Andrea Doria · Anna Goldenberg
- 2022 : Dissecting In-the-Wild Stress from Multimodal Sensor Data »
  Sujay Nagaraj · Thomas Hartvigsen · Adrian Boch · Luca Foschini · Marzyeh Ghassemi · Sarah Goodday · Stephen Friend · Anna Goldenberg
- 2022 Workshop: The Symbiosis of Deep Learning and Differential Equations II »
  Michael Poli · Winnie Xu · Estefany Kelly Buchanan · Maryam Hosseini · Luca Celotti · Martin Magill · Ermal Rrapaj · Qiyao Wei · Stefano Massaroli · Patrick Kidger · Archis Joglekar · Animesh Garg · David Duvenaud
- 2022 : Modeling Heart Rate Response to Exercise with Wearables Data »
  Achille Nazaret · Sana Tonekaboni · Gregory Darnell · Shirley Ren · Guillermo Sapiro · Andrew Miller
- 2022 : Continual Learning on Auxiliary tasks via Replayed Experiences: CLARE »
  Bohdan Naida · Addison Weatherhead · Sana Tonekaboni · Anna Goldenberg
- 2022 Workshop: Learning from Time Series for Health »
  Sana Tonekaboni · Thomas Hartvigsen · Satya Narayan Shukla · Gunnar Rätsch · Marzyeh Ghassemi · Anna Goldenberg
- 2021 : Dependent Types for Machine Learning in Dex - David Duvenaud - University of Toronto »
  David Duvenaud · AIPLANS 2021
- 2021 Poster: Towards Robust and Reliable Algorithmic Recourse »
  Sohini Upadhyay · Shalmali Joshi · Himabindu Lakkaraju
- 2021 Poster: Meta-learning to Improve Pre-training »
  Aniruddh Raghu · Jonathan Lorraine · Simon Kornblith · Matthew McDermott · David Duvenaud
- 2020 : Panel discussion 2 »
  Danielle S Bassett · Yoshua Bengio · Cristina Savin · David Duvenaud · Anna Choromanska · Yanping Huang
- 2020 : Invited Talk David Duvenaud »
  David Duvenaud
- 2020 Tutorial: (Track3) Deep Implicit Layers: Neural ODEs, Equilibrium Models, and Differentiable Optimization Q&A »
  David Duvenaud · J. Zico Kolter · Matthew Johnson
- 2020 Poster: Learning Differential Equations that are Easy to Solve »
  Jacob Kelly · Jesse Bettencourt · Matthew Johnson · David Duvenaud
- 2020 Tutorial: (Track3) Deep Implicit Layers: Neural ODEs, Equilibrium Models, and Differentiable Optimization »
  David Duvenaud · J. Zico Kolter · Matthew Johnson
- 2019 Workshop: Program Transformations for ML »
  Pascal Lamblin · Atilim Gunes Baydin · Alexander Wiltschko · Bart van Merriënboer · Emily Fertig · Barak Pearlmutter · David Duvenaud · Laurent Hascoet
- 2019 : Molecules and Genomes »
  David Haussler · Djork-Arné Clevert · Michael Keiser · Alan Aspuru-Guzik · David Duvenaud · David Jones · Jennifer Wei · Alexander D'Amour
- 2019 Poster: Latent Ordinary Differential Equations for Irregularly-Sampled Time Series »
  Yulia Rubanova · Tian Qi Chen · David Duvenaud
- 2019 Poster: Residual Flows for Invertible Generative Modeling »
  Tian Qi Chen · Jens Behrmann · David Duvenaud · Joern-Henrik Jacobsen
- 2019 Spotlight: Residual Flows for Invertible Generative Modeling »
  Tian Qi Chen · Jens Behrmann · David Duvenaud · Joern-Henrik Jacobsen
- 2019 Poster: Efficient Graph Generation with Graph Recurrent Attention Networks »
  Renjie Liao · Yujia Li · Yang Song · Shenlong Wang · Will Hamilton · David Duvenaud · Raquel Urtasun · Richard Zemel
- 2019 Poster: Neural Networks with Cheap Differential Operators »
  Tian Qi Chen · David Duvenaud
- 2019 Spotlight: Neural Networks with Cheap Differential Operators »
  Tian Qi Chen · David Duvenaud
- 2018 : Software Panel »
  Ben Letham · David Duvenaud · Dustin Tran · Aki Vehtari
- 2018 Poster: Isolating Sources of Disentanglement in Variational Autoencoders »
  Tian Qi Chen · Xuechen (Chen) Li · Roger Grosse · David Duvenaud
- 2018 Oral: Isolating Sources of Disentanglement in Variational Autoencoders »
  Tian Qi Chen · Xuechen (Chen) Li · Roger Grosse · David Duvenaud
- 2018 Poster: Neural Ordinary Differential Equations »
  Tian Qi Chen · Yulia Rubanova · Jesse Bettencourt · David Duvenaud
- 2018 Oral: Neural Ordinary Differential Equations »
  Tian Qi Chen · Yulia Rubanova · Jesse Bettencourt · David Duvenaud
- 2017 Workshop: Aligned Artificial Intelligence »
  Dylan Hadfield-Menell · Jacob Steinhardt · David Duvenaud · David Krueger · Anca Dragan
- 2017 : Automatic Chemical Design Using a Data-driven Continuous Representation of Molecules »
  David Duvenaud
- 2017 Poster: Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference »
  Geoffrey Roeder · Yuhuai Wu · David Duvenaud
- 2016 : Generating Class-conditional Images with Gradient-based Inference »
  David Duvenaud
- 2016 : David Duvenaud – No more mini-languages: The power of autodiffing full-featured Python »
  David Duvenaud
- 2016 Workshop: Reliable Machine Learning in the Wild »
  Dylan Hadfield-Menell · Adrian Weller · David Duvenaud · Jacob Steinhardt · Percy Liang
- 2016 Poster: Composing graphical models with neural networks for structured representations and fast inference »
  Matthew Johnson · David Duvenaud · Alex Wiltschko · Ryan Adams · Sandeep R Datta
- 2016 Poster: Probing the Compositionality of Intuitive Functions »
  Eric Schulz · Josh Tenenbaum · David Duvenaud · Maarten Speekenbrink · Samuel J Gershman
- 2015 : *David Duvenaud* Automatic Differentiation: The most criminally underused tool in probabilistic numerics »
  David Duvenaud
- 2015 Poster: Convolutional Networks on Graphs for Learning Molecular Fingerprints »
  David Duvenaud · Dougal Maclaurin · Jorge Iparraguirre · Rafael Bombarell · Timothy Hirzel · Alan Aspuru-Guzik · Ryan Adams
- 2014 Poster: Probabilistic ODE Solvers with Runge-Kutta Means »
  Michael Schober · David Duvenaud · Philipp Hennig
- 2014 Oral: Probabilistic ODE Solvers with Runge-Kutta Means »
  Michael Schober · David Duvenaud · Philipp Hennig
- 2012 Poster: Active Learning of Model Evidence Using Bayesian Quadrature »
  Michael A Osborne · David Duvenaud · Roman Garnett · Carl Edward Rasmussen · Stephen J Roberts · Zoubin Ghahramani
- 2011 Poster: Additive Gaussian Processes »
  David Duvenaud · Hannes Nickisch · Carl Edward Rasmussen