Prediction failures of machine learning models often arise from deficiencies in the training data, such as incorrect labels, outliers, and selection biases. However, the data points responsible for a given failure mode are generally not known a priori, let alone a mechanism for repairing the failure. This work draws on the Bayesian view of continual learning and develops a generic framework both for identifying training examples that have given rise to the target failure and for fixing the model by erasing information about them. The framework naturally allows recent advances in continual learning to be applied to this new problem of model repairment, while subsuming existing work on influence functions and data deletion as special cases. Experimentally, the proposed approach outperforms the baselines both at identifying detrimental training data and at fixing model failures in a generalisable manner.
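The two-step recipe in the abstract (identify the training points behind a failure, then erase their influence) can be illustrated with a first-order influence sketch — one of the special cases the framework is said to subsume, not the paper's Bayesian procedure. Everything below (the toy data, the gradient-alignment score, the ascent-based "erasure") is an illustrative simplification with made-up names:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y):
    """Gradient of the logistic loss for one example (x, y), with y in {0, 1}."""
    return (sigmoid(x @ w) - y) * x

# Toy training set labelled by the sign of the first feature,
# with one deliberately mislabelled, far-from-boundary point.
X = rng.normal(size=(50, 3))
y = (X[:, 0] > 0).astype(float)
X[7] = np.array([3.0, 0.0, 0.0])
y[7] = 0.0                      # true label would be 1: a planted label error

# Fit a logistic regression by plain gradient descent.
w = np.zeros(3)
for _ in range(500):
    w -= 0.1 * np.mean([grad_loss(w, xi, yi) for xi, yi in zip(X, y)], axis=0)

# Failure case: the model misbehaves on X[7], whose true label is 1.
g_fail = grad_loss(w, X[7], 1.0)

# Identification: score each training point by the alignment of its loss
# gradient with the failure gradient. A negative score means a descent step
# on that point *increases* the failure's loss, i.e. the point is detrimental.
scores = np.array([grad_loss(w, xi, yi) @ g_fail for xi, yi in zip(X, y)])
suspect = int(np.argmin(scores))

# Repair: crude "erasure" via gradient ascent on the suspect's loss,
# roughly undoing its contribution to training.
sig_before = sigmoid(X[7] @ w)
for _ in range(500):
    w += 0.1 * grad_loss(w, X[suspect], y[suspect]) / len(X)
sig_after = sigmoid(X[7] @ w)   # prediction moves toward the true label 1
```

In this toy setup the planted point is the unique one whose training gradient anti-aligns with the failure gradient, so the score identifies it, and the ascent step pushes the failed prediction back toward its true label. The paper's actual method replaces both heuristics with Bayesian continual-learning machinery.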
Author Information
Ryutaro Tanno (Microsoft Research)
Senior Researcher at Microsoft Research Cambridge, UK. PhD in Machine Learning.
Melanie F. Pradier (Microsoft Research)
Aditya Nori (Microsoft Research, Cambridge UK)
Yingzhen Li (Imperial College London)
Yingzhen Li is a senior researcher at Microsoft Research Cambridge. She received her PhD from the University of Cambridge and previously interned at Disney Research. She is passionate about building reliable machine learning systems, and her approach combines Bayesian statistics and deep learning. Her contributions to the approximate inference field include: (1) algorithmic advances, such as variational inference with different divergences, combining variational inference with MCMC, and approximate inference with implicit distributions; (2) applications of approximate inference, such as uncertainty estimation in Bayesian neural networks and algorithms for training deep generative models. She has served as an area chair at NeurIPS/ICML/ICLR/AISTATS on related research topics, and she is a co-organizer of the AABI 2020 symposium, a flagship event of the approximate inference community.
More from the Same Authors
- 2021: Accurate Imputation and Efficient Data Acquisition with Transformer-based VAEs
  Sarah Lewis · Tatiana Matejovicova · Yingzhen Li · Angus Lamb · Yordan Zaykov · Miltiadis Allamanis · Cheng Zhang
- 2021: Gradient Clustering for Subtyping of Prediction Failures
  Thomas Henn · Yasukazu Sakamoto · Clément Jacquet · Shunsuke Yoshizawa · Masamichi Andou · Stephen Tchen · Ryosuke Saga · Hiroyuki Ishihara · Katsuhiko Shimizu · Yingzhen Li · Ryutaro Tanno
- 2022 Poster: Scalable Infomin Learning
  Yanzhi Chen · Weihao Sun · Yingzhen Li · Adrian Weller
- 2023 Poster: Energy Discrepancies: A Score-Independent Loss for Energy-Based Models
  Tobias Schröder · Zijing Ou · Jen Lim · Yingzhen Li · Sebastian Vollmer · Andrew Duncan
- 2023 Workshop: Deep Generative Models for Health
  Emanuele Palumbo · Laura Manduchi · Sonia Laguna · Melanie F. Pradier · Vincent Fortuin · Stephan Mandt · Julia Vogt
- 2022 Workshop: I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification
  Arno Blaas · Sahra Ghalebikesabi · Javier Antorán · Fan Feng · Melanie F. Pradier · Ian Mason · David Rohde
- 2022: Poster session 1
  Yingzhen Li
- 2022 Workshop: NeurIPS 2022 Workshop on Score-Based Methods
  Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat
- 2022 Poster: Learning Neural Set Functions Under the Optimal Subset Oracle
  Zijing Ou · Tingyang Xu · Qinliang Su · Yingzhen Li · Peilin Zhao · Yatao Bian
- 2021 Workshop: Bridging the Gap: from Machine Learning Research to Clinical Practice
  Julia Vogt · Ece Ozkan · Sonali Parbhoo · Melanie F. Pradier · Patrick Schwab · Shengpu Tang · Mario Wieser · Jiayu Yao
- 2021 Workshop: Bayesian Deep Learning
  Yarin Gal · Yingzhen Li · Sebastian Farquhar · Christos Louizos · Eric Nalisnick · Andrew Gordon Wilson · Zoubin Ghahramani · Kevin Murphy · Max Welling
- 2021 Workshop: I (Still) Can't Believe It's Not Better: A workshop for “beautiful” ideas that "should" have worked
  Aaron Schein · Melanie F. Pradier · Jessica Forde · Stephanie Hyland · Francisco Ruiz
- 2021 Poster: Sparse Uncertainty Representation in Deep Learning with Inducing Weights
  Hippolyt Ritter · Martin Kukla · Cheng Zhang · Yingzhen Li
- 2021: Evaluating Approximate Inference in Bayesian Deep Learning + Q&A
  Andrew Gordon Wilson · Pavel Izmailov · Matthew Hoffman · Yarin Gal · Yingzhen Li · Melanie F. Pradier · Sharad Vikram · Andrew Foong · Sanae Lotfi · Sebastian Farquhar
- 2020 Workshop: I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning
  Jessica Forde · Francisco Ruiz · Melanie Fernandez Pradier · Aaron Schein · Finale Doshi-Velez · Isabel Valera · David Blei · Hanna Wallach
- 2020: Intro
  Aaron Schein · Melanie F. Pradier
- 2020 Poster: Disentangling Human Error from Ground Truth in Segmentation of Medical Images
  Le Zhang · Ryutaro Tanno · Mou-Cheng Xu · Chen Jin · Joseph Jacob · Olga Cicarrelli · Frederik Barkhof · Daniel Alexander
- 2020 Poster: On the Expressiveness of Approximate Inference in Bayesian Neural Networks
  Andrew Foong · David Burt · Yingzhen Li · Richard Turner
- 2020 Tutorial: (Track1) Advances in Approximate Inference
  Yingzhen Li · Cheng Zhang
- 2018: Poster Session
  Lorenzo Masoero · Tammo Rukat · Runjing Liu · Sayak Ray Chowdhury · Daniel Coelho de Castro · Claudia Wehrhahn · Feras Saad · Archit Verma · Kelvin Hsu · Irineo Cabreros · Sandhya Prabhakaran · Yiming Sun · Maxime Rischard · Linfeng Liu · Adam Farooq · Jeremiah Liu · Melanie F. Pradier · Diego Romeres · Neill Campbell · Kai Xu · Mehmet M Dundar · Tucker Keuter · Prashnna Gyawali · Eli Sennesh · Alessandro De Palma · Daniel Flam-Shepherd · Takatomi Kubo
- 2016 Poster: Measuring Neural Net Robustness with Constraints
  Osbert Bastani · Yani Ioannou · Leonidas Lampropoulos · Dimitrios Vytiniotis · Aditya Nori · Antonio Criminisi