Poster
Repairing Neural Networks by Leaving the Right Past Behind
Ryutaro Tanno · Melanie F. Pradier · Aditya Nori · Yingzhen Li

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #522

Prediction failures of machine learning models often arise from deficiencies in the training data, such as incorrect labels, outliers, and selection biases. However, the data points responsible for a given failure mode are generally not known a priori, let alone a mechanism for repairing the failure. This work draws on the Bayesian view of continual learning and develops a generic framework for both identifying the training examples that have given rise to the target failure and fixing the model by erasing information about them. The framework naturally allows recent advances in continual learning to be applied to this new problem of model repairment, while subsuming existing work on influence functions and data deletion as specific instances. Experimentally, the proposed approach outperforms the baselines both at identifying detrimental training data and at fixing model failures in a generalisable manner.
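The abstract names influence functions as one of the special cases subsumed by the framework. As a purely illustrative, minimal sketch (not the authors' Bayesian continual-learning procedure), the snippet below scores training examples by how strongly their loss gradients oppose the gradient of a failure case's loss, a simplified influence-style heuristic; the toy model, random data, and names such as score_training_examples are hypothetical placeholders.

import torch
import torch.nn as nn

def flat_grad(loss, params):
    # Gradient of `loss` w.r.t. `params`, flattened into a single vector.
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def score_training_examples(model, loss_fn, train_set, failure_x, failure_y):
    # failure_y is the label the model *should* predict for failure_x.
    # A training example whose loss gradient opposes the failure gradient
    # tends to increase the failure loss when trained on, so a higher score
    # flags a more likely culprit. This is an illustrative heuristic only,
    # not the method proposed in the paper.
    params = [p for p in model.parameters() if p.requires_grad]
    g_fail = flat_grad(loss_fn(model(failure_x), failure_y), params)
    scores = []
    for x, y in train_set:
        g_i = flat_grad(loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)), params)
        scores.append(-torch.dot(g_fail, g_i).item())
    return scores

# Toy usage with random data (illustrative only).
model = nn.Linear(4, 3)
loss_fn = nn.CrossEntropyLoss()
train_set = [(torch.randn(4), torch.tensor(i % 3)) for i in range(8)]
failure_x, failure_y = torch.randn(1, 4), torch.tensor([0])
scores = score_training_examples(model, loss_fn, train_set, failure_x, failure_y)
suspects = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:3]

The paper's framework goes further: it identifies such examples within a Bayesian continual-learning formulation and then repairs the model by erasing their information, which this sketch does not attempt.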

Author Information

Ryutaro Tanno (Microsoft Research)

Senior Researcher at Microsoft Research Cambridge, UK. PhD in Machine Learning.

Melanie F. Pradier (Microsoft Research)
Aditya Nori (Microsoft Research, Cambridge UK)
Yingzhen Li (Imperial College London)

Yingzhen Li is a senior researcher at Microsoft Research Cambridge. She received her PhD from the University of Cambridge and previously interned at Disney Research. She is passionate about building reliable machine learning systems, and her approach combines Bayesian statistics and deep learning. Her contributions to the approximate inference field include: (1) algorithmic advances, such as variational inference with different divergences, combining variational inference with MCMC, and approximate inference with implicit distributions; (2) applications of approximate inference, such as uncertainty estimation in Bayesian neural networks and algorithms for training deep generative models. She has served as an area chair at NeurIPS/ICML/ICLR/AISTATS on related research topics, and she is a co-organizer of the AABI 2020 symposium, a flagship event on approximate inference.
