Energy-based models are a simple yet powerful class of probabilistic models, but their widespread adoption has been limited by the computational burden of training them. We propose a novel loss function called Energy Discrepancy (ED), which requires neither score computations nor expensive Markov chain Monte Carlo. We show that energy discrepancy approaches the explicit score matching loss and the negative log-likelihood loss in different limits, effectively interpolating between the two. Consequently, minimum energy discrepancy estimation overcomes the nearsightedness problem encountered in score-based estimation methods, while also enjoying theoretical guarantees. Through numerical experiments, we demonstrate that ED learns low-dimensional data distributions faster and more accurately than explicit score matching or contrastive divergence. For high-dimensional image data, we describe how the manifold hypothesis places limitations on our approach, and we demonstrate the effectiveness of energy discrepancy by training the energy-based model as the prior of a variational decoder model.
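The abstract describes a loss that trains an energy function by contrasting data against perturbed samples, with no scores or MCMC. The following is a minimal NumPy sketch of one plausible Monte Carlo estimator of such a perturbation-based loss; the function name, the Gaussian perturbation of scale sqrt(t), the number of recovery samples `M`, and the stabilization weight `w` are illustrative assumptions based on the description above, not the paper's verbatim implementation.

```python
import numpy as np

def energy_discrepancy_loss(energy, x, t=0.25, M=16, w=1.0, rng=None):
    """Monte Carlo sketch of an energy-discrepancy-style loss.

    energy : callable mapping an (..., d) array to (...,) energies
    x      : (N, d) batch of data points
    Each data point x is perturbed to y = x + sqrt(t)*xi, then M
    "recovery" samples x'_j = y + sqrt(t)*xi'_j are drawn and their
    energies contrasted with the energy of the data point. No score
    function or Markov chain sampling is needed.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, d = x.shape
    y = x + np.sqrt(t) * rng.standard_normal((N, d))                   # forward perturbation
    xp = y[:, None, :] + np.sqrt(t) * rng.standard_normal((N, M, d))   # recovery samples
    diff = energy(x)[:, None] - energy(xp)                             # (N, M) energy contrasts
    # the w/M term stabilises the logarithm when all contrasts are very negative
    return np.mean(np.log(w / M + np.mean(np.exp(diff), axis=1)))

# toy quadratic energy U(x) = ||x||^2 / 2 (standard Gaussian up to normalisation)
quad = lambda z: 0.5 * np.sum(z**2, axis=-1)
x = np.random.default_rng(0).standard_normal((128, 2))
loss = energy_discrepancy_loss(quad, x, rng=np.random.default_rng(1))
print(np.isfinite(loss))
```

In a training loop the same computation would be written in an autodiff framework so that gradients of the loss flow into the parameters of `energy`; the stabilization weight `w` keeps the logarithm well-defined when every recovery sample has much higher energy than the data point.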
Author Information
Tobias Schröder (Imperial College London)
Zijing Ou (Imperial College London)
Jen Lim (The University of Warwick)
Yingzhen Li (Imperial College London)
Yingzhen Li is a senior researcher at Microsoft Research Cambridge. She received her PhD from the University of Cambridge and previously interned at Disney Research. She is passionate about building reliable machine learning systems, and her approach combines Bayesian statistics and deep learning. Her contributions to the approximate inference field include: (1) algorithmic advances, such as variational inference with different divergences, combining variational inference with MCMC, and approximate inference with implicit distributions; (2) applications of approximate inference, such as uncertainty estimation in Bayesian neural networks and algorithms to train deep generative models. She has served as an area chair at NeurIPS/ICML/ICLR/AISTATS on related research topics, and she is a co-organizer of the AABI 2020 symposium, a flagship event of approximate inference.
Sebastian Vollmer (DFKI)
Andrew Duncan (Imperial College London)
More from the Same Authors
- 2020 : Probabilistic Adjoint Sensitivity Analysis for Fast Calibration of Partial Differential Equation Models »
  Jonathan Cockayne · Andrew Duncan
- 2020 : Bayesian polynomial chaos »
  Pranay Seshadri · Andrew Duncan · Ashley Scillitoe
- 2021 : Accurate Imputation and Efficient Data Acquisition with Transformer-based VAEs »
  Sarah Lewis · Tatiana Matejovicova · Yingzhen Li · Angus Lamb · Yordan Zaykov · Miltiadis Allamanis · Cheng Zhang
- 2022 Poster: Scalable Infomin Learning »
  Yanzhi Chen · Weihao Sun · Yingzhen Li · Adrian Weller
- 2022 : Using Perturbation to Improve Goodness-of-Fit Tests based on Kernelized Stein Discrepancy »
  Xing Liu · Andrew Duncan · Axel Gandy
- 2022 : Poster session 1 »
  Yingzhen Li
- 2022 Workshop: NeurIPS 2022 Workshop on Score-Based Methods »
  Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat
- 2022 Poster: Repairing Neural Networks by Leaving the Right Past Behind »
  Ryutaro Tanno · Melanie F. Pradier · Aditya Nori · Yingzhen Li
- 2022 Poster: Learning Neural Set Functions Under the Optimal Subset Oracle »
  Zijing Ou · Tingyang Xu · Qinliang Su · Yingzhen Li · Peilin Zhao · Yatao Bian
- 2021 Workshop: Bayesian Deep Learning »
  Yarin Gal · Yingzhen Li · Sebastian Farquhar · Christos Louizos · Eric Nalisnick · Andrew Gordon Wilson · Zoubin Ghahramani · Kevin Murphy · Max Welling
- 2021 Poster: Sparse Uncertainty Representation in Deep Learning with Inducing Weights »
  Hippolyt Ritter · Martin Kukla · Cheng Zhang · Yingzhen Li
- 2021 : Evaluating Approximate Inference in Bayesian Deep Learning + Q&A »
  Andrew Gordon Wilson · Pavel Izmailov · Matthew Hoffman · Yarin Gal · Yingzhen Li · Melanie F. Pradier · Sharad Vikram · Andrew Foong · Sanae Lotfi · Sebastian Farquhar
- 2020 Poster: On the Expressiveness of Approximate Inference in Bayesian Neural Networks »
  Andrew Foong · David Burt · Yingzhen Li · Richard Turner
- 2020 Tutorial: (Track1) Advances in Approximate Inference »
  Yingzhen Li · Cheng Zhang
- 2019 Poster: Minimum Stein Discrepancy Estimators »
  Alessandro Barp · Francois-Xavier Briol · Andrew Duncan · Mark Girolami · Lester Mackey