Energy-based modeling is a promising approach to unsupervised learning that yields many downstream applications from a single model. The main difficulty in learning energy-based models with contrastive approaches is generating samples from the current energy function at each iteration. Many advances have been made to perform this subroutine cheaply. Nevertheless, all such sampling paradigms run MCMC targeting the current model, which in principle requires infinitely long chains to produce samples from the true energy distribution and is problematic in practice. This paper proposes an alternative way to obtain these samples that avoids crude MCMC sampling from the current model. We accomplish this by viewing the evolution of the model distribution as (i) the evolution of the energy function, and (ii) the evolution of samples from this distribution along some vector field. We then derive this time-dependent vector field such that particles following it are approximately distributed according to the current model density. We thereby match the evolution of the particles with the evolution of the energy function prescribed by the learning procedure. Importantly, unlike Monte Carlo sampling, our method aims to match the current distribution in finite time. Finally, we demonstrate its effectiveness empirically by comparing it to MCMC-based learning methods.
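The loop described in the abstract (update the energy, then transport a persistent particle set along a vector field that tracks that update, instead of re-running MCMC) can be illustrated on a toy quadratic energy. The following is a minimal NumPy sketch under strong simplifying assumptions, not the paper's method: the energy is E_mu(x) = ||x - mu||^2 / 2 (a unit Gaussian with learnable mean), for which the transport field happens to have an exact closed form (a pure translation). Deriving such a field for general energies is the paper's contribution and is not reproduced here; all variable names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EBM: E_mu(x) = ||x - mu||^2 / 2, i.e. a unit Gaussian with mean mu.
# Contrastive learning needs samples from the *current* model at every step.
# Instead of running MCMC, we keep a persistent particle set and transport it
# with the vector field induced by the energy update. For this quadratic
# energy the field is exact: when mu moves, the model density just translates.

data = rng.normal(loc=3.0, scale=1.0, size=(1000, 2))  # target mean near (3, 3)
particles = rng.normal(size=(1000, 2))                  # samples from initial model (mu = 0)
mu = np.zeros(2)
lr = 0.5

for _ in range(50):
    # Contrastive gradient of E_data[E] - E_particles[E] with respect to mu.
    grad = particles.mean(axis=0) - data.mean(axis=0)
    delta = -lr * grad
    mu = mu + delta
    # Transport particles along the field matching the energy's evolution
    # (exact here: the Gaussian simply translates with mu).
    particles = particles + delta

print(mu)                      # close to the data mean, near (3, 3)
print(particles.mean(axis=0))  # particles track the current model density
```

Because the particles are moved in lockstep with the model at every step, they remain (approximately) samples from the current density throughout training, which is exactly the property the abstract claims MCMC can only achieve with infinitely long chains.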
Author Information
Kirill Neklyudov (Vector Institute)
Priyank Jaini (University of Amsterdam)
Max Welling (University of Amsterdam / Qualcomm AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 : Particle Dynamics for Learning EBMs (Tue. Dec 14th, 03:10 -- 03:20 PM)
More from the Same Authors
- 2022 : Program Synthesis for Integer Sequence Generation (Natasha Butt · Auke Wiggers · Taco Cohen · Max Welling)
- 2022 : Action Matching: A Variational Method for Learning Stochastic Dynamics from Samples (Kirill Neklyudov · Daniel Severo · Alireza Makhzani)
- 2022 : Invited Talk #4, The Fifth Paradigm of Scientific Discovery, Max Welling (Max Welling)
- 2021 : General Discussion 1 - What is out of distribution (OOD) generalization and why is it important? with Yoshua Bengio, Leyla Isik, Max Welling (Yoshua Bengio · Leyla Isik · Max Welling · Joshua T Vogelstein · Weiwei Yang)
- 2021 : Modeling Category-Selective Cortical Regions with Topographic Variational Autoencoders (T. Anderson Keller · Qinghe Gao · Max Welling)
- 2021 Workshop: AI for Science: Mind the Gaps (Payal Chandak · Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Gabriel Spadon · Max Tegmark · Hanchen Wang · Adrian Weller · Max Welling · Marinka Zitnik)
- 2021 Poster: Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions (Emiel Hoogeboom · Didrik Nielsen · Priyank Jaini · Patrick Forré · Max Welling)
- 2021 Poster: Topographic VAEs learn Equivariant Capsules (T. Anderson Keller · Max Welling)
- 2021 Poster: Learning Equivariant Energy Based Models with Equivariant Stein Variational Gradient Descent (Priyank Jaini · Lars Holdijk · Max Welling)
- 2021 Poster: E(n) Equivariant Normalizing Flows (Victor Garcia Satorras · Emiel Hoogeboom · Fabian Fuchs · Ingmar Posner · Max Welling)
- 2021 Poster: Modality-Agnostic Topology Aware Localization (Farhad Ghazvinian Zanjani · Ilia Karmanov · Hanno Ackermann · Daniel Dijkman · Simone Merlin · Max Welling · Fatih Porikli)
- 2021 Oral: E(n) Equivariant Normalizing Flows (Victor Garcia Satorras · Emiel Hoogeboom · Fabian Fuchs · Ingmar Posner · Max Welling)
- 2020 Poster: SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows (Didrik Nielsen · Priyank Jaini · Emiel Hoogeboom · Ole Winther · Max Welling)
- 2020 Oral: SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows (Didrik Nielsen · Priyank Jaini · Emiel Hoogeboom · Ole Winther · Max Welling)
- 2019 : Keynote - ML (Max Welling)
- 2019 Poster: The Implicit Metropolis-Hastings Algorithm (Kirill Neklyudov · Evgenii Egorov · Dmitry Vetrov)
- 2017 Poster: Causal Effect Inference with Deep Latent-Variable Models (Christos Louizos · Uri Shalit · Joris Mooij · David Sontag · Richard Zemel · Max Welling)
- 2017 Poster: Bayesian Compression for Deep Learning (Christos Louizos · Karen Ullrich · Max Welling)
- 2017 Poster: Structured Bayesian Pruning via Log-Normal Multiplicative Noise (Kirill Neklyudov · Dmitry Molchanov · Arsenii Ashukha · Dmitry Vetrov)
- 2016 Workshop: Bayesian Deep Learning (Yarin Gal · Christos Louizos · Zoubin Ghahramani · Kevin Murphy · Max Welling)
- 2015 Poster: Bayesian dark knowledge (Anoop Korattikara Balan · Vivek Rathod · Kevin Murphy · Max Welling)