Recent works propose using the discriminator of a GAN to filter out unrealistic samples from the generator. We generalize these ideas by introducing the implicit Metropolis-Hastings algorithm. For any implicit probabilistic model and a target distribution represented by a set of samples, implicit Metropolis-Hastings operates by learning a discriminator to estimate the density ratio and then generating a chain of samples. Since the density-ratio approximation introduces an error at every step of the chain, it is crucial to analyze the stationary distribution of such a chain. To this end, we present a theoretical result stating that the discriminator loss upper-bounds the total variation distance between the target distribution and the stationary distribution. Finally, we validate the proposed algorithm for both independent and Markov proposals on the CIFAR-10, CelebA, and ImageNet datasets.
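The procedure in the abstract can be sketched in a few lines. This is a toy illustration, not the paper's implementation: a closed-form "oracle" discriminator d(x) = p(x) / (p(x) + q(x)) stands in for a trained network, and the Gaussian target and independent proposal are assumptions chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    """Gaussian density; used only to build the toy oracle discriminator."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def density_ratio(x, d):
    """Map a discriminator output d(x) in (0, 1) to a density-ratio
    estimate p(x)/q(x) = d(x) / (1 - d(x))."""
    out = d(x)
    return out / (1.0 - out)

def implicit_mh_chain(x0, propose, d, n_steps):
    """Implicit Metropolis-Hastings with an independent proposal:
    accept x' with probability min(1, r(x')/r(x)), where the ratio r
    comes from the discriminator instead of explicit densities."""
    x, r_x = x0, density_ratio(x0, d)
    chain = [x0]
    for _ in range(n_steps):
        x_new = propose()
        r_new = density_ratio(x_new, d)
        if rng.random() < min(1.0, r_new / r_x):
            x, r_x = x_new, r_new
        chain.append(x)
    return np.array(chain)

# Hypothetical setup: target N(0, 1), independent proposal N(0, 2),
# oracle discriminator d(x) = p(x) / (p(x) + q(x)).
propose = lambda: rng.normal(0.0, 2.0)
d_oracle = lambda x: gauss_pdf(x, 0.0, 1.0) / (
    gauss_pdf(x, 0.0, 1.0) + gauss_pdf(x, 0.0, 2.0)
)

chain = implicit_mh_chain(0.0, propose, d_oracle, 20_000)
```

With a perfect discriminator the chain targets p exactly; the paper's theoretical result quantifies how an imperfect (learned) discriminator shifts the stationary distribution, in total variation, by at most the discriminator loss.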
Author Information
Kirill Neklyudov (Samsung AI Center, Moscow)
Evgenii Egorov (Skolkovo Institute of Science and Technology)
Dmitry Vetrov (Higher School of Economics, Samsung AI Center, Moscow)
More from the Same Authors
- 2021 : Particle Dynamics for Learning EBMs »
  Kirill Neklyudov · Priyank Jaini · Max Welling
- 2022 Poster: HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks »
  Aibek Alanov · Vadim Titov · Dmitry Vetrov
- 2022 : Action Matching: A Variational Method for Learning Stochastic Dynamics from Samples »
  Kirill Neklyudov · Daniel Severo · Alireza Makhzani
- 2023 Poster: Star-Shaped Denoising Diffusion Probabilistic Models »
  Andrey Okhotin · Dmitry Molchanov · Arkhipkin Vladimir · Grigory Bartosh · Viktor Ohanesian · Aibek Alanov · Dmitry Vetrov
- 2023 Poster: Wasserstein Quantum Monte Carlo: A Novel Approach for Solving the Quantum Many-Body Schrödinger Equation »
  Kirill Neklyudov · Jannes Nys · Luca Thiede · Juan Carrasquilla · Qiang Liu · Max Welling · Alireza Makhzani
- 2023 Poster: Entropic Neural Optimal Transport via Diffusion Processes »
  Nikita Gushchin · Alexander Kolesov · Alexander Korotin · Dmitry Vetrov · Evgeny Burnaev
- 2023 Poster: To Stay or Not to Stay in the Pre-train Basin: Insights on Ensembling in Transfer Learning »
  Ildus Sadrtdinov · Dmitrii Pozdeev · Dmitry Vetrov · Ekaterina Lobacheva
- 2023 Oral: Entropic Neural Optimal Transport via Diffusion Processes »
  Nikita Gushchin · Alexander Kolesov · Alexander Korotin · Dmitry Vetrov · Evgeny Burnaev
- 2022 Spotlight: Lightning Talks 3B-2 »
  Yu Huang · Tero Karras · Maxim Kodryan · Shiau Hong Lim · Shudong Huang · Ziyu Wang · Siqiao Xue · ILYAS MALIK · Ekaterina Lobacheva · Miika Aittala · Hongjie Wu · Yuhao Zhou · Yingbin Liang · Xiaoming Shi · Jun Zhu · Maksim Nakhodnov · Timo Aila · Yazhou Ren · James Zhang · Longbo Huang · Dmitry Vetrov · Ivor Tsang · Hongyuan Mei · Samuli Laine · Zenglin Xu · Wentao Feng · Jiancheng Lv
- 2022 Spotlight: HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks »
  Aibek Alanov · Vadim Titov · Dmitry Vetrov
- 2022 Spotlight: Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes »
  Maxim Kodryan · Ekaterina Lobacheva · Maksim Nakhodnov · Dmitry Vetrov
- 2022 Spotlight: Lightning Talks 3B-1 »
  Tianying Ji · Tongda Xu · Giulia Denevi · Aibek Alanov · Martin Wistuba · Wei Zhang · Yuesong Shen · Massimiliano Pontil · Vadim Titov · Yan Wang · Yu Luo · Daniel Cremers · Yanjun Han · Arlind Kadra · Dailan He · Josif Grabocka · Zhengyuan Zhou · Fuchun Sun · Carlo Ciliberto · Dmitry Vetrov · Mingxuan Jing · Chenjian Gao · Aaron Flores · Tsachy Weissman · Han Gao · Fengxiang He · Kunzan Liu · Wenbing Huang · Hongwei Qin
- 2022 Poster: Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes »
  Maxim Kodryan · Ekaterina Lobacheva · Maksim Nakhodnov · Dmitry Vetrov
- 2021 Poster: Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces »
  Kirill Struminsky · Artyom Gadetsky · Denis Rakitin · Danil Karpushkin · Dmitry Vetrov
- 2021 Poster: On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay »
  Ekaterina Lobacheva · Maxim Kodryan · Nadezhda Chirkova · Andrey Malinin · Dmitry Vetrov
- 2021 Poster: BooVAE: Boosting Approach for Continual Learning of VAE »
  Evgenii Egorov · Anna Kuzina · Evgeny Burnaev
- 2020 Poster: On Power Laws in Deep Ensembles »
  Ekaterina Lobacheva · Nadezhda Chirkova · Maxim Kodryan · Dmitry Vetrov
- 2020 Spotlight: On Power Laws in Deep Ensembles »
  Ekaterina Lobacheva · Nadezhda Chirkova · Maxim Kodryan · Dmitry Vetrov
- 2019 Poster: Importance Weighted Hierarchical Variational Inference »
  Artem Sobolev · Dmitry Vetrov
- 2019 Poster: A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models »
  Maxim Kuznetsov · Daniil Polykovskiy · Dmitry Vetrov · Alex Zhebrak
- 2019 Poster: A Simple Baseline for Bayesian Uncertainty in Deep Learning »
  Wesley Maddox · Pavel Izmailov · Timur Garipov · Dmitry Vetrov · Andrew Gordon Wilson
- 2018 : TBC 2 »
  Dmitry Vetrov
- 2018 Poster: Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs »
  Timur Garipov · Pavel Izmailov · Dmitrii Podoprikhin · Dmitry Vetrov · Andrew Wilson
- 2018 Spotlight: Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs »
  Timur Garipov · Pavel Izmailov · Dmitrii Podoprikhin · Dmitry Vetrov · Andrew Wilson
- 2017 Poster: Structured Bayesian Pruning via Log-Normal Multiplicative Noise »
  Kirill Neklyudov · Dmitry Molchanov · Arsenii Ashukha · Dmitry Vetrov
- 2016 Poster: PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions »
  Mikhail Figurnov · Aizhan Ibraimova · Dmitry Vetrov · Pushmeet Kohli
- 2015 Poster: M-Best-Diverse Labelings for Submodular Energies and Beyond »
  Alexander Kirillov · Dmytro Shlezinger · Dmitry Vetrov · Carsten Rother · Bogdan Savchynskyy
- 2015 Poster: Tensorizing Neural Networks »
  Alexander Novikov · Dmitrii Podoprikhin · Anton Osokin · Dmitry Vetrov