Variational Diffusion Models »
Abstract: Diffusion-based generative models have demonstrated a capacity for perceptually impressive synthesis, but can they also be great likelihood-based models? We answer this in the affirmative, and introduce a family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks. Unlike other diffusion-based models, our method allows for efficient optimization of the noise schedule jointly with the rest of the model. We show that the variational lower bound (VLB) simplifies to a remarkably short expression in terms of the signal-to-noise ratio of the diffused data, improving our theoretical understanding of this model class. Using this insight, we prove an equivalence between several models proposed in the literature. We further show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints; this allows us to learn a noise schedule that minimizes the variance of the resulting VLB estimator, leading to faster optimization. Combining these advances with architectural improvements, we outperform the autoregressive models that have dominated these benchmarks for many years, often with significantly faster optimization. Finally, we show how to use the model as part of a bits-back compression scheme, and demonstrate lossless compression rates close to the theoretical optimum.
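The central quantity in these results is the signal-to-noise ratio of the diffused data. As a hedged sketch of the key identity (the notation follows the paper only up to minor details; the equations below are illustrative rather than verbatim): with diffused latents z_t = α_t x + σ_t ε and SNR(t) = α_t² / σ_t², the continuous-time diffusion loss can be written as

\[
\mathcal{L}_\infty(x) = -\tfrac{1}{2}\,\mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)} \int_0^1 \mathrm{SNR}'(t)\,\big\| x - \hat{x}_\theta(z_t; t) \big\|_2^2 \, dt,
\]

and the change of variables v = SNR(t) gives

\[
\mathcal{L}_\infty(x) = \tfrac{1}{2}\,\mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)} \int_{\mathrm{SNR}_{\min}}^{\mathrm{SNR}_{\max}} \big\| x - \tilde{x}_\theta(z_v; v) \big\|_2^2 \, dv,
\]

which depends on the noise schedule only through its endpoints, the invariance stated in the abstract. The interior of the schedule then affects only the variance of the Monte Carlo estimator of this integral, which is why it can be optimized for variance reduction without changing the bound itself.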
Author Information
Diederik Kingma (Google)
Tim Salimans (Google Brain Amsterdam)
Ben Poole (Google Brain)
Jonathan Ho (Google Brain)
More from the Same Authors
- 2021 : Palette: Image-to-Image Diffusion Models »
  Chitwan Saharia · William Chan · Huiwen Chang · Chris Lee · Jonathan Ho · Tim Salimans · David Fleet · Mohammad Norouzi
- 2021 : Classifier-Free Diffusion Guidance »
  Jonathan Ho · Tim Salimans
- 2022 : On Distillation of Guided Diffusion Models »
  Chenlin Meng · Ruiqi Gao · Diederik Kingma · Stefano Ermon · Jonathan Ho · Tim Salimans
- 2023 Poster: Understanding Diffusion Objectives as the ELBO with Data Augmentation »
  Diederik Kingma · Ruiqi Gao
- 2023 Oral: Understanding Diffusion Objectives as the ELBO with Data Augmentation »
  Diederik Kingma · Ruiqi Gao
- 2022 Poster: Video Diffusion Models »
  Jonathan Ho · Tim Salimans · Alexey Gritsenko · William Chan · Mohammad Norouzi · David Fleet
- 2022 Poster: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding »
  Chitwan Saharia · William Chan · Saurabh Saxena · Lala Li · Jay Whang · Remi Denton · Kamyar Ghasemipour · Raphael Gontijo Lopes · Burcu Karagol Ayan · Tim Salimans · Jonathan Ho · David Fleet · Mohammad Norouzi
- 2021 Poster: Structured Denoising Diffusion Models in Discrete State-Spaces »
  Jacob Austin · Daniel D. Johnson · Jonathan Ho · Daniel Tarlow · Rianne van den Berg
- 2020 Poster: ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA »
  Ilyes Khemakhem · Ricardo Monti · Diederik Kingma · Aapo Hyvarinen
- 2020 Poster: A Spectral Energy Distance for Parallel Speech Synthesis »
  Alexey Gritsenko · Tim Salimans · Rianne van den Berg · Jasper Snoek · Nal Kalchbrenner
- 2020 Spotlight: ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA »
  Ilyes Khemakhem · Ricardo Monti · Diederik Kingma · Aapo Hyvarinen
- 2018 Poster: Glow: Generative Flow with Invertible 1x1 Convolutions »
  Diederik Kingma · Prafulla Dhariwal
- 2017 Workshop: Bayesian Deep Learning »
  Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew Wilson · Diederik Kingma · Zoubin Ghahramani · Kevin Murphy · Max Welling
- 2016 Poster: Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks »
  Tim Salimans · Diederik Kingma
- 2016 Oral: Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks »
  Tim Salimans · Diederik Kingma
- 2016 Poster: Generative Adversarial Imitation Learning »
  Jonathan Ho · Stefano Ermon
- 2016 Poster: Improving Variational Autoencoders with Inverse Autoregressive Flow »
  Diederik Kingma · Tim Salimans · Rafal Jozefowicz · Peter Chen · Xi Chen · Ilya Sutskever · Max Welling
- 2015 : Variational Auto-Encoders and Extensions »
  Diederik Kingma
- 2015 Poster: Variational Dropout and the Local Reparameterization Trick »
  Diederik Kingma · Tim Salimans · Max Welling
- 2014 Poster: Semi-supervised Learning with Deep Generative Models »
  Diederik Kingma · Shakir Mohamed · Danilo Jimenez Rezende · Max Welling
- 2014 Spotlight: Semi-supervised Learning with Deep Generative Models »
  Diederik Kingma · Shakir Mohamed · Danilo Jimenez Rezende · Max Welling
- 2010 Poster: Regularized estimation of image statistics by Score Matching »
  Diederik Kingma · Yann LeCun