Oral
Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation
Diederik Kingma · Ruiqi Gao

Wed Dec 13 08:00 AM -- 08:15 AM (PST)

To achieve the highest perceptual quality, state-of-the-art diffusion models are optimized with objectives that typically look very different from the maximum likelihood and the Evidence Lower Bound (ELBO) objectives. In this work, we reveal that diffusion model objectives are actually closely related to the ELBO. Specifically, we show that all commonly used diffusion model objectives equate to a weighted integral of ELBOs over different noise levels, where the weighting depends on the specific objective used. Under the condition of monotonic weighting, the connection is even closer: the diffusion objective then equals the ELBO, combined with simple data augmentation, namely Gaussian noise perturbation. We show that this condition holds for a number of state-of-the-art diffusion models. In experiments, we explore new monotonic weightings and demonstrate their effectiveness, achieving state-of-the-art FID scores on the high-resolution ImageNet benchmark.
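
As a rough sketch of the abstract's central claim, in notation of our own rather than the paper's: let $\mathcal{L}_{\mathrm{ELBO}}(x;\lambda)$ denote the (negative) ELBO of a datapoint $x$ diffused to log signal-to-noise level $\lambda$. The first result says that every commonly used diffusion objective can be written as a weighted integral

$$\mathcal{L}_w(x) = \int w(\lambda)\, \mathcal{L}_{\mathrm{ELBO}}(x;\lambda)\, d\lambda,$$

with the weighting $w(\lambda)$ determined by the chosen objective. The second result says that when $w$ is monotonic, it can be normalized into a distribution $p_w(\lambda)$, so that, up to constants,

$$\mathcal{L}_w(x) \;\propto\; \mathbb{E}_{\lambda \sim p_w(\lambda)}\big[\mathcal{L}_{\mathrm{ELBO}}(x;\lambda)\big],$$

i.e. the ELBO averaged over Gaussian-noise-perturbed versions of the data.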

Author Information

Diederik Kingma (Google)

I do research on scalable methods for machine learning, with a focus on generative models. My contributions include the Variational Autoencoder (VAE), the Adam optimizer, Glow, and Variational Diffusion Models, but please see Scholar for a more complete list. I obtained a PhD (cum laude) from the University of Amsterdam in 2017, and was part of the founding team of OpenAI in 2015. Currently I work at Google DeepMind.

Ruiqi Gao (Google DeepMind)
