Recently, diffusion models have shown remarkable results in image synthesis by gradually removing noise and amplifying signals. Although this simple generative process works surprisingly well, is it the best way to generate image data? For instance, although human perception is more sensitive to the low-frequency components of an image, diffusion models themselves do not account for the relative importance of different frequency components. Therefore, to incorporate this inductive bias for image data, we propose a novel generative process that synthesizes images in a coarse-to-fine manner. First, we generalize the standard diffusion model by enabling diffusion in a rotated coordinate system, where each component of the vector diffuses at a different velocity. We further propose blur diffusion as a special case, in which each frequency component of an image is diffused at a different speed. Specifically, the proposed blur diffusion consists of a forward process that gradually blurs an image and adds noise, and a corresponding reverse process that progressively deblurs the image and removes noise. Experiments show that the proposed model outperforms the previous method in FID on the LSUN bedroom and church datasets.
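To make the forward process concrete, below is a minimal illustrative sketch of a blur-plus-noise forward step, not the authors' exact formulation. It uses an orthonormal DCT as the frequency basis (the DCT diagonalizes heat-equation blur under reflective boundaries); the decay-rate map `tau`, the noise level `sigma`, and the decay scale are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): attenuate high frequencies faster
# than low frequencies, then add isotropic Gaussian noise, mimicking a
# "blur diffusion" forward step.
import numpy as np
from scipy.fft import dctn, idctn

def blur_diffuse(x0: np.ndarray, t: float, sigma: float = 0.1) -> np.ndarray:
    """Sample x_t from a blur-plus-noise forward process at time t in [0, 1].

    x0: grayscale image of shape (H, W), values roughly in [0, 1].
    """
    h, w = x0.shape
    # Per-frequency decay rates: low frequencies decay slowly, high ones quickly
    # (illustrative choice, not the paper's schedule).
    fy = np.arange(h)[:, None] / h
    fx = np.arange(w)[None, :] / w
    tau = fy**2 + fx**2

    # Blur in the frequency domain: each DCT coefficient shrinks at its own speed.
    coeffs = dctn(x0, norm="ortho")
    blurred = idctn(np.exp(-50.0 * tau * t) * coeffs, norm="ortho")

    # Add Gaussian noise, as in standard diffusion forward processes.
    return blurred + sigma * np.sqrt(t) * np.random.randn(h, w)
```

The reverse process would then learn to invert both operations, progressively deblurring and denoising, so that coarse (low-frequency) structure emerges before fine detail.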
Author Information
Sangyun Lee (Soongsil University)
Hyungjin Chung (KAIST)
Research intern @ LANL; Ph.D. student @ KAIST. Research interests: deep generative models, diffusion models, inverse problems.
Jaehyeon Kim (LG AI Research)
Jong Chul Ye (KAIST AI)
More from the Same Authors
- 2023 Poster: Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models » Geon Yeong Park · Jeongsol Kim · Beomsu Kim · Sang Wan Lee · Jong Chul Ye
- 2023 Poster: Direct Diffusion Bridge using Data Consistency for Inverse Problems » Hyungjin Chung · Jeongsol Kim · Jong Chul Ye
- 2023 Workshop: NeurIPS 2023 Workshop on Diffusion Models » Bahjat Kawar · Valentin De Bortoli · Charlotte Bunne · James Thornton · Jiaming Song · Jong Chul Ye · Chenlin Meng
- 2022 Poster: Energy-Based Contrastive Learning of Visual Representations » Beomsu Kim · Jong Chul Ye
- 2022 Poster: Improving Diffusion Models for Inverse Problems using Manifold Constraints » Hyungjin Chung · Byeongsu Sim · Dohoon Ryu · Jong Chul Ye
- 2021 Poster: Noise2Score: Tweedie's Approach to Self-Supervised Image Denoising without Clean Images » Kwanyoung Kim · Jong Chul Ye
- 2021 Poster: Federated Split Task-Agnostic Vision Transformer for COVID-19 CXR Diagnosis » Sangjoon Park · Gwanghyun Kim · Jeongsol Kim · Boah Kim · Jong Chul Ye
- 2021 Poster: Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention » Byung-Hoon Kim · Jong Chul Ye · Jae-Jin Kim
- 2020 Poster: Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search » Jaehyeon Kim · Sungwon Kim · Jungil Kong · Sungroh Yoon
- 2020 Oral: Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search » Jaehyeon Kim · Sungwon Kim · Jungil Kong · Sungroh Yoon