Diffusion-based Deep Generative Models (DDGMs) offer state-of-the-art performance in generative modeling. Their main strength comes from their unique setup, in which a model (the backward diffusion process) is trained to reverse the forward diffusion process that gradually adds noise to the input signal. Although DDGMs are well studied, it is still unclear how the small amount of noise is transformed during the backward diffusion process. Here, we focus on analyzing this problem to gain more insight into the behavior of DDGMs and their denoising and generative capabilities. We observe a fluid transition point that changes the functionality of the backward diffusion process from generating a (corrupted) image from noise to denoising the corrupted image into the final sample. Based on this observation, we propose dividing a DDGM into two parts: a denoiser and a generator. The denoiser can be parameterized by a denoising auto-encoder, while the generator is a diffusion-based model with its own set of parameters. We experimentally validate this proposition, showing its pros and cons.
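To make the described split concrete, below is a minimal sketch (not the authors' implementation) of a DDPM-style setup in which the backward process is divided at a switch step `t_switch`: a diffusion-based generator runs the learned reverse steps from pure noise down to `t_switch`, and a separate denoising auto-encoder maps the resulting (still corrupted) image to the final sample. The noise schedule, toy networks, and the value of `t_switch` are placeholder assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code) of a DDPM-style model whose backward
# process is split into a diffusion-based generator and a denoising auto-encoder.
# The noise schedule, toy networks, and the switch step `t_switch` are
# illustrative assumptions.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear schedule (assumption)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def forward_diffuse(x0, t):
    """q(x_t | x_0): gradually corrupt the input with Gaussian noise up to step t."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t]
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise, noise

class EpsNet(nn.Module):
    """Toy noise-prediction network used by the generator part."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x, t):
        t_emb = torch.full((x.shape[0], 1), float(t) / T)
        return self.net(torch.cat([x, t_emb], dim=1))

class Denoiser(nn.Module):
    """Toy denoising auto-encoder handling the final, low-noise part."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def sample(eps_net, denoiser, n, dim, t_switch=100):
    """Backward process in two parts: reverse diffusion from step T down to
    t_switch yields a (still corrupted) image; the DAE then denoises it."""
    x = torch.randn(n, dim)                               # start from pure noise
    for t in reversed(range(t_switch, T)):                # generator: learned reverse steps
        eps = eps_net(x, t)
        mean = (x - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > t_switch else mean
    return denoiser(x)                                    # denoiser: corrupted image -> sample

# Example call (untrained networks, so outputs are meaningless; shown for shapes only):
# samples = sample(EpsNet(784), Denoiser(784), n=16, dim=784)
```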
Author Information
Kamil Deja (Warsaw University of Technology)
Anna Kuzina (VU Amsterdam)
Tomasz Trzcinski (Warsaw University of Technology, Tooploox, IDEAS, Jagiellonian University)
Jakub Tomczak (Vrije Universiteit Amsterdam)
More from the Same Authors
- 2021 : Semi-supervised Multiple Instance Learning using Variational Auto-Encoders
  Ali Nihat Uzunalioglu · Tameem Adel · Jakub M. Tomczak
- 2022 : Diversity Balancing Generative Adversarial Networks for fast simulation of the Zero Degree Calorimeter in the ALICE experiment at CERN
  Jan Dubiński · Kamil Deja · Sandro Wenzel · Przemysław Rokita · Tomasz Trzcinski
- 2022 : Kendall Shape-VAE : Learning Shapes in a Generative Framework
  Sharvaree Vadgama · Jakub Tomczak · Erik Bekkers
- 2023 Poster: The Tunnel Effect: Building Data Representations in Deep Neural Networks
  Wojciech Masarczyk · Mateusz Ostaszewski · Ehsan Imani · Razvan Pascanu · Piotr Miłoś · Tomasz Trzcinski
- 2023 Poster: A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference
  Emile van Krieken · Thiviyan Thanapalasingam · Jakub Tomczak · Frank van Harmelen · Annette Ten Teije
- 2023 Poster: Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders
  Jan Dubiński · Stanisław Pawlak · Franziska Boenisch · Tomasz Trzcinski · Adam Dziedzic
- 2022 Spotlight: Alleviating Adversarial Attacks on Variational Autoencoders with MCMC
  Anna Kuzina · Max Welling · Jakub Tomczak
- 2022 Poster: Alleviating Adversarial Attacks on Variational Autoencoders with MCMC
  Anna Kuzina · Max Welling · Jakub Tomczak
- 2022 Poster: FlowHMM: Flow-based continuous hidden Markov models
  Pawel Lorek · Rafal Nowak · Tomasz Trzcinski · Maciej Zieba
- 2021 Poster: Invertible DenseNets with Concatenated LipSwish
  Yura Perugachi-Diaz · Jakub Tomczak · Sandjai Bhulai
- 2021 Poster: Storchastic: A Framework for General Stochastic Automatic Differentiation
  Emile van Krieken · Jakub Tomczak · Annette Ten Teije
- 2021 Poster: BooVAE: Boosting Approach for Continual Learning of VAE
  Evgenii Egorov · Anna Kuzina · Evgeny Burnaev
- 2020 Poster: The Convolution Exponential and Generalized Sylvester Flows
  Emiel Hoogeboom · Victor Garcia Satorras · Jakub Tomczak · Max Welling
- 2019 Poster: Combinatorial Bayesian Optimization using the Graph Cartesian Product
  Changyong Oh · Jakub Tomczak · Stratis Gavves · Max Welling