NVAE: A Deep Hierarchical Variational Autoencoder
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256x256 pixels. The source code is publicly available.
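The residual parameterization of Normal distributions mentioned above defines each approximate-posterior group relative to the corresponding prior group, so the KL term at every level of the hierarchy depends only on the relative shift and scale rather than on the absolute prior parameters. The sketch below illustrates that idea; it is a minimal PyTorch-style sketch under assumed names and tensor layout (batch-first latent tensors), not the released NVAE implementation.

```python
import torch

def residual_normal_kl(mu_p, log_sig_p, delta_mu, delta_log_sig):
    """KL(q || p) where p = N(mu_p, sig_p^2) is the prior for a latent group and
    q = N(mu_p + delta_mu, (sig_p * delta_sig)^2) is the residual posterior.
    Per dimension: 0.5 * (delta_mu^2 / sig_p^2 + delta_sig^2 - log delta_sig^2 - 1),
    i.e. the KL depends only on the relative shift and the scale ratio."""
    sig_p = torch.exp(log_sig_p)
    delta_sig_sq = torch.exp(2.0 * delta_log_sig)
    kl = 0.5 * ((delta_mu / sig_p) ** 2 + delta_sig_sq - 2.0 * delta_log_sig - 1.0)
    return kl.flatten(start_dim=1).sum(dim=1)  # sum over latent dims, keep batch dim

def sample_residual_posterior(mu_p, log_sig_p, delta_mu, delta_log_sig):
    """Reparameterized sample z ~ q(z_i | z_<i, x) from the residual posterior."""
    mu_q = mu_p + delta_mu
    sig_q = torch.exp(log_sig_p + delta_log_sig)
    return mu_q + sig_q * torch.randn_like(mu_q)
```

Because the posterior tracks the prior by construction, small predicted residuals keep the KL small even as the prior itself is updated during training, which is the stabilizing effect the abstract refers to alongside spectral regularization.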
Author Information
Arash Vahdat (NVIDIA Research)
Jan Kautz (NVIDIA)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: NVAE: A Deep Hierarchical Variational Autoencoder
  Fri. Dec 11th, 03:20 -- 03:30 AM, Room: Orals & Spotlights: Neuroscience/Probabilistic
More from the Same Authors
- 2021: Physics Informed RNN-DCT Networks for Time-Dependent Partial Differential Equations
  Benjamin Wu · Oliver Hennigh · Jan Kautz · Sanjay Choudhry · Wonmin Byeon
- 2022: Dynamic-backbone protein-ligand structure prediction with multiscale generative diffusion models
  Zhuoran Qiao · Weili Nie · Arash Vahdat · Thomas Miller · Anima Anandkumar
- 2022: Fast Sampling of Diffusion Models via Operator Learning
  Hongkai Zheng · Weili Nie · Arash Vahdat · Kamyar Azizzadenesheli · Anima Anandkumar
- 2023 Poster: Generalizable One-shot Neural Head Avatar
  Xueting Li · Shalini De Mello · Sifei Liu · Koki Nagano · Umar Iqbal · Jan Kautz
- 2023 Poster: Convolutional State Space Models for Long-Range Spatiotemporal Modeling
  Jimmy Smith · Shalini De Mello · Jan Kautz · Scott Linderman · Wonmin Byeon
- 2022 Workshop: NeurIPS 2022 Workshop on Score-Based Methods
  Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat
- 2022 Poster: GENIE: Higher-Order Denoising Diffusion Solvers
  Tim Dockhorn · Arash Vahdat · Karsten Kreis
- 2022 Poster: LION: Latent Point Diffusion Models for 3D Shape Generation
  xiaohui zeng · Arash Vahdat · Francis Williams · Zan Gojcic · Or Litany · Sanja Fidler · Karsten Kreis
- 2021 Poster: A Contrastive Learning Approach for Training Variational Autoencoder Priors
  Jyoti Aneja · Alex Schwing · Jan Kautz · Arash Vahdat
- 2021 Poster: Score-based Generative Modeling in Latent Space
  Arash Vahdat · Karsten Kreis · Jan Kautz
- 2021 Poster: Controllable and Compositional Generation with Latent-Space Energy-Based Models
  Weili Nie · Arash Vahdat · Anima Anandkumar
- 2021 Poster: Don’t Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence
  Tianshi Cao · Alex Bie · Arash Vahdat · Sanja Fidler · Karsten Kreis
- 2021 Poster: Coupled Segmentation and Edge Learning via Dynamic Graph Propagation
  Zhiding Yu · Rui Huang · Wonmin Byeon · Sifei Liu · Guilin Liu · Thomas Breuel · Anima Anandkumar · Jan Kautz
- 2020 Poster: Online Adaptation for Consistent Mesh Reconstruction in the Wild
  Xueting Li · Sifei Liu · Shalini De Mello · Kihwan Kim · Xiaolong Wang · Ming-Hsuan Yang · Jan Kautz
- 2020 Poster: Convolutional Tensor-Train LSTM for Spatio-Temporal Learning
  Jiahao Su · Wonmin Byeon · Jean Kossaifi · Furong Huang · Jan Kautz · Anima Anandkumar
- 2020 Poster: On the distance between two neural networks and the stability of learning
  Jeremy Bernstein · Arash Vahdat · Yisong Yue · Ming-Yu Liu
- 2019 Poster: Few-shot Video-to-Video Synthesis
  Ting-Chun Wang · Ming-Yu Liu · Andrew Tao · Guilin Liu · Bryan Catanzaro · Jan Kautz
- 2019 Poster: Joint-task Self-supervised Learning for Temporal Correspondence
  Xueting Li · Sifei Liu · Shalini De Mello · Xiaolong Wang · Jan Kautz · Ming-Hsuan Yang
- 2019 Poster: Dancing to Music
  Hsin-Ying Lee · Xiaodong Yang · Ming-Yu Liu · Ting-Chun Wang · Yu-Ding Lu · Ming-Hsuan Yang · Jan Kautz
- 2018: Jan Kautz
  Jan Kautz
- 2018 Poster: Context-aware Synthesis and Placement of Object Instances
  Donghoon Lee · Sifei Liu · Jinwei Gu · Ming-Yu Liu · Ming-Hsuan Yang · Jan Kautz
- 2018 Poster: Video-to-Video Synthesis
  Ting-Chun Wang · Ming-Yu Liu · Jun-Yan Zhu · Guilin Liu · Andrew Tao · Jan Kautz · Bryan Catanzaro
- 2018 Poster: DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors
  Arash Vahdat · Evgeny Andriyash · William Macready
- 2017: Poster Session (encompasses coffee break)
  Beidi Chen · Borja Balle · Daniel Lee · iuri frosio · Jitendra Malik · Jan Kautz · Ke Li · Masashi Sugiyama · Miguel A. Carreira-Perpinan · Ramin Raziperchikolaei · Theja Tulabandhula · Yung-Kyun Noh · Adams Wei Yu
- 2017 Poster: Unsupervised Image-to-Image Translation Networks
  Ming-Yu Liu · Thomas Breuel · Jan Kautz
- 2017 Spotlight: Unsupervised Image-to-Image Translation Networks
  Ming-Yu Liu · Thomas Breuel · Jan Kautz
- 2017 Poster: Learning Affinity via Spatial Propagation Networks
  Sifei Liu · Shalini De Mello · Jinwei Gu · Guangyu Zhong · Ming-Hsuan Yang · Jan Kautz
- 2017 Poster: Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks
  Arash Vahdat