Can Push-forward Generative Models Fit Multimodal Distributions?
Antoine Salmona · Valentin De Bortoli · Julie Delon · Agnes Desolneux

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #503

Many generative models synthesize data by transforming a standard Gaussian random variable using a deterministic neural network. Among these models are the Variational Autoencoders and the Generative Adversarial Networks. In this work, we call them "push-forward" models and study their expressivity. We formally demonstrate that the Lipschitz constant of these generative networks has to be large in order to fit multimodal distributions. More precisely, we show that the total variation distance and the Kullback-Leibler divergence between the generated and the data distribution are bounded from below by a constant depending on the mode separation and the Lipschitz constant. Since constraining the Lipschitz constants of neural networks is a common way to stabilize generative models, there is a provable trade-off between the ability of push-forward models to approximate multimodal distributions and the stability of their training. We validate our findings on one-dimensional and image datasets and empirically show that the recently introduced diffusion models do not suffer from this limitation.
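The core phenomenon can be illustrated numerically: an L-Lipschitz map cannot move the mass that a standard Gaussian places near the origin out of the gap between two well-separated modes unless L is large. The sketch below uses a hypothetical clipped-linear map (not the paper's construction) pushing N(0, 1) toward modes at ±5 and measures the generated mass left in the gap for increasing Lipschitz constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def pushforward(z, L, a=5.0):
    # Illustrative L-Lipschitz map (slope at most L everywhere):
    # stretches the latent Gaussian toward two modes at +/- a.
    return np.clip(L * z, -a, a)

z = rng.standard_normal(100_000)  # latent samples from N(0, 1)
for L in (1.0, 5.0, 50.0):
    x = pushforward(z, L)
    # Fraction of generated samples stranded in the gap between the modes.
    gap_mass = np.mean(np.abs(x) < 4.0)
    print(f"L = {L:5.1f}   mass in gap: {gap_mass:.3f}")
```

As L grows, the gap mass shrinks, consistent with the lower bound: a small Lipschitz constant forces non-negligible generated mass between the modes, which keeps the TV distance and KL divergence to the bimodal target bounded away from zero.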

Author Information

Antoine Salmona (Ecole Normale Superieure Paris Saclay)

Third-year Ph.D. student at Ecole Normale Superieure Paris Saclay, working on generative modeling theory and optimal transport.

Valentin De Bortoli (ENS Ulm, CNRS)
Julie Delon (Université Paris Cité)
Agnes Desolneux (CNRS)
