Stabilizing Training of Generative Adversarial Networks through Regularization
Kevin Roth · Aurelien Lucchi · Sebastian Nowozin · Thomas Hofmann

Mon Dec 04 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #107

Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality, but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning.
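The abstract describes a low-cost regularizer that penalizes the discriminator's input gradients to stabilize training. The sketch below illustrates the general idea on a deliberately simple linear discriminator, where the input gradient is available in closed form; the function name, the linear-discriminator assumption, and the exact weighting of the penalty terms are illustrative choices, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_discriminator_loss(w, b, x_real, x_fake, gamma=2.0):
    """Toy GAN discriminator loss with a gradient-norm regularizer.

    Assumes a hypothetical linear discriminator D(x) = w @ x + b,
    so the input gradient grad_x D(x) is simply w.
    """
    d_real = x_real @ w + b
    d_fake = x_fake @ w + b
    # Standard (non-saturating) GAN discriminator loss.
    eps = 1e-12  # numerical floor for the logs
    loss = -np.mean(np.log(sigmoid(d_real) + eps)) \
           - np.mean(np.log(1.0 - sigmoid(d_fake) + eps))
    # Gradient-norm penalty in the spirit of the regularizer:
    # weight ||grad_x D||^2 on both the data and model samples.
    grad_norm_sq = float(np.sum(w * w))  # exact for a linear D
    omega = np.mean((1.0 - sigmoid(d_real)) ** 2) * grad_norm_sq \
            + np.mean(sigmoid(d_fake) ** 2) * grad_norm_sq
    return loss + (gamma / 2.0) * omega
```

Because the penalty `omega` is non-negative, the regularized loss never falls below the unregularized one; in a real setting `grad_norm_sq` would be computed per-sample via automatic differentiation rather than analytically.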

Author Information

Kevin Roth (ETH)
Aurelien Lucchi (ETH Zurich)
Sebastian Nowozin (Microsoft Research Cambridge)
Thomas Hofmann (ETH Zurich)