Poster
Bayesian GAN
Yunus Saatci · Andrew Wilson

Wed Dec 06 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #112

Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and other data that are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs. Within this framework, we use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without standard interventions such as feature matching or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.
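The marginalization the abstract describes rests on the stochastic gradient Hamiltonian Monte Carlo update of Chen et al. (2014), which adds a friction term and injected noise to a momentum-based stochastic gradient step. The sketch below, in plain NumPy, illustrates that update rule on a flattened weight vector; the function name sghmc_step, the hyperparameter values, and the toy log-posterior gradient are illustrative assumptions, not the authors' implementation.

import numpy as np

def sghmc_step(theta, v, grad_log_post, step_size=1e-3, friction=0.1, rng=None):
    # One stochastic gradient Hamiltonian Monte Carlo update (Chen et al., 2014):
    #   v     <- (1 - friction) * v + step_size * grad_log_post + noise
    #   theta <- theta + v
    # The injected noise has variance 2 * friction * step_size per coordinate,
    # compensating for the noise in the minibatch gradient estimate.
    rng = rng or np.random.default_rng()
    noise = np.sqrt(2.0 * friction * step_size) * rng.standard_normal(theta.shape)
    v = (1.0 - friction) * v + step_size * grad_log_post + noise
    return theta + v, v

# Illustrative usage: draw approximate posterior samples of a weight vector.
# A standard-normal log posterior stands in for the generator's log posterior,
# whose gradient would normally come from backpropagation on a minibatch.
theta = np.zeros(1000)            # stand-in for flattened generator weights
v = np.zeros_like(theta)
posterior_samples = []
for t in range(2000):
    grad_log_post = -theta        # gradient of log N(theta; 0, I)
    theta, v = sghmc_step(theta, v, grad_log_post)
    if t >= 1000 and t % 100 == 0:
        posterior_samples.append(theta.copy())  # thinned samples after burn-in

In the Bayesian GAN setting, several such weight samples are retained for both the generator and the discriminator, and generated samples or semi-supervised predictions are averaged over them rather than taken from a single point estimate.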

Author Information

Yunus Saatci (Uber AI Labs)
Andrew Wilson (Cornell University)
