

Workshop
Thu Dec 08 11:00 PM -- 09:30 AM (PST) @ Area 3
Adversarial Training
David Lopez-Paz · Leon Bottou · Alec Radford

In adversarial training, a set of machines learns together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs, Goodfellow et al., 2014) a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning (Radford et al., 2016). From a conceptual perspective, adversarial training is fascinating because it bypasses the need for loss functions in learning, and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines.

In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a hands-on demo, a panel discussion, and contributed spotlights and posters.
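The generator–discriminator game described above can be sketched end-to-end on a toy problem. Everything below is an illustrative assumption rather than code from the workshop: real data is a 1-D Gaussian, the generator has a single parameter (a learned mean), the discriminator is logistic, and gradients are written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Real data: samples from N(4, 1). Generator G(z) = theta + z tries to match it.
# Discriminator D(x) = sigmoid(w*x + b) tries to tell real from generated.
theta = 0.0          # generator parameter (learned mean of the fake distribution)
w, b = 0.1, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = theta + z

    # Discriminator step: minimize -log D(x_real) - log(1 - D(x_fake)).
    s_r = sigmoid(w * x_real + b)
    s_f = sigmoid(w * x_fake + b)
    grad_w = -np.mean((1 - s_r) * x_real) + np.mean(s_f * x_fake)
    grad_b = -np.mean(1 - s_r) + np.mean(s_f)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step (non-saturating loss): minimize -log D(G(z)).
    s_f = sigmoid(w * (theta + z) + b)
    grad_theta = -np.mean((1 - s_f) * w)
    theta -= lr * grad_theta

print(f"learned mean: {theta:.2f} (target 4.0)")
```

The two updates pull in opposite directions: the discriminator sharpens its real/fake boundary, which in turn tells the generator which way to move its samples, until the learned mean settles near the data mean and no fixed loss function was ever specified for the generator's output directly.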

Among the research topics to be addressed by the workshop are:

* Novel theoretical insights on adversarial training
* New methods and stability improvements for adversarial optimization
* Adversarial training as a proxy to unsupervised learning of representations
* Regularization and attack schemes based on adversarial perturbations
* Adversarial model evaluation
* Adversarial inference models
* Novel applications of adversarial training
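To make one of these topics concrete — regularization and attack schemes based on adversarial perturbations — here is a minimal numpy sketch in the spirit of the fast gradient sign method (Goodfellow et al., 2015). The classifier weights, the input, and the step size eps are all made-up illustrative values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed linear classifier p(y=1|x) = sigmoid(w.x + b), assumed already trained.
w = np.array([2.0, -3.0, 1.5, 0.5])
b = 0.1

x = np.array([0.5, -0.2, 0.3, 0.8])   # input confidently classified as class 1
y = 1.0

p = sigmoid(w @ x + b)                 # prediction on the clean input

# For the logistic loss, the gradient with respect to the input is (p - y) * w.
grad_x = (p - y) * w

# Fast-gradient-sign perturbation: a small step that maximally increases the loss
# under an L-infinity budget of eps per coordinate.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)         # prediction on the perturbed input
print(p, p_adv)
```

Although each coordinate moves by at most 0.6, the signed step is aligned against the weight vector, so the logit drops by eps times the L1 norm of w and the predicted class flips — the kind of fragility that adversarial regularization schemes aim to remove.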

Want to learn more? Get started by generating your own MNIST digits using a GAN in 100 lines of Torch: https://goo.gl/Z2leZF

Set up posters (Setup)
Welcome (Talk)
Introduction to Generative Adversarial Networks (Talk)
How to train a GAN? (Talk)
Learning features to compare distributions (Talk)
Learning features to distinguish distributions (Talk)
Training Generative Neural Samplers using Variational Divergence (Talk)
Lunch break (Break)
Adversarially Learned Inference (ALI) and BiGANs (Talk)
Energy-Based Adversarial Training and Video Prediction (Talk)
Discussion panel
Coffee break (Break)
Spotlight presentations (Talk)
Poster session
Additional poster and open discussions (Poster session)