In adversarial training, a set of machines learns together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs; Goodfellow et al., 2014) a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning (Radford et al., 2016). From a conceptual perspective, adversarial training is fascinating because it bypasses the need for explicit loss functions in learning, and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines. In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a hands-on demo, a panel discussion, and contributed spotlights and posters.
Among the research topics to be addressed by the workshop are:
* Novel theoretical insights on adversarial training
* New methods and stability improvements for adversarial optimization
* Adversarial training as a proxy to unsupervised learning of representations
* Regularization and attack schemes based on adversarial perturbations
* Adversarial model evaluation
* Adversarial inference models
* Novel applications of adversarial training
Want to learn more? Get started by generating your own MNIST digits using a GAN in 100 lines of Torch: https://goo.gl/Z2leZF
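Before diving into the Torch code linked above, the alternating updates at the heart of GAN training can be sketched in plain NumPy on a toy 1-D problem (this sketch is illustrative and not taken from the linked code; the toy setup, parameter names, and learning rates are all assumptions): a linear generator learns to imitate samples from N(4, 0.5), while a logistic discriminator tries to tell real from fake, with the gradients written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

# Real data: 1-D Gaussian N(4, 0.5). Generator g(z) = a*z + c with z ~ N(0, 1).
# Discriminator D(x) = sigmoid(w*x + b). All parameters are scalars.
a, c = 1.0, 0.0          # generator parameters
w, b = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    x = rng.normal(4.0, 0.5, batch)          # real samples
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + c                            # fake samples

    # Discriminator step: gradient ascent on log D(x) + log(1 - D(g)).
    dr, df = sigmoid(w * x + b), sigmoid(w * g + b)
    w += lr * np.mean((1 - dr) * x - df * g)
    b += lr * np.mean((1 - dr) - df)

    # Generator step: gradient ascent on log D(g) (the non-saturating loss).
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + c
    df = sigmoid(w * g + b)
    a += lr * np.mean((1 - df) * w * z)
    c += lr * np.mean((1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 10000) + c
print(round(float(np.mean(fake)), 2))  # sample mean of the generated data (real mean is 4.0)
```

On this toy problem the generated mean tends to drift toward the real mean, although plain alternating gradient steps can oscillate around the equilibrium rather than converge exactly, which previews the stability questions discussed throughout the workshop.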
Fri 12:00 a.m. – 12:15 a.m.

Set up posters
(Setup)

Fri 12:15 a.m. – 12:30 a.m.

Welcome
(Talk)
Just a quick introduction to the first NIPS workshop on Adversarial Training. 
David Lopez-Paz · Alec Radford · Leon Bottou
Fri 12:30 a.m. – 1:00 a.m.

Introduction to Generative Adversarial Networks
(Talk)
Generative adversarial networks are deep models that learn to generate samples drawn from the same distribution as the training data. As with many deep generative models, the log-likelihood for a GAN is intractable. Unlike most other models, GANs do not require Monte Carlo or variational methods to overcome this intractability. Instead, GANs are trained by seeking a Nash equilibrium in a game played between a discriminator network, which attempts to distinguish real data from model samples, and a generator network, which attempts to fool the discriminator. Stable algorithms for finding Nash equilibria remain an important research direction. Like many other models, GANs can also be applied to semi-supervised learning.
Ian Goodfellow
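The game described in the abstract is the minimax objective of the original GAN paper (Goodfellow et al., 2014), which can be written as:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Here $D$ is the discriminator, $G$ the generator, $p_{\text{data}}$ the data distribution, and $p_z$ the prior over the generator's input noise; a Nash equilibrium of this game is the training target the talk refers to.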
Fri 1:00 a.m. – 1:30 a.m.

How to train a GAN?
(Talk)

Soumith Chintala
Fri 2:00 a.m. – 2:30 a.m.

Learning features to compare distributions
(Talk)
An important component of GANs is the discriminator, which tells apart samples from the generator and samples from a reference set. Discriminators implement empirical approximations to various divergence measures between probability densities (originally the Jensen-Shannon divergence, and more recently other f-divergences and integral probability metrics). If we think about this problem in the setting of hypothesis testing, a good discriminator can tell generator samples from reference samples with high probability: in other words, it maximizes the test power. A reasonable goal then becomes to learn a discriminator that directly maximizes test power (we will briefly look at relations between test power and classifier performance). I will demonstrate ways of training a discriminator with maximum test power using two divergence measures: the maximum mean discrepancy (MMD), and differences of learned smooth features (the ME test, NIPS 2016). In both cases, the key point is that variance matters: it is not enough to have a large empirical divergence; we also need to have high confidence in the value of our divergence. Using an optimized MMD discriminator, we can detect subtle differences between the distribution of GAN outputs and real handwritten digits which humans are unable to find (for instance, small imbalances in the proportions of certain digits, or minor distortions that are implausible in normal handwriting).
Arthur Gretton
Fri 2:30 a.m. – 3:00 a.m.

Training Generative Neural Samplers using Variational Divergence
(Talk)
Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method allows such models to be trained through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing, more general variational divergence estimation approach, and that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence function for training complexity and the quality of the obtained generative models.
Sebastian Nowozin
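As a sketch of the variational divergence framework the abstract describes, the f-GAN objective replaces the GAN value function with a variational lower bound on an arbitrary f-divergence (notation follows the f-GAN paper; $f^{*}$ denotes the convex conjugate of $f$):

```latex
F(\theta, \omega) =
  \mathbb{E}_{x \sim P}\!\left[T_\omega(x)\right]
- \mathbb{E}_{x \sim Q_\theta}\!\left[f^{*}\!\bigl(T_\omega(x)\bigr)\right]
```

Here $P$ is the data distribution, $Q_\theta$ the generative neural sampler, and $T_\omega$ the auxiliary discriminative network; the sampler minimizes over $\theta$ while the discriminator maximizes over $\omega$, and particular choices of $f$ recover the original GAN objective as a special case.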
Fri 3:00 a.m. – 5:00 a.m.

Lunch break
(Break)

Fri 5:00 a.m. – 5:30 a.m.

Adversarially Learned Inference (ALI) and BiGANs
(Talk)
We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space, while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network that is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through inspection of model samples and reconstructions, and confirm the usefulness of the learned representations by achieving performance competitive with other recent approaches on the semi-supervised SVHN task.
Aaron Courville
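The adversarial game over joint latent/data-space samples described in the abstract can be summarized (following the ALI paper's notation, with encoder $G_z$ and decoder $G_x$) as:

```latex
\min_{G} \max_{D} \;
  \mathbb{E}_{x \sim q(x)}\!\left[\log D\bigl(x, G_z(x)\bigr)\right]
+ \mathbb{E}_{z \sim p(z)}\!\left[\log\Bigl(1 - D\bigl(G_x(z), z\bigr)\Bigr)\right]
```

Here $q(x)$ is the empirical data distribution and $p(z)$ the latent prior; the discriminator $D$ receives $(x, z)$ pairs from both directions of the model, which is what forces inference and generation to become mutually coherent.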
Fri 5:30 a.m. – 6:00 a.m.

Energy-Based Adversarial Training and Video Prediction
(Talk)

Yann LeCun
Fri 6:00 a.m. – 7:00 a.m.

Discussion panel
Submit your questions to https://www.reddit.com/r/MachineLearning/comments/5fm66i/dnips2016askaworkshopanything_adversarial/ 
Ian Goodfellow · Soumith Chintala · Arthur Gretton · Sebastian Nowozin · Aaron Courville · Yann LeCun · Emily Denton
Fri 7:00 a.m. – 7:30 a.m.

Coffee break
(Break)

Fri 7:30 a.m. – 9:00 a.m.

Spotlight presentations
(Talk)
* David Pfau and Oriol Vinyals. Connecting Generative Adversarial Networks and Actor-Critic Methods
* Shakir Mohamed and Balaji Lakshminarayanan. Learning in Implicit Generative Models
* Guim Perarnau, Joost Van De Weijer, Bogdan Raducanu and Jose M. Álvarez. Invertible Conditional GANs for image editing
* Augustus Odena, Christopher Olah and Jonathon Shlens. Conditional Image Synthesis with Auxiliary Classifier GANs
* Luke Metz, Ben Poole, David Pfau and Jascha Sohl-Dickstein. Unrolled Generative Adversarial Networks
* Chelsea Finn, Paul Christiano, Pieter Abbeel and Sergey Levine. A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models
* Pauline Luc, Camille Couprie, Soumith Chintala and Jakob Verbeek. Semantic Segmentation using Adversarial Networks
* Tarik Arici and Asli Celikyilmaz. Associative Adversarial Networks
* Nina Narodytska and Shiva Kasiviswanathan. Simple Black-Box Adversarial Perturbations for Deep Networks
* Pedro Tabacof, Julia Tavares and Eduardo Valle. Adversarial Images for Variational Autoencoders
* Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov and Roger Grosse. On the Quantitative Analysis of Decoder-Based Generative Models
* Takeru Miyato, Andrew Dai and Ian Goodfellow. Adversarial Training Methods for Semi-Supervised Text Classification
Fri 9:00 a.m. – 1:00 p.m.

Poster session
The posters will be up from the beginning of the day and accessible during all breaks. From this point on, however, the room is dedicated to their exposition and discussion. Browse the list of papers at https://sites.google.com/site/nips2016adversarial/home/acceptedpapers
Author Information
David Lopez-Paz (Facebook AI Research)
Leon Bottou (Facebook AI Research)
Léon Bottou received a Diplôme from l'École Polytechnique, Paris in 1987, a Magistère en Mathématiques Fondamentales et Appliquées et Informatiques from the École Normale Supérieure, Paris in 1988, and a PhD in Computer Science from Université de Paris-Sud in 1991. He was with AT&T Bell Labs from 1991 to 1992 and AT&T Labs from 1995 to 2002. Between 1992 and 1995 he was chairman of Neuristique in Paris, a small company pioneering machine learning for data-mining applications. He has been with NEC Labs America in Princeton since 2002. Léon's primary research interest is machine learning; his contributions to this field address theory, algorithms, and large-scale applications. His secondary research interest is data compression and coding; his best-known contribution in this field is the DjVu document compression technology (http://www.djvu.org). Léon has published over 70 papers and serves on the boards of JMLR and IEEE TPAMI. He also serves on the scientific advisory board of KXEN Inc.
Alec Radford (OpenAI)