Generative adversarial networks (GANs) are a recently introduced class of generative models, designed to produce realistic samples. This tutorial is intended to be accessible to an audience who has no experience with GANs, and should prepare the audience to make original research contributions applying GANs or improving the core GAN algorithms. GANs are universal approximators of probability distributions. Such models generally have an intractable log-likelihood gradient, and require approximations such as Markov chain Monte Carlo or variational lower bounds to make learning feasible. GANs avoid both of these classes of approximations. The learning process consists of a game between two adversaries: a generator network that attempts to produce realistic samples, and a discriminator network that attempts to identify whether samples originated from the training data or from the generative model. At the Nash equilibrium of this game, the generator network reproduces the data distribution exactly, and the discriminator network cannot distinguish the model's samples from training data. Both networks can be trained using stochastic gradient descent with exact gradients computed by backpropagation.
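The two-player game described above can be sketched on a one-dimensional toy problem. The following is an illustrative sketch, not the tutorial's own code: the data distribution is N(4, 1), a hypothetical generator G(z) = z + b learns only a shift b, the discriminator is logistic regression D(x) = sigmoid(w*x + c), and the exact gradients of both objectives are written out by hand (backpropagation through these tiny models).

```python
import numpy as np

# Toy GAN sketch (assumed setup, not from the tutorial): real data ~ N(4, 1),
# generator G(z) = z + b, discriminator D(x) = sigmoid(w*x + c).
rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

b = 0.0           # generator shift; starts far from the data mean of 4
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = z + b

    # Discriminator: gradient ascent on E[log D(real)] + E[log(1 - D(fake))].
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on the non-saturating objective E[log D(fake)].
    d_fake = sigmoid(w * x_fake + c)
    b += lr * np.mean((1 - d_fake) * w)

print(f"learned shift b = {b:.2f} (data mean is 4.0)")
```

At equilibrium the generator's shift lands near the data mean and the discriminator's weight decays toward zero, so D(x) approaches 1/2 everywhere, mirroring the Nash equilibrium described in the abstract. A full GAN replaces both hand-derived linear models with deep networks trained the same way.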
Topics include:
- An introduction to the basics of GANs.
- A review of work applying GANs to large image generation.
- Extending the GAN framework to approximate maximum likelihood, rather than minimizing the Jensen-Shannon divergence.
- Improved model architectures that yield better learning in GANs.
- Semi-supervised learning with GANs.
- Research frontiers, including guaranteeing convergence of the GAN game.
- Other applications of adversarial learning, such as domain adaptation and privacy.
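As background for the Jensen-Shannon and maximum-likelihood topics above, the standard GAN game is the well-known minimax problem over the value function (stated here for reference):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

For the optimal discriminator $D^*(x) = p_{\mathrm{data}}(x) / (p_{\mathrm{data}}(x) + p_g(x))$, the generator's objective reduces to $C(G) = -\log 4 + 2\,\mathrm{JSD}(p_{\mathrm{data}} \,\|\, p_g)$, which is why standard GAN training minimizes the Jensen-Shannon divergence; the maximum-likelihood extension discussed in the tutorial instead targets a KL divergence.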
Learning objectives:
- To explain the fundamentals of how GANs work to someone who has not heard of them previously.
- To bring the audience up to date on image generation applications of GANs.
- To prepare the audience to make original contributions to generative modeling research.
Target audience: The target audience is people who are interested in generative modeling. Both newcomers to GANs and those with prior experience should find something worthwhile, though the first part of the tutorial will be less interesting to people who already know GANs.
Ian Goodfellow (OpenAI)
Ian Goodfellow is a research scientist at OpenAI. He obtained a B.Sc. and M.Sc. from Stanford University in 2009. He worked on the Stanford AI Robot and interned at Willow Garage before beginning to study deep learning under the direction of Andrew Ng. He completed a PhD co-supervised by Yoshua Bengio and Aaron Courville in 2014. He invented generative adversarial networks shortly after completing his thesis and shortly before joining Google Brain. At Google, he co-developed an end-to-end deep learning system for recognizing addresses in Street View, studied machine learning security and privacy, and co-authored the MIT Press textbook, Deep Learning. In 2016 he left Google to join OpenAI, a non-profit whose mission is to build safe AI for the benefit of everyone.