

Talk in Workshop: Adversarial Training

Introduction to Generative Adversarial Networks

Ian Goodfellow

2016 Talk

Abstract:

Generative adversarial networks are deep models that learn to generate samples drawn from the same distribution as the training data. As with many deep generative models, the log-likelihood for a GAN is intractable. Unlike most other models, GANs do not require Monte Carlo or variational methods to overcome this intractability. Instead, GANs are trained by seeking a Nash equilibrium in a game played between a discriminator network that attempts to distinguish real data from model samples and a generator network that attempts to fool the discriminator. Stable algorithms for finding Nash equilibria remain an important research direction. Like many other models, GANs can also be applied to semi-supervised learning.
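For reference, the game described in the abstract is usually written as the minimax objective from the original GAN paper, where D is the discriminator, G the generator, p_data the data distribution, and p_z the noise prior:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

Below is a minimal sketch of the alternating-gradient training procedure this objective suggests, written against PyTorch on a toy 1-D Gaussian. The network sizes, hyperparameters, and the non-saturating generator loss are illustrative assumptions, not details taken from the talk.

    # Minimal alternating GAN training sketch (toy 1-D data); assumes PyTorch.
    import torch
    import torch.nn as nn

    def mlp(in_dim, out_dim):
        return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    G = mlp(1, 1)   # generator: noise z -> sample
    D = mlp(1, 1)   # discriminator: sample -> real/fake logit
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(5000):
        real = torch.randn(128, 1) * 0.5 + 2.0   # "data" drawn from N(2, 0.5)
        z = torch.randn(128, 1)
        fake = G(z)

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: fool the discriminator (non-saturating variant).
        g_loss = bce(D(G(z)), torch.ones(128, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Each player takes a gradient step on its own loss while the other is held fixed; training aims for the Nash equilibrium mentioned in the abstract, at which the generator's samples are indistinguishable from the data.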
