Invited Talk
in
Workshop: Smooth Games Optimization and Machine Learning

An interpretation of GANs via online learning and game theory

Paulina Grnarova


Abstract:

Generative Adversarial Networks (GANs) have become one of the most powerful paradigms for learning real-world distributions. Despite this success, their minimax nature makes them fundamentally different from classical generative models and raises novel challenges, most notably in training and evaluation: finding a saddle point is in general a harder task than converging to an extremum. We view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning and game theory, we propose (i) a novel training method with provable convergence to an equilibrium for semi-shallow GAN architectures, i.e., architectures where the discriminator is a one-layer network and the generator is an arbitrary network, and (ii) a natural metric for detecting non-convergence, namely the duality gap.
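As a toy illustration of the duality-gap metric mentioned above (a sketch, not the authors' implementation): for a GAN with value function V, the gap is DG(G, D) = max_{D'} V(G, D') − min_{G'} V(G', D), which is zero exactly at an equilibrium; for GANs the inner max/min require auxiliary optimization, but in a finite two-player zero-sum matrix game with payoff M(x, y) = xᵀAy the gap can be computed in closed form:

```python
import numpy as np

def duality_gap(A, x, y):
    """DG(x, y) = max_{y'} x^T A y' - min_{x'} x'^T A y.

    The row player minimizes and the column player maximizes x^T A y.
    Over mixed strategies, the inner max/min are attained at pure
    strategies, so they reduce to a max over columns and a min over rows.
    """
    best_response_col = np.max(x @ A)  # adversary's best reply to x
    best_response_row = np.min(A @ y)  # minimizer's best reply to y
    return best_response_col - best_response_row

# Matching pennies: the unique equilibrium is the uniform mixed strategy.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
eq = np.array([0.5, 0.5])

gap_at_eq = duality_gap(A, eq, eq)                    # 0 at equilibrium
gap_off_eq = duality_gap(A, np.array([1.0, 0.0]), eq) # positive away from it
```

The gap is nonnegative everywhere and vanishes only at a saddle point, which is what makes it usable as a convergence diagnostic during training.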
