

Spotlight in Workshop: The Symbiosis of Deep Learning and Differential Equations -- III

ELeGANt: An Euler-Lagrange Analysis of Wasserstein Generative Adversarial Networks

Siddarth Asokan · Chandra Seelamantula

Keywords: [ Euler-Lagrange condition ] [ Poisson PDE ] [ Optimal GAN discriminator ] [ generative adversarial networks ] [ Fourier series ]

Sat 16 Dec, 12:00-12:15 p.m. PST

Abstract:

We consider Wasserstein generative adversarial networks (WGANs) with a gradient-norm penalty and analyze the underlying *functional* optimization problem in a variational setting. The optimal discriminator in this setting is the solution to a Poisson partial differential equation and can be obtained in closed form, without having to train a neural network. We illustrate this by employing a Fourier-series approximation to solve the Poisson equation. Experimental results on synthetic low-dimensional Gaussian data demonstrate superior convergence behavior of the proposed approach compared with baseline WGAN variants that employ weight clipping, gradient penalties, or Lipschitz penalties on the discriminator. Further, within this setting, the optimal Lagrange multiplier can be computed in closed form and serves as a proxy for measuring generator convergence. This work is an extended abstract summarizing Asokan and Seelamantula (2023).
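To make the variational argument concrete, consider a penalized WGAN discriminator objective of the form (a sketch; the exact constants, penalty form, and boundary conditions follow Asokan and Seelamantula (2023)):

$$\max_{D} \; \mathbb{E}_{x \sim p_d}[D(x)] - \mathbb{E}_{x \sim p_g}[D(x)] - \lambda \int \|\nabla D(x)\|^2 \, dx.$$

Setting the first variation with respect to $D$ to zero and integrating by parts gives the Euler-Lagrange condition $2\lambda \, \nabla^2 D^*(x) = p_g(x) - p_d(x)$: a Poisson PDE whose source term is the density difference, which is why the optimal discriminator admits a closed-form (e.g., Fourier-series) solution. Below is a minimal one-dimensional sketch of that solution using a truncated sine series on an interval with homogeneous Dirichlet boundary conditions; the domain, the Gaussian densities, and the penalty weight `lam` are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal 1-D sketch: solve 2*lam * D''(x) = p_g(x) - p_d(x) on [0, L]
# with a truncated sine series and Dirichlet boundaries D(0) = D(L) = 0.
# The Gaussians, domain, and lam are illustrative assumptions.
import numpy as np
from scipy.stats import norm

L, K, lam = 10.0, 64, 1.0                  # domain length, number of modes, penalty weight
x = np.linspace(0.0, L, 2048)
dx = x[1] - x[0]

# Source term of the Poisson equation: density difference (synthetic data)
p_d = norm.pdf(x, loc=4.0, scale=1.0)      # "real" density (hypothetical)
p_g = norm.pdf(x, loc=6.0, scale=0.7)      # "generator" density (hypothetical)
rho = (p_g - p_d) / (2.0 * lam)

# Sine-series coefficients: c_k = (2/L) * int_0^L rho(x) sin(k*pi*x/L) dx
k = np.arange(1, K + 1)
basis = np.sin(np.outer(k, x) * np.pi / L)           # shape (K, len(x))
c = (2.0 / L) * (rho * basis).sum(axis=1) * dx       # Riemann-sum quadrature

# Mode-wise solve: D'' = c_k sin(k*pi*x/L) gives D = -c_k/(k*pi/L)^2 sin(...)
D_star = -(c / (k * np.pi / L) ** 2) @ basis

# D_star approximates the optimal discriminator on the grid x in closed form,
# with no discriminator network or adversarial training loop involved.
```

In this sketch, each sine mode of the source is inverted independently (the Laplacian is diagonal in the sine basis), which is the sense in which the Fourier-series approach yields the discriminator in closed form.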
