A generative adversarial network (GAN) is a minimax game between a generator, which mimics the true data distribution, and a discriminator, which distinguishes the samples produced by the generator from the real training samples. Given an unconstrained discriminator able to approximate any function, this game reduces to finding the generative model that minimizes a divergence measure, e.g., the Jensen-Shannon (JS) divergence, to the data distribution. In practice, however, the discriminator is constrained to a smaller class F such as neural networks. A natural question is then how the divergence-minimization interpretation changes as we constrain F. In this work, we address this question by developing a convex duality framework for analyzing GANs. For a convex set F, this duality framework interprets the original GAN formulation as finding the generative model with minimum JS-divergence to the distributions penalized to match the moments of the data distribution, with the moments specified by the discriminators in F. We show that this interpretation holds more generally for f-GAN and Wasserstein GAN. As a byproduct, we apply the duality framework to a hybrid of f-divergence and Wasserstein distance. Unlike the f-divergence, we prove that the proposed hybrid divergence changes continuously with the generative model, which suggests regularizing the discriminator's Lipschitz constant in f-GAN and vanilla GAN. We numerically evaluate the power of the suggested regularization schemes for improving GANs' training performance.
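As a concrete illustration of the regularization scheme the abstract suggests, below is a minimal sketch (not the authors' code) of constraining the discriminator's Lipschitz constant in a vanilla GAN via a gradient penalty on interpolated samples. The toy networks, the synthetic "data" distribution, and the penalty weight `gp_weight` are illustrative assumptions.

```python
# Minimal sketch: vanilla GAN training with a gradient penalty that pushes
# the discriminator toward being 1-Lipschitz (WGAN-GP-style penalty applied
# to the vanilla GAN loss). All sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim, gp_weight = 16, 2, 10.0

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def gradient_penalty(D, real, fake):
    """Penalize deviation of ||grad D(x)|| from 1 on real/fake interpolates."""
    alpha = torch.rand(real.size(0), 1)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    out = D(interp)
    grads = torch.autograd.grad(out, interp,
                                grad_outputs=torch.ones_like(out),
                                create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0          # toy "data" distribution
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: vanilla GAN loss plus Lipschitz regularization.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)) +
              gp_weight * gradient_penalty(D, real, fake.detach()))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: non-saturating vanilla GAN loss.
    g_loss = bce(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Spectral normalization of the discriminator's weights would be an alternative way to bound its Lipschitz constant; the gradient penalty above is just one common instantiation.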
Author Information
Farzan Farnia (Stanford University)
David Tse (Stanford University)
More from the Same Authors
- 2022 Poster: Beyond the Best: Distribution Functional Estimation in Infinite-Armed Bandits
  Yifei Wang · Tavor Baharav · Yanjun Han · Jiantao Jiao · David Tse
- 2019 Poster: Ultra Fast Medoid Identification via Correlated Sequential Halving
  Tavor Baharav · David Tse
- 2018 Poster: Porcupine Neural Networks: Approximating Neural Network Landscapes
  Soheil Feizi · Hamid Javadi · Jesse Zhang · David Tse
- 2017 Poster: Tensor Biclustering
  Soheil Feizi · Hamid Javadi · David Tse
- 2017 Poster: NeuralFDR: Learning Discovery Thresholds from Hypothesis Features
  Fei Xia · Martin J Zhang · James Zou · David Tse
- 2016 Poster: A Minimax Approach to Supervised Learning
  Farzan Farnia · David Tse
- 2015 Poster: Discrete Rényi Classifiers
  Meisam Razaviyayn · Farzan Farnia · David Tse