Poster

Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game

Ngoc-Trung Tran · Viet-Hung Tran · Bao-Ngoc Nguyen · Linxiao Yang · Ngai-Man (Man) Cheung

East Exhibition Hall B + C #125

Keywords: [ Adversarial Networks ] [ Deep Learning ] [ Generative Models ]


Abstract: Self-supervised (SS) learning is a powerful approach for representation learning using unlabeled data. Recently, it has been applied to Generative Adversarial Network (GAN) training. Specifically, SS tasks were proposed to address the catastrophic forgetting issue in the GAN discriminator. In this work, we perform an in-depth analysis to understand how SS tasks interact with the learning of the generator. From the analysis, we identify issues with these SS tasks that allow a severely mode-collapsed generator to excel at the SS tasks. To address the issues, we propose new SS tasks based on a multi-class minimax game. The competition between our proposed SS tasks in the game encourages the generator to learn the data distribution and generate diverse samples. We provide both theoretical and empirical analysis to support that our proposed SS tasks have better convergence properties. We conduct experiments to incorporate our proposed SS tasks into two different GAN baseline models. Our approach establishes state-of-the-art FID scores on CIFAR-10, CIFAR-100, STL-10, CelebA, Imagenet $32\times32$ and Stacked-MNIST datasets, outperforming existing works by considerable margins in some cases. Our unconditional GAN model approaches the performance of conditional GANs without using labeled data. Our code: \url{https://github.com/tntrung/msgan}
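The abstract does not spell out the concrete form of the multi-class minimax SS task, so the following is only a minimal illustrative sketch of one plausible reading (not the authors' released code): an auxiliary classifier head on the discriminator assigns rotated real images to their rotation class and rotated generated images to an extra "fake" class, while the generator tries to have its rotated samples classified as genuine rotation classes. The rotation pretext task, the (K+1)-class head, and all names and hyper-parameters below are assumptions for illustration.

```python
# Sketch only: assumes a rotation pretext task with K = 4 angles and a
# discriminator classifier head `cls_head` that outputs K + 1 logits
# (K rotation classes plus one "fake" class). Not the authors' implementation.
import torch
import torch.nn.functional as F

K = 4  # assumed rotations: 0, 90, 180, 270 degrees


def rotate_batch(x):
    """Return the batch rotated by each of the K angles, with rotation labels."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(K)]
    labels = torch.arange(K, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rots, dim=0), labels


def ss_discriminator_loss(cls_head, real, fake):
    # Rotated real images should be classified into their true rotation class.
    real_rot, real_lbl = rotate_batch(real)
    loss_real = F.cross_entropy(cls_head(real_rot), real_lbl)
    # Rotated generated images are pushed into the extra "fake" class (index K).
    fake_rot, _ = rotate_batch(fake.detach())
    fake_lbl = torch.full((fake_rot.size(0),), K, dtype=torch.long,
                          device=fake_rot.device)
    loss_fake = F.cross_entropy(cls_head(fake_rot), fake_lbl)
    return loss_real + loss_fake


def ss_generator_loss(cls_head, fake):
    # Minimax side of the game: the generator wants its rotated samples to be
    # classified as genuine rotation classes rather than as the fake class,
    # which a mode-collapsed generator cannot do for all rotations at once.
    fake_rot, rot_lbl = rotate_batch(fake)
    return F.cross_entropy(cls_head(fake_rot), rot_lbl)
```

In this sketch the SS losses would be added, with some weighting, to the usual adversarial losses of the baseline GAN; the exact formulation and weights are given in the paper and repository linked above.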
