Poster
A Unified View of cGANs with and without Classifiers
Si-An Chen · Chun-Liang Li · Hsuan-Tien Lin

Thu Dec 09 04:30 PM -- 06:00 PM (PST)

Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions. Existing cGANs are based on a wide range of different discriminator designs and training objectives. One popular design in earlier works is to include a classifier during training, with the assumption that good classifiers can help eliminate samples generated with the wrong classes. Nevertheless, including classifiers in cGANs often comes with a side effect of only generating easy-to-classify samples. Recently, some representative cGANs avoid this shortcoming and reach state-of-the-art performance without classifiers. It remains unanswered, however, whether classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximation, which provides a unified view and brings new insights to understanding cGANs. Experimental results demonstrate that the design inspired by the proposed framework outperforms state-of-the-art cGANs on multiple benchmark datasets, especially on the most challenging ImageNet. The code is available at https://github.com/sian-chen/PyTorch-ECGAN.
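For readers who want a concrete picture of the decomposition mentioned above, the following is a minimal LaTeX sketch, not taken from the paper itself, of how a joint distribution over images and labels can be split into a classification term and a marginal (GAN-like) term under a standard energy-based parameterization. The symbols $f_\theta(x)[y]$ (per-class logits) and $Z(\theta)$ are illustrative assumptions, not necessarily the paper's notation; see the paper and repository for the exact formulation.

```latex
% Hedged sketch: energy-based parameterization of the joint distribution,
% assuming a discriminator that outputs per-class logits f_\theta(x)[y].
\begin{align}
  p_\theta(x, y)
    &= \frac{\exp\!\big(f_\theta(x)[y]\big)}{Z(\theta)},
    \qquad Z(\theta) = \sum_{y'} \int \exp\!\big(f_\theta(x)[y']\big)\, dx \\
  \log p_\theta(x, y)
    &= \underbrace{\log p_\theta(y \mid x)}_{\text{classification term}}
     + \underbrace{\log p_\theta(x)}_{\text{marginal (generation) term}} \\
  p_\theta(y \mid x)
    &= \frac{\exp\!\big(f_\theta(x)[y]\big)}{\sum_{y'} \exp\!\big(f_\theta(x)[y']\big)}
\end{align}
```

Under this kind of parameterization, the conditional term is an ordinary softmax classifier over the logits, while the marginal term is what adversarial training targets, which is one way to see how classifier-based and classifier-free cGAN objectives can arise as different approximations of the same joint objective.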

Author Information

Si-An Chen (National Taiwan University)
Chun-Liang Li (Google)
Hsuan-Tien Lin (National Taiwan University)

Professor Hsuan-Tien Lin received a B.S. in Computer Science and Information Engineering from National Taiwan University in 2001, and an M.S. and a Ph.D. in Computer Science from the California Institute of Technology in 2005 and 2008, respectively. He joined the Department of Computer Science and Information Engineering at National Taiwan University as an assistant professor in 2008 and was promoted to full professor in 2017. Between 2016 and 2019, he worked as the Chief Data Scientist of Appier, a startup company that specializes in making AI easier for marketing. He currently continues to work with Appier as its Chief Data Science Consultant. From the university, Prof. Lin received the Distinguished Teaching Awards in 2011 and 2021, the Outstanding Mentoring Award in 2013, and five Outstanding Teaching Awards between 2016 and 2020. He co-authored the introductory machine learning textbook Learning from Data and offered two popular Mandarin-taught MOOCs, Machine Learning Foundations and Machine Learning Techniques, based on the textbook. He has served the machine learning community as Program Co-chair of NeurIPS 2020, Expo Co-chair of ICML 2021, and Workshop Chair of NeurIPS 2022 and 2023. He co-led the teams that won KDDCup 2010, both tracks of KDDCup 2011, track 2 of KDDCup 2012, and both tracks of KDDCup 2013.
