Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration
Si-An Chen · Chun-Liang Li · Hsuan-Tien Lin
Event URL: https://openreview.net/forum?id=6i3_eKHHzpz

Generative Adversarial Networks (GANs) are a powerful family of models that learn an underlying distribution to generate synthetic data. Many existing studies of GANs focus on improving the realism of the generated image data for visual applications, and few concern improving the quality of the generated data for training other classifiers---a task known as the model compatibility problem. As a consequence, existing GANs often prefer generating 'easier' synthetic data that are far from the boundaries of the classifiers and refrain from generating near-boundary data, which are known to play an important role in training the classifiers. To improve GANs in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of classifiers pre-trained on the original data. In particular, we introduce an auxiliary Boundary-Calibration loss (BC-loss) into the generator of a GAN to match the statistics between the posterior distributions of original data and generated data with respect to the boundaries of the pre-trained classifiers. The BC-loss is provably unbiased and can be easily coupled with different GAN variants to improve their model compatibility. Experimental results demonstrate that BCGANs not only generate realistic images like original GANs but also achieve superior model compatibility compared to the original GANs.
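
The abstract does not give the exact form of the BC-loss, so the following is only a minimal sketch of the idea it describes: pass real and generated batches through a set of fixed, pre-trained classifiers and penalize a mismatch between summary statistics of their posterior distributions. The choice of matching per-class mean posteriors with a squared difference, and the names bc_loss, classifiers, and lambda_bc, are illustrative assumptions rather than the paper's formulation.

import torch

def bc_loss(classifiers, real_batch, fake_batch):
    # Hypothetical sketch: average, over the pre-trained classifiers, a squared
    # difference between the per-class mean posteriors on real and generated data.
    total = 0.0
    for clf in classifiers:
        with torch.no_grad():
            real_post = torch.softmax(clf(real_batch), dim=1)  # posteriors of original data
        fake_post = torch.softmax(clf(fake_batch), dim=1)      # posteriors of generated data
        total = total + ((real_post.mean(dim=0) - fake_post.mean(dim=0)) ** 2).sum()
    return total / len(classifiers)

# Illustrative use inside a generator update (adversarial_loss and lambda_bc are placeholders):
# generator_loss = adversarial_loss + lambda_bc * bc_loss(classifiers, real_batch, generator(z))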

Author Information

Si-An Chen (National Taiwan University)
Chun-Liang Li (Google)
Hsuan-Tien Lin (National Taiwan University)

Professor Hsuan-Tien Lin received a B.S. in Computer Science and Information Engineering from National Taiwan University in 2001, and an M.S. and a Ph.D. in Computer Science from the California Institute of Technology in 2005 and 2008, respectively. He joined the Department of Computer Science and Information Engineering at National Taiwan University as an assistant professor in 2008 and was promoted to full professor in 2017. Between 2016 and 2019, he worked as the Chief Data Scientist of Appier, a startup company that specializes in making AI easier for marketing. Currently, he keeps growing with Appier as its Chief Data Science Consultant. From the university, Prof. Lin received the Distinguished Teaching Awards in 2011 and 2021, the Outstanding Mentoring Award in 2013, and five Outstanding Teaching Awards between 2016 and 2020. He co-authored the introductory machine learning textbook Learning from Data and offered two popular Mandarin-taught MOOCs, Machine Learning Foundations and Machine Learning Techniques, based on the textbook. He served the machine learning community as Program Co-chair of NeurIPS 2020, Expo Co-chair of ICML 2021, and Workshop Chair of NeurIPS 2022 and 2023. He co-led the teams that won KDDCup 2010, both tracks of KDDCup 2011, track 2 of KDDCup 2012, and both tracks of KDDCup 2013.
