Poster
Improved Techniques for Training GANs
Tim Salimans · Ian Goodfellow · Wojciech Zaremba · Vicki Cheung · Alec Radford · Peter Chen · Xi Chen

Mon Dec 05 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #166

We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: Our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.
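
One of the training procedures the paper introduces is feature matching, which trains the generator to match the expected activations of an intermediate discriminator layer on real versus generated data, rather than directly maximizing the discriminator's output. Below is a minimal sketch of that objective, assuming NumPy arrays of discriminator activations; the function name, shapes, and random stand-in data are illustrative only, not the authors' implementation.

import numpy as np

def feature_matching_loss(f_real, f_fake):
    # f_real, f_fake: (batch, features) activations from an intermediate
    # discriminator layer on a real and a generated minibatch.
    # Returns ||E[f(x)] - E[f(G(z))]||^2, which the generator minimizes.
    return np.sum((f_real.mean(axis=0) - f_fake.mean(axis=0)) ** 2)

# Hypothetical usage with random placeholders standing in for activations:
rng = np.random.default_rng(0)
f_real = rng.normal(size=(64, 128))
f_fake = rng.normal(size=(64, 128))
print(feature_matching_loss(f_real, f_fake))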

Author Information

Tim Salimans (Algoritmica)
Ian Goodfellow (OpenAI)

Ian Goodfellow is a research scientist at OpenAI. He obtained a B.Sc. and M.Sc. from Stanford University in 2009. He worked on the Stanford AI Robot and interned at Willow Garage before beginning to study deep learning under the direction of Andrew Ng. He completed a PhD co-supervised by Yoshua Bengio and Aaron Courville in 2014. He invented generative adversarial networks shortly after completing his thesis and shortly before joining Google Brain. At Google, he co-developed an end-to-end deep learning system for recognizing addresses in Street View, studied machine learning security and privacy, and co-authored the MIT Press textbook, Deep Learning. In 2016 he left Google to join OpenAI, a non-profit whose mission is to build safe AI for the benefit of everyone.

Wojciech Zaremba (OpenAI)
Vicki Cheung (OpenAI)
Alec Radford (OpenAI)
Peter Chen (UC Berkeley and OpenAI)
Xi Chen (UC Berkeley and OpenAI)

Xi Chen is an associate professor with tenure at the Stern School of Business at New York University, with affiliated appointments in Computer Science and the Center for Data Science. Before that, he was a postdoc in the group of Prof. Michael Jordan at UC Berkeley. He obtained his Ph.D. from the Machine Learning Department at Carnegie Mellon University (CMU). He studies high-dimensional statistical learning, online learning, large-scale stochastic optimization, and applications to operations. He has published more than 20 journal articles in statistics, machine learning, and operations, and more than 30 papers in top peer-reviewed machine learning conference proceedings. He received the NSF CAREER Award, the ICSA Outstanding Young Researcher Award, and Faculty Research Awards from Google, Adobe, Alibaba, and Bloomberg, and was featured in the Forbes list of “30 Under 30 in Science”.
