Poster
Learning What and Where to Draw
Scott E Reed · Zeynep Akata · Santosh Mohan · Samuel Tenka · Bernt Schiele · Honglak Lee

Wed Dec 07 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #176

Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.
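To make the "what and where" conditioning concrete, here is a minimal PyTorch sketch of the core idea: a text embedding (the "what") is replicated spatially inside a bounding-box mask (the "where") before being fed to the generator. This is an illustration only, not the authors' implementation; the class name WhatWhereGenerator, the layer sizes, and the coarse 16 × 16 grid are all assumptions made for brevity.

import torch
import torch.nn as nn

class WhatWhereGenerator(nn.Module):
    """Illustrative generator conditioned on a text embedding and a box mask."""
    def __init__(self, z_dim=100, embed_dim=128, base=16):
        super().__init__()
        self.base = base
        # Fuse noise and text embedding into a coarse spatial feature map.
        self.fc = nn.Linear(z_dim + embed_dim, 256 * base * base)
        # Upsample 16x16 -> 32x32 -> 64x64 -> 128x128.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256 + embed_dim, 128, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, text_embed, box_mask):
        # box_mask: (N, 1, base, base); 1 inside the bounding box, 0 outside.
        n = z.size(0)
        h = self.fc(torch.cat([z, text_embed], dim=1))
        h = h.view(n, 256, self.base, self.base)
        # "Where" pathway: replicate the embedding over the box region only,
        # so the spatial constraint is baked into the feature map.
        where = text_embed[:, :, None, None] * box_mask
        return self.deconv(torch.cat([h, where], dim=1))

# Usage: synthesize one 128x128 image with the object constrained to the
# upper-left quadrant of the coarse grid (random embedding as a stand-in
# for a learned text encoding).
g = WhatWhereGenerator()
z = torch.randn(1, 100)
t = torch.randn(1, 128)
mask = torch.zeros(1, 1, 16, 16)
mask[:, :, :8, :8] = 1.0          # bounding box covers the top-left
img = g(z, t, mask)               # -> (1, 3, 128, 128)

Masking the replicated embedding, rather than concatenating box coordinates as a vector, is what lets the conditioning act locally: features outside the box receive no text signal, which mirrors the paper's separation of global "what" information from spatial "where" information.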

Author Information

Scott E Reed (University of Michigan)
Zeynep Akata (Max Planck Institute for Informatics)
Santosh Mohan (University of Michigan)
Samuel Tenka (University of Michigan)
Bernt Schiele (Max Planck Institute for Informatics)
Honglak Lee (University of Michigan)