We formalize and attack the problem of generating new images from old ones that are as diverse as possible, allowing them to change without restriction only in certain parts of the image while remaining globally consistent. This encompasses a typical situation in generative modelling, where we are happy with parts of the generated data but would like to resample others (``I like this generated castle overall, but this tower looks unrealistic, I would like a new one''). To address this problem, we build on the best conditional and unconditional generative models and introduce a new network architecture, a new training procedure, and a new algorithm for resampling parts of the image as desired.
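The sketch below is a minimal illustration of the resampling setting described above, not the paper's actual architecture or algorithm: it assumes a generator conditioned on a spatial noise map, so that noise can be redrawn only inside a user-chosen region while the rest is held fixed. The names generator, spatial_noise, and mask are hypothetical.

import torch

def resample_region(generator, spatial_noise, mask):
    """Redraw the noise only where mask == 1, then regenerate the image.

    generator     : assumed to map a noise map of shape (B, C, H, W) to an image
    spatial_noise : the noise map that produced the original image
    mask          : binary tensor broadcastable to spatial_noise; 1 = resample here
    """
    fresh = torch.randn_like(spatial_noise)             # new randomness
    mixed = mask * fresh + (1 - mask) * spatial_noise   # keep the old noise elsewhere
    with torch.no_grad():
        return generator(mixed), mixed

For example, the mask could cover the tower region of the generated castle: only the noise under the mask is redrawn, while the unchanged noise elsewhere keeps the rest of the image globally consistent.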
Author Information
Sarah Hong (Latent Space)
Martin Arjovsky (École Normale Supérieure)
Darryl Barnhart (Latent Space)
Ian Thompson (Latent Space)
More from the Same Authors
- 2018 Workshop: Causal Learning
  Martin Arjovsky · Christina Heinze-Deml · Anna Klimovskaia · Maxime Oquab · Leon Bottou · David Lopez-Paz
- 2017 Poster: Improved Training of Wasserstein GANs
  Ishaan Gulrajani · Faruk Ahmed · Martin Arjovsky · Vincent Dumoulin · Aaron Courville