

Poster

Coupled Generative Adversarial Networks

Ming-Yu Liu · Oncel Tuzel

Area 5+6+7+8 #138

Keywords: [ Deep Learning or Neural Networks ] [ (Other) Unsupervised Learning Methods ] [ (Application) Computer Vision ]


Abstract:

We propose the coupled generative adversarial networks (CoGAN) framework for generating pairs of corresponding images in two different domains. The framework consists of a pair of generative adversarial networks, each responsible for generating images in one domain. We show that by enforcing a simple weight-sharing constraint, the CoGAN learns to generate pairs of corresponding images without any pairs of corresponding images from the two domains being present in the training set. In other words, the CoGAN learns a joint distribution of images in the two domains from images drawn separately from the marginal distributions of the individual domains. This is in contrast to existing multi-modal generative models, which require corresponding images for training. We apply the CoGAN to several pair image generation tasks. For each task, the CoGAN learns to generate convincing pairs of corresponding images. We further demonstrate applications of the CoGAN framework to domain adaptation and cross-domain image generation.
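The weight-sharing constraint described in the abstract can be illustrated with a minimal sketch. The sketch below assumes PyTorch and uses hypothetical layer sizes; it is not the authors' reference implementation. The two generators share their early layers, which decode high-level, domain-invariant semantics from the noise vector, and keep separate per-domain output heads; in the paper, the discriminators share their last layers analogously.

```python
# Minimal sketch of the CoGAN weight-sharing idea (assumed PyTorch, hypothetical sizes).
import torch
import torch.nn as nn

class CoupledGenerators(nn.Module):
    def __init__(self, latent_dim=100, img_dim=28 * 28):
        super().__init__()
        # Shared layers: map noise to a high-level representation common to both domains.
        self.shared = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
        )
        # Domain-specific heads: render the shared representation in each domain.
        self.head_a = nn.Sequential(nn.Linear(512, img_dim), nn.Tanh())
        self.head_b = nn.Sequential(nn.Linear(512, img_dim), nn.Tanh())

    def forward(self, z):
        h = self.shared(z)
        # A single noise vector yields a pair of corresponding images.
        return self.head_a(h), self.head_b(h)

# Usage: sample one batch of noise and obtain corresponding images in both domains.
z = torch.randn(16, 100)
gen = CoupledGenerators()
img_a, img_b = gen(z)  # shapes: (16, 784) and (16, 784)
```

Because the pair is generated from the same shared representation, training each GAN only against unpaired images from its own domain is enough to tie the two outputs together, which is the mechanism the abstract refers to.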
