Image-to-image translation for cross-domain disentanglement
Abel Gonzalez-Garcia · Joost van de Weijer · Yoshua Bengio

Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #152

Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information common to both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domain-specific image transfer and interpolation, and perform cross-domain retrieval without the need for labeled data, requiring only paired images. We compare our model to the state-of-the-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.
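The representation split described in the abstract can be illustrated with a toy sketch. The code below is not the paper's architecture: the encoders and decoders are linear stand-ins with made-up shapes, and all variable names (`Ws_x`, `We_x`, `Vx`, etc.) are hypothetical. It only shows the data flow of a cross-domain autoencoder: each domain's input is encoded into a shared code plus an exclusive code, and an image is reconstructed from the *other* domain's shared code combined with its own exclusive code (during training, a loss would push the two shared codes to agree).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_shared, W_exclusive):
    """Split an input into a shared and an exclusive code (linear toy encoder)."""
    return W_shared @ x, W_exclusive @ x

def decode(shared, exclusive, V):
    """Reconstruct from the concatenation of shared and exclusive codes."""
    return V @ np.concatenate([shared, exclusive])

# Toy "images" from domains X and Y (8-dimensional vectors, purely illustrative)
x = rng.normal(size=8)
y = rng.normal(size=8)

# Hypothetical per-domain encoder/decoder weights:
# 4-dim shared code, 2-dim exclusive code per domain
Ws_x, We_x = rng.normal(size=(4, 8)), rng.normal(size=(2, 8))
Ws_y, We_y = rng.normal(size=(4, 8)), rng.normal(size=(2, 8))
Vx = rng.normal(size=(8, 6))  # decoder for domain X
Vy = rng.normal(size=(8, 6))  # decoder for domain Y

s_x, e_x = encode(x, Ws_x, We_x)
s_y, e_y = encode(y, Ws_y, We_y)

# Cross-domain autoencoder: reconstruct x from the OTHER domain's shared
# code plus x's own exclusive code, and symmetrically for y.
x_hat = decode(s_y, e_x, Vx)
y_hat = decode(s_x, e_y, Vy)

print(x_hat.shape, y_hat.shape)  # reconstructions have the input dimensionality
```

Swapping in `s_y` for `s_x` (and vice versa) is what forces the shared code to carry only domain-common information, while the exclusive code absorbs the domain-specific factors of variation.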

Author Information

Abel Gonzalez-Garcia (Computer Vision Center)
Joost van de Weijer (Computer Vision Center Barcelona)
Yoshua Bengio (U. Montreal)