Given the ever-increasing computational cost of modern machine learning models, we need new ways to reuse such expert models and thus tap into the resources that have been invested in their creation. Recent work suggests that the power of these massive models is captured by the representations they learn. We therefore seek a model that can relate different existing representations, and we propose to solve this task with a conditionally invertible network. This network demonstrates its capability by (i) providing generic transfer between diverse domains, (ii) enabling controlled content synthesis by allowing modification in other domains, and (iii) facilitating diagnosis of existing representations by translating them into interpretable domains such as images. Our domain transfer network can translate between fixed representations without having to learn or finetune them. This allows users to utilize various existing domain-specific expert models from the literature that were trained with extensive computational resources. Experiments on diverse conditional image synthesis tasks, competitive image modification results, and experiments on image-to-image and text-to-image generation demonstrate the generic applicability of our approach. For example, we translate between BERT and BigGAN, state-of-the-art text and image models, to provide text-to-image generation, which neither expert can perform on its own.
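The core building block of such a conditionally invertible network can be illustrated with a conditional affine coupling layer: an invertible map over one fixed representation whose scale and shift are predicted from a conditioning embedding. The sketch below is a minimal, hypothetical illustration of this mechanism, not the paper's exact architecture; the dimensions, network widths, and class names are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One conditional affine coupling block (RealNVP-style).

    Illustrative sketch: splits the input representation in half,
    transforms the second half with a scale/shift predicted from the
    first half and a fixed conditioning embedding. The transform is
    invertible by construction, since the first half passes through
    unchanged and the affine parameters can be recomputed exactly.
    """
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z, cond, reverse=False):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        params = self.net(torch.cat([z1, cond], dim=1))
        log_s, t = params.chunk(2, dim=1)
        log_s = torch.tanh(log_s)  # bound scales for numerical stability
        if not reverse:
            z2 = z2 * torch.exp(log_s) + t
        else:
            z2 = (z2 - t) * torch.exp(-log_s)
        return torch.cat([z1, z2], dim=1)

# Invertibility check on stand-in "fixed representations":
# in the paper's setting, z would come from one frozen expert
# (e.g. a text encoder) and cond from another domain's embedding.
torch.manual_seed(0)
block = ConditionalAffineCoupling(dim=8, cond_dim=4)
z = torch.randn(2, 8)
c = torch.randn(2, 4)
z_fwd = block(z, c)
z_back = block(z_fwd, c, reverse=True)
print(torch.allclose(z, z_back, atol=1e-5))  # → True
```

Because both experts stay frozen, only the small coupling networks are trained; stacking several such blocks (with permutations in between) yields the full invertible translation network.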
Author Information
Robin Rombach (Heidelberg University)
Patrick Esser (Heidelberg University)
Bjorn Ommer (Heidelberg University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Oral: Network-to-Network Translation with Conditional Invertible Neural Networks
  Wed. Dec 9th, 02:00 -- 02:15 PM, Room: Orals & Spotlights: Probabilistic/Causality
More from the Same Authors
- 2020: A Note on Data Biases in Generative Models
  Patrick Esser
- 2022 Poster: Retrieval-Augmented Diffusion Models
  Andreas Blattmann · Robin Rombach · Kaan Oktay · Jonas Müller · Björn Ommer
- 2021 Poster: Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
  Timo Milbich · Karsten Roth · Samarth Sinha · Ludwig Schmidt · Marzyeh Ghassemi · Bjorn Ommer
- 2021 Poster: ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis
  Patrick Esser · Robin Rombach · Andreas Blattmann · Bjorn Ommer
- 2020: An Image is Worth 16 × 16 Tokens: Visual Priors for Efficient Image Synthesis with Transformers
  Robin Rombach
- 2016 Poster: CliqueCNN: Deep Unsupervised Exemplar Learning
  Miguel A Bautista · Artsiom Sanakoyeu · Ekaterina Tikhoncheva · Bjorn Ommer
- 2012 Poster: Visual Recognition using Embedded Feature Selection for Curvature Self-Similarity
  Angela Eigenstetter · Bjorn Ommer