

Poster in Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision

Learning Representations for Zero-Shot Image Generation without Text

Gautam Singh · Fei Deng · Sungjin Ahn


Abstract:

DALL-E has shown an impressive ability to generate images that are novel (significantly and systematically different from the training distribution) yet realistic. This is possible because it is trained on a dataset of text-image pairs in which the text provides the source of compositionality. A natural follow-up question is whether such compositionality can be achieved without conditioning on text. In this paper, we propose a simple but novel slot-based autoencoding architecture, called SLATE, that achieves text-free DALL-E-style generation by learning compositional slot-based representations purely from images, an ability that DALL-E lacks. Unlike existing object-centric representation models, which decode pixels independently for each slot and pixel location and compose them via mixture-based alpha composition, we propose to condition an Image GPT decoder on the slots, enabling more flexible generation by capturing complex interactions among the pixels and the slots. In experiments, we show that this simple architecture achieves zero-shot generation of novel images without text and higher generation quality than models based on mixture decoders.
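
The sketch below illustrates the architectural idea described in the abstract: an autoregressive (Image GPT-style) Transformer decoder that predicts a sequence of discrete image tokens while cross-attending to a small set of slots, rather than composing per-slot pixel mixtures. It is a minimal PyTorch sketch under illustrative assumptions, not the authors' implementation: the slot extraction is reduced to a single attention step standing in for iterative Slot Attention, the discrete tokenizer (e.g. a discrete VAE) is assumed to exist upstream, and all module names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn


class SlotConditionedDecoderSketch(nn.Module):
    """Minimal sketch: slots extracted from image tokens condition an
    autoregressive Transformer decoder over discrete image codes."""

    def __init__(self, vocab_size=512, d_model=192, num_slots=4, seq_len=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len, d_model))
        # Learned slot initializations; a full model would refine these with
        # iterative Slot Attention over encoded image features.
        self.slots = nn.Parameter(torch.randn(1, num_slots, d_model))
        self.slot_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        layer = nn.TransformerDecoderLayer(
            d_model, nhead=4, dim_feedforward=4 * d_model, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (B, T) discrete image codes from an upstream tokenizer.
        B, T = tokens.shape
        x = self.token_emb(tokens) + self.pos_emb[:, :T]
        # Slots attend to the full token features (encoder side sees the
        # whole image); this single step stands in for Slot Attention.
        slots, _ = self.slot_attn(self.slots.expand(B, -1, -1), x, x)
        # Causal mask so each position only attends to previous tokens,
        # while every position cross-attends to the slots.
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.decoder(tgt=x, memory=slots, tgt_mask=mask)
        return self.head(h)  # (B, T, vocab_size) next-token logits


if __name__ == "__main__":
    model = SlotConditionedDecoderSketch()
    codes = torch.randint(0, 512, (2, 256))
    print(model(codes).shape)  # torch.Size([2, 256, 512])
```

In contrast to mixture-based alpha composition, where each slot decodes its own pixel map independently, here every predicted token can depend jointly on all slots and on previously generated tokens, which is the flexibility the abstract attributes to the slot-conditioned Image GPT decoder.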
