

Poster

Probabilistic Neural Programmed Networks for Scene Generation

Zhiwei Deng · Jiacheng Chen · YIFANG FU · Greg Mori

Room 210 #7

Keywords: [ Generative Models ] [ Computer Vision ] [ Deep Autoencoders ]


Abstract:

In this paper we address the problem of text-to-scene image generation. Building generative models that capture the variability of complicated scenes with rich semantics is a grand goal of image generation. Complicated scene images contain rich visual elements, compositional visual concepts, and complex relations between objects. A generative model, viewed as an analysis-by-synthesis process, should encompass three core components: 1) the generation process that composes the scene; 2) the primitive visual elements and how they are composed; 3) the rendering of abstract concepts into their pixel-level realizations. We propose PNP-Net, a variational auto-encoder framework that addresses these three challenges: it flexibly composes images with a dynamic network structure, learns a set of distribution transformers that can compose distributions based on semantics, and decodes samples from these distributions into realistic images.
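To make the idea of "distribution transformers that compose distributions based on semantics" concrete, here is a minimal illustrative sketch in PyTorch. It is not the paper's actual architecture; the module names (ConceptPrior, CombineOp), dimensions, and word embeddings are hypothetical placeholders. The sketch only shows the general pattern of mapping concepts to latent Gaussians, composing them with a learned operator, and sampling a latent code via the standard VAE reparameterization trick.

```python
import torch
import torch.nn as nn


class ConceptPrior(nn.Module):
    """Hypothetical module: maps a concept embedding (e.g. an object or
    attribute word) to a diagonal Gaussian over a latent appearance code."""
    def __init__(self, embed_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(embed_dim, latent_dim)
        self.logvar = nn.Linear(embed_dim, latent_dim)

    def forward(self, embedding):
        return self.mu(embedding), self.logvar(embedding)


class CombineOp(nn.Module):
    """Hypothetical learned operator that merges two concept distributions
    (e.g. an attribute with an object) into a new Gaussian."""
    def __init__(self, latent_dim):
        super().__init__()
        self.net = nn.Linear(4 * latent_dim, 2 * latent_dim)

    def forward(self, dist_a, dist_b):
        mu_a, logvar_a = dist_a
        mu_b, logvar_b = dist_b
        h = self.net(torch.cat([mu_a, logvar_a, mu_b, logvar_b], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        return mu, logvar


def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)


# Illustrative usage: compose "red" with "cube", then sample a latent
# code that a convolutional decoder would render into pixels.
embed_dim, latent_dim = 32, 16
prior = ConceptPrior(embed_dim, latent_dim)
combine = CombineOp(latent_dim)

red = torch.randn(1, embed_dim)    # placeholder word embedding
cube = torch.randn(1, embed_dim)   # placeholder word embedding

dist_red_cube = combine(prior(red), prior(cube))
z = reparameterize(*dist_red_cube)  # latent code for "red cube"
print(z.shape)                      # torch.Size([1, 16])
```

In this sketch the network structure follows the semantic parse of the text: each composition step applies another operator to the intermediate distributions, which echoes the paper's description of composing images with a dynamic network structure before decoding samples into pixels.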
