

Poster

Incremental Scene Synthesis

Benjamin Planche · Xuejian Rong · Ziyan Wu · Srikrishna Karanam · Harald Kosch · YingLi Tian · Jan Ernst · Andreas Hutter

East Exhibition Hall B, C #87

Keywords: [ Applications ] [ Computer Vision ] [ Memory-Augmented Neural Networks ] [ Deep Learning -> Generative Models ] [ Deep Learning ]


Abstract:

We present a method to incrementally generate complete 2D or 3D scenes with the following properties: (a) the scene is globally consistent at each step according to a learned scene prior, (b) real observations of a scene can be incorporated while preserving global consistency, (c) unobserved regions can be hallucinated locally in a manner consistent with previous observations, hallucinations, and global priors, and (d) hallucinations are statistical in nature, i.e., different scenes can be generated from the same observations. To achieve this, we model the virtual scene as traversed by an active agent that, at each step, either perceives an observed part of the scene or generates a local hallucination. The latter can be interpreted as the agent's expectation at this step of its path through the scene and can be applied to autonomous navigation. In the limit of observing real data at every point, our method converges to solving the SLAM problem; otherwise, it can sample entirely imagined scenes from prior distributions. Beyond autonomous agents, applications include problems where large amounts of data are required to build robust real-world systems, but few samples are available. We demonstrate efficacy on various 2D as well as 3D data.
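To make the observe-or-hallucinate loop concrete, here is a minimal, purely illustrative Python sketch of the control flow the abstract describes: a global scene memory that integrates real observations where available and otherwise samples unknown regions. All names (`SceneMemory`, `hallucinate_patch`, `incremental_synthesis`) are hypothetical, and the "prior" is stand-in Gaussian noise rather than the paper's learned scene prior or memory-augmented architecture.

```python
import numpy as np


class SceneMemory:
    """Global 2D grid memory the agent fills in step by step.

    Illustrative stand-in for the paper's learned global memory;
    NaN marks regions that are still unknown.
    """

    def __init__(self, height, width, rng=None):
        self.map = np.full((height, width), np.nan)
        self.rng = rng or np.random.default_rng(0)

    def integrate_observation(self, top, left, patch):
        """Write a real observation; real data overrides any hallucination."""
        h, w = patch.shape
        self.map[top:top + h, left:left + w] = patch

    def hallucinate_patch(self, top, left, size):
        """Fill unknown cells from a toy prior, loosely conditioned on
        already-known neighbours (the paper uses a learned prior instead)."""
        region = self.map[top:top + size, left:left + size]  # view into map
        unknown = np.isnan(region)
        known_mean = np.nanmean(region) if (~unknown).any() else 0.5
        sample = np.clip(self.rng.normal(known_mean, 0.1, region.shape), 0, 1)
        region[unknown] = sample[unknown]  # keep observed cells untouched
        return region


def incremental_synthesis(memory, trajectory, sensor=None, patch=8):
    """At each step of the agent's path, either integrate a real
    observation (if the sensor returns one) or hallucinate locally."""
    for top, left in trajectory:
        obs = sensor(top, left) if sensor else None
        if obs is not None:
            memory.integrate_observation(top, left, obs)
        else:
            memory.hallucinate_patch(top, left, patch)
    return memory.map


# Usage: with no sensor, the scene is sampled entirely from the "prior";
# with a sensor returning data at every step, the loop reduces to mapping,
# mirroring the abstract's SLAM limit.
mem = SceneMemory(32, 32)
path = [(r, c) for r in range(0, 32, 8) for c in range(0, 32, 8)]
scene = incremental_synthesis(mem, path)  # fully imagined scene
```

Under these assumptions, property (d) of the abstract corresponds to reseeding the generator: different random draws yield different completed scenes from the same set of real observations.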
