

Poster

DreamScene: Layout-Guided 3D Scene Generation

Xiuyu Yang · Yunze Man · Junkun Chen · Yu-Xiong Wang

East Exhibit Hall A-C #2500
[ Project Page ]
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Creating complex 3D scenes tailored to user specifications has been a tedious and challenging task with traditional 3D modeling tools. Although some pioneering works have achieved automatic text-to-3D generation, they are generally limited to small-scale scenes with restricted control over shape and texture. We introduce DreamScene, a novel method for generating detailed indoor scenes that adhere to user-provided spatial layouts and textual descriptions. Central to our approach is a projection-based technique that converts a 3D semantic layout into multi-view 2D proxy maps. We then design a semantic- and depth-conditioned diffusion model to generate multi-view images, which are used to learn a neural radiance field (NeRF) as the final scene representation. Through experimental analysis, we demonstrate that our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures and consistent, realistic geometry. We will open-source our code and processed dataset.
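To make the layout-projection step concrete, below is a minimal, self-contained sketch of how an axis-aligned 3D semantic layout could be rasterized into per-view semantic and depth proxy maps via ray-box intersection. This is not the authors' implementation; the box coordinates, labels, camera parameters, and function names are all illustrative assumptions.

```python
import numpy as np

# Hypothetical scene layout: (label_id, min_corner, max_corner) per object.
# All labels and coordinates are illustrative, not from the paper.
LAYOUT = [
    (1, np.array([-2.0, 0.0, 2.0]), np.array([-0.5, 1.0, 3.5])),  # e.g. "bed"
    (2, np.array([1.0, 0.0, 2.5]), np.array([2.0, 0.8, 3.2])),    # e.g. "desk"
]

def camera_rays(height, width, fov_deg, cam_pos, yaw_deg):
    """Generate pinhole-camera rays for one view, in the world frame."""
    f = 0.5 * width / np.tan(np.radians(fov_deg) / 2.0)
    i, j = np.meshgrid(np.arange(width), np.arange(height))
    dirs = np.stack([(i - width / 2) / f,
                     -(j - height / 2) / f,
                     np.ones_like(i, dtype=float)], axis=-1)
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # yaw about +y
    dirs = dirs @ rot.T
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return cam_pos, dirs

def ray_aabb(origin, dirs, bmin, bmax):
    """Slab-method ray/box intersection; returns entry distance (inf on miss)."""
    inv = 1.0 / np.where(dirs == 0, 1e-9, dirs)
    t0 = (bmin - origin) * inv
    t1 = (bmax - origin) * inv
    tnear = np.minimum(t0, t1).max(axis=-1)
    tfar = np.maximum(t0, t1).min(axis=-1)
    hit = (tnear <= tfar) & (tfar > 0)
    return np.where(hit, np.maximum(tnear, 0.0), np.inf)

def render_proxy_maps(h=120, w=160, fov=60.0,
                      cam_pos=np.array([0.0, 1.2, 0.0]), yaw=0.0):
    """Project the 3D layout into one view's semantic and depth proxy maps."""
    origin, dirs = camera_rays(h, w, fov, cam_pos, yaw)
    depth = np.full((h, w), np.inf)            # inf marks background pixels
    semantic = np.zeros((h, w), dtype=np.int32)  # 0 = empty / background
    for label, bmin, bmax in LAYOUT:
        t = ray_aabb(origin, dirs, bmin, bmax)
        closer = t < depth                     # keep the nearest box per pixel
        depth[closer] = t[closer]
        semantic[closer] = label
    return semantic, depth

if __name__ == "__main__":
    sem, dep = render_proxy_maps()
    print("labels present:", np.unique(sem))
    print("nearest hit distance:", dep[np.isfinite(dep)].min())
```

In the full method, proxy maps like these (rendered from many camera poses) would condition the multi-view diffusion model; this sketch covers only the geometric projection from layout to image space.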
