Emerging neural radiance fields (NeRFs) are a promising scene representation for computer graphics, enabling high-quality 3D reconstruction and novel view synthesis from image observations. However, editing a scene represented by a NeRF is challenging, as the underlying connectionist representations, such as MLPs or voxel grids, are not object-centric or compositional. In particular, it has been difficult to selectively edit specific regions or objects. In this work, we tackle the problem of semantic scene decomposition of NeRFs to enable query-based local editing of the represented 3D scenes. We propose to distill the knowledge of off-the-shelf, self-supervised 2D image feature extractors, such as CLIP-LSeg or DINO, into a 3D feature field optimized in parallel with the radiance field. Given a user-specified query in one of several modalities, such as text, an image patch, or a point-and-click selection, 3D feature fields semantically decompose 3D space without the need for re-training and enable us to semantically select and edit regions in the radiance field. Our experiments validate that the distilled feature fields can transfer recent progress in 2D vision and language foundation models to 3D scene representations, enabling convincing 3D segmentation and selective editing of emerging neural graphics representations.
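To make the mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch of the core idea: a field that outputs density, color, and a semantic feature vector at each 3D point, with features volume-rendered using the same density-derived weights as color, supervised by 2D teacher features (e.g., LSeg or DINO feature maps), and queried by cosine similarity at test time. All names (DistilledFeatureField, render_ray), the architecture, and the hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistilledFeatureField(nn.Module):
    """Toy field mapping a 3D point to density, RGB, and a semantic
    feature vector (the branch distilled from a 2D teacher such as
    LSeg or DINO). Architecture is illustrative, not the paper's."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.rgb_head = nn.Linear(hidden, 3)
        self.feature_head = nn.Linear(hidden, feat_dim)

    def forward(self, xyz):
        h = self.trunk(xyz)
        sigma = F.softplus(self.density_head(h))  # non-negative density
        rgb = torch.sigmoid(self.rgb_head(h))
        feat = self.feature_head(h)
        return sigma, rgb, feat

def render_ray(model, origins, dirs, n_samples=64, near=0.0, far=4.0):
    """Volume-render color AND features along each ray with the same
    density-derived weights: the feature field is rendered like color."""
    t = torch.linspace(near, far, n_samples)                         # (S,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]  # (R,S,3)
    sigma, rgb, feat = model(pts)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)              # (R,S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1),
        dim=-1)[:, :-1]
    w = trans * alpha                                                # weights
    color = (w[..., None] * rgb).sum(dim=1)                          # (R,3)
    feature = (w[..., None] * feat).sum(dim=1)                       # (R,D)
    return color, feature

model = DistilledFeatureField()
ray_o = torch.zeros(8, 3)
ray_d = F.normalize(torch.randn(8, 3), dim=-1)
gt_rgb = torch.rand(8, 3)           # ground-truth pixel colors
teacher_feat = torch.randn(8, 512)  # stand-in 2D teacher features
color, feature = render_ray(model, ray_o, ray_d)

# Distillation: supervise rendered features with the 2D teacher's
# feature map, in parallel with the usual photometric loss.
loss = F.mse_loss(color, gt_rgb) + F.mse_loss(feature, teacher_feat)
loss.backward()

# Query-based selection: compare rendered features against a query
# embedding (e.g., CLIP text features) to get a mask for local editing.
query = F.normalize(torch.randn(512), dim=-1)  # stand-in query embedding
mask = F.cosine_similarity(feature, query[None, :], dim=-1) > 0.5
```

Because the feature branch shares the rendering weights with the color branch, any query that selects rays or points in feature space (the final mask above) directly identifies a region of the radiance field, which is what makes selective, query-based editing possible without re-training.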
Author Information
Sosuke Kobayashi (Preferred Networks)
Eiichi Matsumoto (Preferred Networks, Inc.)
Vincent Sitzmann (MIT)
Vincent is an incoming Assistant Professor at MIT EECS, where he will lead the Scene Representation Group (scenerepresentations.org). Currently, he is a postdoc at MIT's CSAIL with Josh Tenenbaum, Bill Freeman, and Fredo Durand. He finished his Ph.D. at Stanford University. His research interest lies in neural scene representations: the way neural networks learn to represent information about our world. His goal is to allow independent agents to reason about our world given visual observations, such as inferring a complete model of a scene, with information about geometry, materials, lighting, etc., from only a few observations, a task that is simple for humans but currently impossible for AI.
More from the Same Authors
- 2021 Spotlight: Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering »
  Vincent Sitzmann · Semon Rezchikov · Bill Freeman · Josh Tenenbaum · Fredo Durand
- 2021: 3D Neural Scene Representations for Visuomotor Control »
  Yunzhu Li · Shuang Li · Vincent Sitzmann · Pulkit Agrawal · Antonio Torralba
- 2021 Poster: Learning Signal-Agnostic Manifolds of Neural Fields »
  Yilun Du · Katie Collins · Josh Tenenbaum · Vincent Sitzmann
- 2021 Poster: Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering »
  Vincent Sitzmann · Semon Rezchikov · Bill Freeman · Josh Tenenbaum · Fredo Durand
- 2020 Poster: Implicit Neural Representations with Periodic Activation Functions »
  Vincent Sitzmann · Julien N.P. Martel · Alexander Bergman · David Lindell · Gordon Wetzstein
- 2020 Poster: MetaSDF: Meta-Learning Signed Distance Functions »
  Vincent Sitzmann · Eric Chan · Richard Tucker · Noah Snavely · Gordon Wetzstein
- 2020 Oral: Implicit Neural Representations with Periodic Activation Functions »
  Vincent Sitzmann · Julien N.P. Martel · Alexander Bergman · David Lindell · Gordon Wetzstein
- 2019 Poster: Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations »
  Vincent Sitzmann · Michael Zollhoefer · Gordon Wetzstein
- 2019 Oral: Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations »
  Vincent Sitzmann · Michael Zollhoefer · Gordon Wetzstein