Poster
MetaSDF: Meta-Learning Signed Distance Functions
Vincent Sitzmann · Eric Chan · Richard Tucker · Noah Snavely · Gordon Wetzstein

Tue Dec 08 09:00 PM -- 11:00 PM (PST) @ Poster Session 2 #691

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution. Generalizing across shapes with such neural implicit representations amounts to learning priors over the respective function space and enables geometry reconstruction from partial or noisy observations. Existing generalization methods rely on conditioning a neural network on a low-dimensional latent code that is either regressed by an encoder or jointly optimized in the auto-decoder framework. Here, we formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task. We demonstrate that this approach performs on par with auto-decoder based approaches while being an order of magnitude faster at test-time inference. We further demonstrate that the proposed gradient-based method outperforms encoder-decoder based methods that leverage pooling-based set encoders.
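
To make the gradient-based approach concrete, the sketch below shows a MAML-style inner/outer loop for a signed distance network in PyTorch (>= 2.0, for torch.func.functional_call). This is a minimal illustration, not the authors' implementation: the names (sdf_net, inner_loop_adapt, meta_step), the network architecture, and the plain MSE regression loss are all assumptions, and the paper's method may differ in details such as per-parameter learning rates and the exact SDF supervision loss.

    import torch
    import torch.nn as nn
    from torch.func import functional_call

    # Hypothetical MLP mapping 3D coordinates (N, 3) to signed distances (N, 1).
    sdf_net = nn.Sequential(
        nn.Linear(3, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

    def inner_loop_adapt(net, coords, sdf_targets, inner_lr=1e-2, steps=5):
        # Specialize the meta-learned initialization to one shape with a few
        # gradient steps on that shape's (possibly partial or noisy) observations.
        params = {k: v.clone() for k, v in net.named_parameters()}
        for _ in range(steps):
            pred = functional_call(net, params, (coords,))
            loss = torch.nn.functional.mse_loss(pred, sdf_targets)
            # create_graph=True keeps the inner steps differentiable so the
            # outer loop can backpropagate through the adaptation.
            grads = torch.autograd.grad(loss, list(params.values()),
                                        create_graph=True)
            params = {k: p - inner_lr * g
                      for (k, p), g in zip(params.items(), grads)}
        return params  # shape-specific weights after adaptation

    meta_opt = torch.optim.Adam(sdf_net.parameters(), lr=1e-4)

    def meta_step(batch):
        # batch: list of ((ctx_xyz, ctx_sdf), (qry_xyz, qry_sdf)) tuples,
        # one context/query split per shape.
        meta_opt.zero_grad()
        total = 0.0
        for (ctx_xyz, ctx_sdf), (qry_xyz, qry_sdf) in batch:
            adapted = inner_loop_adapt(sdf_net, ctx_xyz, ctx_sdf)
            pred = functional_call(sdf_net, adapted, (qry_xyz,))
            total = total + torch.nn.functional.mse_loss(pred, qry_sdf)
        (total / len(batch)).backward()  # meta-gradient w.r.t. the initialization
        meta_opt.step()

Under this scheme, test-time inference on a new shape is just the few inner-loop gradient steps on its observations, which is why it can be much faster than the auto-decoder framework, where a latent code is optimized from scratch for every shape.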

Author Information

Vincent Sitzmann (MIT)

Vincent is a fourth-year Ph.D. student in the Stanford Computational Imaging Laboratory, advised by Prof. Gordon Wetzstein. His research interest lies in 3D-structure-aware neural scene representations, a novel way for AI to represent information about our 3D world. The goal is to allow AI to perform intelligent 3D reasoning, such as inferring a complete model of a scene, with information on geometry, materials, lighting, etc., from only a few observations, a task that is simple for humans but currently impossible for AI.

Eric Chan (Stanford University)
Richard Tucker (Google)
Noah Snavely (Cornell University and Google AI)
Gordon Wetzstein (Stanford University)
