Recent history has seen a tremendous growth of work exploring implicit representations of geometry and radiance, popularized through Neural Radiance Fields (NeRF). Such works are fundamentally based on an (implicit) {\em volumetric} representation of occupancy, allowing them to model diverse scene structure including translucent objects and atmospheric obscurants. But because the vast majority of real-world scenes are composed of well-defined surfaces, we introduce a {\em surface} analog of such implicit models called Neural Reflectance Surfaces (NeRS). NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing watertight reconstructions. Even more importantly, surface parameterizations allow NeRS to learn (neural) bidirectional surface reflectance functions (BRDFs) that factorize view-dependent appearance into environmental illumination, diffuse color (albedo), and specular “shininess.” Finally, rather than illustrating our results on synthetic scenes or controlled in-the-lab captures, we assemble a novel dataset of multi-view images from online marketplaces for selling goods. Such “in-the-wild” multi-view image sets pose a number of challenges, including a small number of views with unknown/rough camera estimates. We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions. We hope that NeRS serves as a first step toward building scalable, high-quality libraries of real-world shape, materials, and illumination.
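The two ingredients the abstract names can be sketched concretely: (1) a shape network that deforms the unit sphere, so the predicted surface is always a closed, watertight deformation of a sphere, and (2) an appearance model that factorizes radiance into diffuse (albedo) and specular terms under a light direction. The sketch below is illustrative only and not the authors' implementation: the tiny random-weight MLP, the Phong-style specular term, and all function names (`surface_point`, `shade`, `k_s`, `alpha`) are assumptions standing in for NeRS's learned networks and environment-map illumination.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    # Tiny randomly initialized MLP (placeholder for a trained network).
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Shape network: maps a point on the unit sphere S^2 to a 3D offset.
# Because every surface point is the image of a sphere point, the
# reconstruction stays a closed (watertight) deformed sphere.
f_shape = mlp([3, 64, 3])

def surface_point(u):
    u = np.asarray(u, float)
    u = u / np.linalg.norm(u)       # project onto the unit sphere
    return u + forward(f_shape, u)  # deformed surface point in R^3

# Factorized appearance (Phong-style stand-in for a neural BRDF):
#   radiance = albedo * max(0, n.l)  +  k_s * max(0, r.v)^alpha
# with normal n, light direction l, view direction v.
def shade(albedo, n, l, v, k_s=0.5, alpha=8.0):
    albedo = np.asarray(albedo, float)
    n, l, v = (np.asarray(x, float) / np.linalg.norm(x) for x in (n, l, v))
    diffuse = albedo * max(0.0, float(n @ l))
    r = 2.0 * float(n @ l) * n - l  # light direction reflected about n
    specular = k_s * max(0.0, float(r @ v)) ** alpha
    return diffuse + specular
```

In NeRS the analogous pieces are learned jointly from the sparse marketplace views: the sphere deformation gives the surface, and the factorized appearance lets illumination, albedo, and shininess be disentangled rather than baked into a single view-dependent radiance field.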
Author Information
Jason Zhang (Carnegie Mellon University)
Gengshan Yang (Carnegie Mellon University)
Shubham Tulsiani (UC Berkeley)
Deva Ramanan (Carnegie Mellon University)
More from the Same Authors
- 2021 Spotlight: ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
  Gengshan Yang · Deqing Sun · Varun Jampani · Daniel Vlasic · Forrester Cole · Ce Liu · Deva Ramanan
- 2021: Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
  Benjamin Wilson · William Qi · Tanmay Agarwal · John Lambert · Jagjeet Singh · Siddhesh Khandelwal · Bowen Pan · Ratnesh Kumar · Andrew Hartnett · Jhony Kaesemodel Pontes · Deva Ramanan · Peter Carr · James Hays
- 2021: The CLEAR Benchmark: Continual LEArning on Real-World Imagery
  Zhiqiu Lin · Jia Shi · Deepak Pathak · Deva Ramanan
- 2022 Poster: Continual Learning with Evolving Class Ontologies
  Zhiqiu Lin · Deepak Pathak · Yu-Xiong Wang · Deva Ramanan · Shu Kong
- 2022 Poster: Learning to Discover and Detect Objects
  Vladimir Fomenko · Ismail Elezi · Deva Ramanan · Laura Leal-Taixé · Aljosa Osep
- 2021 Poster: No RL, No Simulation: Learning to Navigate without Navigating
  Meera Hahn · Devendra Singh Chaplot · Shubham Tulsiani · Mustafa Mukadam · James Rehg · Abhinav Gupta
- 2021 Poster: ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
  Gengshan Yang · Deqing Sun · Varun Jampani · Daniel Vlasic · Forrester Cole · Ce Liu · Deva Ramanan
- 2019 Poster: Volumetric Correspondence Networks for Optical Flow
  Gengshan Yang · Deva Ramanan
- 2017 Poster: Learning to Model the Tail
  Yu-Xiong Wang · Deva Ramanan · Martial Hebert
- 2017 Poster: Attentional Pooling for Action Recognition
  Rohit Girdhar · Deva Ramanan