In this paper, we address the "dual problem" of multi-view scene reconstruction, in which we utilize single-view images captured under different point lights to learn a neural scene representation. Unlike existing single-view methods, which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene. Instead of relying on multi-view photo-consistency, our method exploits two information-rich monocular cues, namely shading and shadow, to infer scene geometry. Experiments on multiple challenging datasets show that our method is capable of recovering the 3D geometry of a scene, including both visible and invisible parts, from single-view images. Thanks to the neural reflectance field representation, our method is robust to depth discontinuities, and it supports applications like novel-view synthesis and relighting. Our code and model can be found at https://ywq.github.io/s3nerf.
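To illustrate how a shadow cue constrains geometry in a neural-field renderer, here is a minimal sketch (not the paper's implementation): a toy analytic density field (a solid sphere) stands in for the learned network, and the visibility of a surface point under a point light is computed by marching from the point toward the light and accumulating transmittance, as in standard volume rendering. All function names and constants are illustrative assumptions.

```python
import numpy as np

def density(p, center=np.array([0.0, 0.0, 0.0]), radius=0.5):
    """Toy density field standing in for a learned network:
    high density inside a sphere, zero outside."""
    return np.where(np.linalg.norm(p - center, axis=-1) < radius, 50.0, 0.0)

def light_visibility(point, light_pos, n_samples=128):
    """Transmittance from `point` to the light: ~1 = fully lit, ~0 = shadowed."""
    t = np.linspace(1e-3, 1.0, n_samples)[:, None]  # fractional steps along the segment
    seg = light_pos - point
    samples = point + t * seg                        # march from the point toward the light
    step = np.linalg.norm(seg) / n_samples           # length of each step
    tau = density(samples).sum() * step              # accumulated optical depth
    return np.exp(-tau)

light = np.array([2.0, 2.0, 2.0])
lit_point = np.array([1.0, 1.0, 1.0])          # clear line of sight to the light
shadowed_point = np.array([-1.0, -1.0, -1.0])  # the sphere blocks the light

print(light_visibility(lit_point, light))       # near 1.0 (lit)
print(light_visibility(shadowed_point, light))  # near 0.0 (in shadow)
```

Because the visibility is a differentiable function of the density field, a photometric loss on shadowed versus lit pixels can propagate gradients into occluding geometry that is not directly visible from the camera, which is how shadow cues inform the invisible parts of the scene.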
Author Information
Wenqi Yang (The University of Hong Kong)
Guanying Chen (The Chinese University of Hong Kong)
Chaofeng Chen (Nanyang Technological University)
Zhenfang Chen (The University of Hong Kong)
Kwan-Yee K. Wong (The University of Hong Kong)