

Poster in Workshop: Learning-Based Solutions for Inverse Problems

Multimodal Neural Surface Reconstruction: Recovering the Geometry and Appearance of 3D Scenes from Events and Grayscale Images

Sazan Mahbub · Brandon Feng · Chris Metzler

Keywords: [ deep learning ] [ disentangled learning ] [ multimodal data integration ] [ neural surface reconstruction ]


Abstract:

Event cameras offer high frame rates, minimal motion blur, and excellent dynamic range. As a result, they excel at reconstructing the geometry of 3D scenes. However, their measurements contain no absolute intensity information, which makes accurately reconstructing the appearance of a 3D scene from events alone challenging. In this work, we develop a multimodal neural 3D scene reconstruction framework that simultaneously reconstructs scene geometry from events and scene appearance from grayscale images. Our framework, which is based on neural surface representations rather than the neural radiance fields used in previous works, reconstructs both the structure and appearance of 3D scenes more accurately than existing unimodal reconstruction methods.
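
To make the idea concrete, below is a minimal PyTorch sketch of the kind of objective such a framework could optimize. It is an illustration under assumptions, not the authors' implementation: the NeuralSurface module, the linearized event model (rendered log-intensity change matched to the signed event count scaled by a contrast threshold), the network sizes, and the loss weight lam are all hypothetical choices made for this example.

    # Hypothetical sketch of a multimodal neural-surface objective:
    # geometry supervised by events, appearance by grayscale images.
    # Not the authors' code; all architecture and loss choices are assumed.
    import torch
    import torch.nn as nn

    class NeuralSurface(nn.Module):
        """SDF + grayscale appearance MLPs over 3D points (NeuS-style)."""
        def __init__(self, hidden: int = 256):
            super().__init__()
            self.sdf = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),                # signed distance to the surface
            )
            self.color = nn.Sequential(
                nn.Linear(3 + 3, hidden), nn.ReLU(), # 3D point + unit view direction
                nn.Linear(hidden, 1), nn.Sigmoid(),  # grayscale radiance in [0, 1]
            )

        def forward(self, x, view_dir):
            return self.sdf(x), self.color(torch.cat([x, view_dir], dim=-1))

    def multimodal_loss(logI_t0, logI_t1, event_map, gray_pred, gray_gt,
                        contrast_threshold: float = 0.2, lam: float = 0.1):
        """Event-consistency term plus grayscale photometric term.

        Events report log-intensity changes, so the rendered log-intensity
        difference between two timestamps is matched to the accumulated
        signed event count times an (assumed) contrast threshold, while the
        grayscale term anchors absolute appearance.
        """
        event_loss = torch.mean(
            (logI_t1 - logI_t0 - contrast_threshold * event_map) ** 2)
        photo_loss = torch.mean((gray_pred - gray_gt) ** 2)
        return event_loss + lam * photo_loss

    # Shape check on random inputs (per-pixel losses on a batch of rays).
    model = NeuralSurface()
    pts = torch.randn(1024, 3)
    dirs = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    sdf_vals, gray = model(pts, dirs)

This split of supervision mirrors the abstract's thesis: because events encode only relative brightness changes, the event term can constrain geometry without absolute intensities, while the grayscale photometric term supplies the absolute appearance that events lack.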
