

Poster

SS3DM: Benchmarking Street-View Surface Reconstruction with a Synthetic 3D Mesh Dataset

Yubin Hu · Kairui Wen · Heng Zhou · Xiaoyang Guo · Yong-jin Liu

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Reconstructing accurate 3D surfaces for street-view scenarios is vital for applications such as digital entertainment and autonomous driving simulation. However, existing street-view datasets, including KITTI, Waymo, and nuScenes, offer only noisy LiDAR points as ground-truth data for geometric evaluation of reconstructed surfaces. These geometric ground truths often lack the precision needed to evaluate surface positions and provide no data for assessing surface normals. To overcome these challenges, we introduce the SS3DM dataset, which consists of precise $\textbf{S}$ynthetic $\textbf{S}$treet-view $\textbf{3D}$ $\textbf{M}$esh models exported from the CARLA simulator. These mesh models enable accurate position evaluation and include normal vectors for assessing surface normals. To simulate the input data of realistic driving scenarios for 3D reconstruction, we virtually drive a car equipped with six RGB cameras and five LiDAR sensors through various outdoor scenes. Based on this dataset, we establish a benchmark for state-of-the-art surface reconstruction methods, offering a comprehensive evaluation of the associated challenges. The SS3DM dataset, data exportation plugin, and benchmark code will be made publicly available.
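The abstract describes evaluating reconstructed surfaces against ground-truth meshes on both position and surface normals. The sketch below is not the authors' released benchmark code; it is a minimal illustration of how such an evaluation could be set up, assuming the `trimesh` library and hypothetical file paths and metric choices (mean point-to-mesh distance and mean normal angular error).

```python
# Minimal sketch of position/normal evaluation against a ground-truth mesh.
# Not SS3DM's official benchmark code; paths, sample counts, and metrics are assumptions.
import numpy as np
import trimesh


def evaluate_surface(pred_path, gt_path, n_samples=100_000):
    pred = trimesh.load(pred_path, force='mesh')
    gt = trimesh.load(gt_path, force='mesh')

    # Sample points on the predicted surface and remember which faces they came from,
    # so we can look up the predicted normals at those points.
    pts, face_idx = trimesh.sample.sample_surface(pred, n_samples)
    pred_normals = pred.face_normals[face_idx]

    # Position error: distance from each sampled point to the closest point on the
    # ground-truth mesh surface.
    closest, dist, tri_idx = trimesh.proximity.closest_point(gt, pts)

    # Normal error: angle between predicted and ground-truth face normals at the
    # corresponding closest faces (absolute cosine ignores normal orientation).
    gt_normals = gt.face_normals[tri_idx]
    cos = np.abs(np.sum(pred_normals * gt_normals, axis=1)).clip(0.0, 1.0)
    normal_err_deg = np.degrees(np.arccos(cos))

    return {
        'mean_dist': float(dist.mean()),
        'mean_normal_err_deg': float(normal_err_deg.mean()),
    }


if __name__ == '__main__':
    # Hypothetical filenames; SS3DM's actual data layout and metrics may differ.
    print(evaluate_surface('reconstruction.ply', 'ss3dm_gt_mesh.ply'))
```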
