

Poster

Self-supervised surround-view depth estimation with volumetric feature fusion

Jung-Hee Kim · Junhwa Hur · Tien Phuoc Nguyen · Seong-Gyun Jeong

Hall J (level 1) #901

Keywords: [ Monocular depth ] [ Surround-view depth estimation ] [ Depth synthesis ] [ Self-supervised learning ]


Abstract:

We present a self-supervised depth estimation approach that uses unified volumetric feature fusion for surround-view images. Given a set of surround-view images, our method constructs a volumetric feature map by extracting image feature maps from the surround-view images and fusing them into a shared, unified 3D voxel space. The volumetric feature map can then be used to estimate a depth map at each surround view by projecting it into the corresponding image coordinates. Because each volumetric feature encodes 3D information at its voxel location, our method can also synthesize depth maps at arbitrarily rotated viewpoints by projecting the volumetric feature map into those target viewpoints. Furthermore, assuming static camera extrinsics in the multi-camera system, we propose to estimate a canonical camera motion from the volumetric feature map. Our method leverages 3D spatio-temporal context to learn metric-scale depth and the canonical camera motion in a self-supervised manner. Our method outperforms prior work on the DDAD and nuScenes datasets, in particular estimating more accurate metric-scale depth and more consistent depth between neighboring views.
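The core fusion step described above, back-projecting per-camera feature maps into a shared voxel grid, can be illustrated with a minimal sketch. This is not the authors' implementation; the tensor shapes, grid bounds, and helper names below are illustrative assumptions, and the sketch only shows how features from multiple calibrated cameras could be averaged into one canonical volume.

```python
# Minimal sketch (not the authors' code) of fusing per-camera feature maps into a
# shared voxel volume by back-projection. Shapes, bounds, and names are assumptions.
import torch
import torch.nn.functional as F


def make_voxel_grid(bounds, resolution):
    """Return (V, 3) voxel centers in the canonical (shared) frame."""
    axes = [torch.linspace(lo, hi, n) for (lo, hi), n in zip(bounds, resolution)]
    xs, ys, zs = torch.meshgrid(*axes, indexing="ij")
    return torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)


def fuse_into_volume(feats, intrinsics, extrinsics, voxels):
    """Fuse per-camera feature maps (N, C, Hf, Wf) into one voxel volume (V, C).

    intrinsics: (N, 3, 3) camera matrices scaled to the feature resolution.
    extrinsics: (N, 4, 4) canonical-frame -> camera transforms (assumed static).
    """
    N, C, Hf, Wf = feats.shape
    V = voxels.shape[0]
    volume = torch.zeros(V, C)
    counts = torch.zeros(V, 1)
    hom = torch.cat([voxels, torch.ones(V, 1)], dim=-1)        # (V, 4) homogeneous
    for i in range(N):
        cam_pts = (extrinsics[i] @ hom.T).T[:, :3]             # voxels in camera frame
        in_front = cam_pts[:, 2] > 0.1
        pix = (intrinsics[i] @ cam_pts.T).T                    # pinhole projection
        pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)
        # Normalize pixel coordinates to [-1, 1] for grid_sample; keep visible voxels.
        u = 2.0 * pix[:, 0] / (Wf - 1) - 1.0
        v = 2.0 * pix[:, 1] / (Hf - 1) - 1.0
        visible = in_front & (u.abs() <= 1) & (v.abs() <= 1)
        grid = torch.stack([u, v], dim=-1).view(1, V, 1, 2)
        sampled = F.grid_sample(feats[i : i + 1], grid, align_corners=True)
        sampled = sampled.view(C, V).T                         # (V, C) per-camera features
        volume[visible] += sampled[visible]
        counts[visible] += 1.0
    return volume / counts.clamp(min=1.0)                      # average overlapping views


# Toy usage with random tensors standing in for encoder features and calibration.
feats = torch.randn(6, 64, 28, 44)                 # 6 surround cameras
K = torch.eye(3).repeat(6, 1, 1)
K[:, 0, 0] = K[:, 1, 1] = 30.0                     # toy focal length at feature scale
K[:, 0, 2], K[:, 1, 2] = 22.0, 14.0
T = torch.eye(4).repeat(6, 1, 1)                   # placeholder extrinsics for the demo
voxels = make_voxel_grid([(-10, 10), (-10, 10), (0, 4)], (64, 64, 8))
volume = fuse_into_volume(feats, K, T, voxels)     # (64*64*8, 64) fused volumetric features
```

In the same spirit, depth at a given (or arbitrarily rotated) viewpoint would be obtained by projecting this fused volume into that camera's coordinates and decoding it, which is what allows the method to synthesize depth at viewpoints not covered by any single camera.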
