Poster
Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #901
Self-Supervised Pretraining for Large-Scale Point Clouds
Zaiwei Zhang · Min Bai · Erran Li Li

Pretraining on large unlabeled datasets has been shown to improve downstream task performance on many computer vision tasks, such as 2D object detection and video classification. However, for large-scale 3D scenes, such as outdoor LiDAR point clouds, pretraining is not widely used. Due to the special data characteristics of large 3D point clouds, 2D pretraining frameworks tend not to generalize well. In this paper, we propose a new self-supervised pretraining method that targets large-scale 3D scenes. We pretrain commonly used point-based and voxel-based model architectures and show the transfer learning performance on 3D object detection and semantic segmentation. We demonstrate the effectiveness of our approach on both dense 3D indoor point clouds and sparse outdoor LiDAR point clouds.
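
To make the pretrain-then-transfer setup concrete, below is a minimal, generic sketch of a self-supervised pretraining step for a point-cloud backbone. The contrastive (InfoNCE) objective, the augmentations, and the `backbone`/`proj_head` interfaces are illustrative assumptions for exposition only, not the specific method proposed in the paper.

```python
# Hypothetical sketch of self-supervised pretraining for a point-cloud encoder.
# The contrastive loss and augmentations are assumptions, not the paper's method.
import math
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


def random_augment(points: torch.Tensor) -> torch.Tensor:
    """Random rotation about the up-axis plus small jitter (assumed augmentations).

    points: (B, N, 3) batch of point clouds.
    """
    theta = random.uniform(0.0, 2 * math.pi)
    rot = torch.tensor(
        [[math.cos(theta), -math.sin(theta), 0.0],
         [math.sin(theta),  math.cos(theta), 0.0],
         [0.0,              0.0,             1.0]],
        dtype=points.dtype, device=points.device)
    return points @ rot.T + 0.01 * torch.randn_like(points)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss between per-scene embeddings of two augmented views."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature                       # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)    # positives on the diagonal
    return F.cross_entropy(logits, labels)


def pretrain_step(backbone: nn.Module, proj_head: nn.Module,
                  points: torch.Tensor, optimizer: torch.optim.Optimizer) -> float:
    """One self-supervised step: embed two views of each scene and pull them together.

    backbone: any point- or voxel-based encoder mapping (B, N, 3) -> (B, D).
    proj_head: small MLP projecting encoder features for the contrastive loss.
    """
    view1, view2 = random_augment(points), random_augment(points)
    z1 = proj_head(backbone(view1))
    z2 = proj_head(backbone(view2))
    loss = info_nce(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pretraining, the `proj_head` would typically be discarded and the backbone weights used to initialize a 3D detection or semantic segmentation model before fine-tuning on labeled data, which is the transfer-learning setting the abstract describes.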