Poster
3D Siamese Voxel-to-BEV Tracker for Sparse Point Clouds
Le Hui · Lingpeng Wang · Mingmei Cheng · Jin Xie · Jian Yang

Fri Dec 10 08:30 AM -- 10:00 AM (PST)

3D object tracking in point clouds remains a challenging problem due to the sparsity of LiDAR points in dynamic environments. In this work, we propose a Siamese voxel-to-BEV tracker, which significantly improves tracking performance in sparse 3D point clouds. Specifically, it consists of a Siamese shape-aware feature learning network and a voxel-to-BEV target localization network. The Siamese shape-aware feature learning network captures the 3D shape information of the object to learn discriminative features, so that the potential target can be distinguished from the background in sparse point clouds. To this end, we first perform template feature embedding to embed the template's features into the potential target, and then generate a dense 3D shape to characterize the shape information of the potential target. To localize the tracked target, the voxel-to-BEV target localization network regresses the target's 2D center and z-axis center from the dense bird's eye view (BEV) feature map in an anchor-free manner. Concretely, we compress the voxelized point cloud along the z-axis through max pooling to obtain a dense BEV feature map, where the regression of the 2D center and the z-axis center can be performed more effectively. Extensive evaluation on the KITTI tracking dataset shows that our method outperforms the current state-of-the-art methods by a large margin. Code is available at https://github.com/fpthink/V2B.
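The voxel-to-BEV compression described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' released code: the feature-volume shape and channel count are made up for the example, and the only operation taken from the abstract is the max pooling along the z-axis that collapses a voxelized feature volume into a dense BEV feature map.

```python
import numpy as np

# Assumed layout for this sketch: a voxelized point-cloud feature volume
# of shape (C, Z, Y, X), i.e. C feature channels over a Z x Y x X grid.
C, Z, Y, X = 4, 8, 16, 16
rng = np.random.default_rng(0)
voxel_features = rng.random((C, Z, Y, X)).astype(np.float32)

# Max pooling along the z-axis collapses the height dimension, producing
# a dense (C, Y, X) bird's-eye-view (BEV) feature map on which 2D-center
# regression can be performed.
bev_map = voxel_features.max(axis=1)

print(bev_map.shape)  # -> (4, 16, 16)
```

In a tracker the pooling would typically run inside the network (e.g. as a tensor `max` over the height dimension), but the shape transformation is the same: the sparse 3D volume becomes a dense 2D map.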

Author Information

Le Hui (Nanjing University of Science and Technology)
Lingpeng Wang (Nanjing University of Science and Technology)
Mingmei Cheng (Nanjing University of Science and Technology)
Jin Xie (Department of Computer Science, Nanjing University of Science and Technology)
Jian Yang (Nanjing University of Science and Technology)
