PerspectiveNet: 3D Object Detection from a Single RGB Image via Perspective Points
Siyuan Huang · Yixin Chen · Tao Yuan · Siyuan Qi · Yixin Zhu · Song-Chun Zhu

Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #85

Detecting 3D objects from a single RGB image is intrinsically ambiguous; it therefore requires appropriate prior knowledge and intermediate representations as constraints to reduce the uncertainty and improve the consistency between the 2D image plane and the 3D world coordinate frame. To address this challenge, we propose to adopt perspective points as a new intermediate representation for 3D object detection, defined as the 2D projections of the local Manhattan 3D keypoints that locate an object; these perspective points satisfy the geometric constraints imposed by perspective projection. We further devise PerspectiveNet, an end-to-end trainable model that simultaneously detects the 2D bounding box, the 2D perspective points, and the 3D object bounding box for each object from a single RGB image. PerspectiveNet offers three unique advantages: (i) 3D object bounding boxes are estimated from perspective points, bridging the gap between 2D and 3D bounding boxes without the need for category-specific 3D shape priors; (ii) it predicts the perspective points with a template-based method, and a perspective loss is formulated to enforce the perspective constraints; (iii) it maintains the consistency between the 2D perspective points and the 3D bounding boxes via a differentiable projective function. Experiments on the SUN RGB-D dataset show that the proposed method significantly outperforms existing RGB-based approaches for 3D object detection.
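To make the notion of perspective points concrete, the following is a minimal sketch (not the authors' implementation) of how the 2D projections of a 3D bounding box's keypoints are obtained under a pinhole camera model; the helper names (`box_corners`, `project`) and the toy intrinsics are hypothetical:

```python
import numpy as np

def box_corners(center, size):
    """Return the 8 corners of an axis-aligned 3D box (hypothetical helper)."""
    half = np.asarray(size, dtype=float) / 2.0
    signs = np.array([[sx, sy, sz]
                      for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    return np.asarray(center, dtype=float) + signs * half

def project(points_3d, K):
    """Perspective-project Nx3 camera-frame points to Nx2 pixel coordinates."""
    p = (K @ points_3d.T).T        # apply camera intrinsics
    return p[:, :2] / p[:, 2:3]    # divide by depth (the perspective step)

# Toy intrinsic matrix and a 1 m cube 5 m in front of the camera
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
corners = box_corners(center=(0.0, 0.0, 5.0), size=(1.0, 1.0, 1.0))
persp_pts = project(corners, K)    # 8 perspective points in the image plane
```

Because the division by depth is differentiable, a projection of this form can back-propagate a 2D consistency loss to the 3D box parameters, which is the role the abstract describes for the differentiable projective function.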

Author Information

Siyuan Huang (University of California, Los Angeles)
Yixin Chen (UCLA)
Tao Yuan (UCLA)
Siyuan Qi (UCLA)
Yixin Zhu (University of California, Los Angeles)
Song-Chun Zhu (UCLA)