

Poster

EfficientCAPER: An End-to-End Framework for Fast and Robust Category-Level Articulated Object Pose Estimation

Xinyi Yu · Haonan Jiang · Liu Liu · Lin Wu · Li Zhang · Linlin Ou

East Exhibit Hall A-C #4810
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Articulated objects are ubiquitous in everyday life. Category-level pose estimation for articulated objects remains a significant challenge due to their inherent complexity and diverse kinematic structures. Existing methods typically suffer from insufficient modeling of kinematic constraints, sensitivity to self-occlusion, and reliance on costly post-hoc optimization. In this paper, we propose EfficientCAPER, an end-to-end Category-level Articulated object Pose EstimatoR that eliminates the need for optimization-based post-processing and exploits the kinematic structure for joint-centric pose modeling, thereby improving both efficiency and applicability. Given a partial point cloud as input, EfficientCAPER first estimates the pose of the free part of an articulated object using a decoupled rotation representation. Next, we canonicalize the input point cloud and estimate the constrained parts' poses by predicting joint parameters and joint states in place of full per-part poses. Evaluations on three diverse datasets, ArtImage, ReArtMix, and RobotArm, demonstrate EfficientCAPER's effectiveness and its ability to generalize to real-world scenarios. The framework achieves strong static pose estimation performance for articulated objects, advancing category-level pose estimation. Code will be made publicly available.
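The abstract describes a two-stage, joint-centric pipeline: first estimate the free part's pose with a decoupled rotation representation, then recover each constrained part's pose from predicted joint parameters (axis, pivot) and a joint state (e.g., a revolute angle) instead of regressing it directly. The following minimal NumPy sketch illustrates that idea for a single revolute joint; it assumes a two-axis (6D) rotation parameterization and uses hypothetical function names, and it is a sketch of the concept rather than the paper's released implementation.

```python
import numpy as np

def rotation_from_two_axes(a1, a2):
    # Gram-Schmidt orthogonalization of two predicted 3-vectors into a
    # rotation matrix (a common continuous "decoupled" 6D rotation
    # representation; the paper's exact parameterization may differ).
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)

def axis_angle_rotation(axis, angle):
    # Rodrigues' formula: rotation by `angle` about a unit `axis`.
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def constrained_part_pose(R_free, t_free, joint_axis, joint_pivot, joint_state):
    # Joint-centric modeling: the constrained part's pose follows from
    # the free part's pose plus the joint parameters (axis and pivot,
    # expressed in the free part's canonical frame) and the joint state.
    R_joint = axis_angle_rotation(joint_axis, joint_state)
    R_child = R_free @ R_joint
    t_child = R_free @ (joint_pivot - R_joint @ joint_pivot) + t_free
    return R_child, t_child

# Example: a laptop-like object whose lid rotates 45 degrees about a
# hinge through (0.1, 0, 0) along the z-axis of the base part's frame.
R_free = rotation_from_two_axes(np.array([1.0, 0.0, 0.0]),
                                np.array([0.0, 1.0, 0.0]))
R_child, t_child = constrained_part_pose(
    R_free, np.zeros(3),
    joint_axis=np.array([0.0, 0.0, 1.0]),
    joint_pivot=np.array([0.1, 0.0, 0.0]),
    joint_state=np.pi / 4)
```

Because the constrained part's pose is derived analytically from the joint prediction, no iterative optimization is needed at inference time, which is the efficiency argument the abstract makes.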
