

NAP: Neural 3D Articulated Object Prior

Jiahui Lei · Congyue Deng · William B Shen · Leonidas Guibas · Kostas Daniilidis

Great Hall & Hall B1+B2 (level 1) #2015
Wed 13 Dec 3 p.m. PST — 5 p.m. PST


We propose Neural 3D Articulated object Prior (NAP), the first 3D deep generative model for synthesizing 3D articulated object models. Despite extensive research on generating static 3D objects, compositions, and scenes, there are hardly any approaches that capture the distribution of articulated objects, a common object category for human and robot interaction. To generate articulated objects, we first design a novel articulation tree/graph parameterization and then apply a denoising diffusion probabilistic model over this representation, so that articulated objects can be generated by denoising random complete graphs. To capture both the geometry and the motion structure, whose distributions affect each other, we design a graph denoising network that learns the reverse diffusion process. We also propose a novel distance that adapts widely used 3D generation metrics to this new task for evaluating generation quality. Experiments demonstrate the strong performance of our method on articulated object generation as well as on conditioned-generation applications, including Part2Motion, PartNet-Imagination, Motion2Part, and GAPart2Object.
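To make the pipeline concrete, here is a minimal toy sketch of diffusion over a complete-graph object encoding: each node carries a per-part feature vector and each unordered part pair carries an edge vector (e.g. joint parameters), and the forward process jointly noises both. All names, dimensions, and the cosine schedule choice are illustrative assumptions; NAP's actual parameterization and its learned graph denoising network are not reproduced here, so the noise predictor is left as a caller-supplied function.

```python
import numpy as np

def cosine_alphas_bar(T, s=0.008):
    # Cumulative signal-retention schedule (cosine schedule; an assumed choice)
    t = np.linspace(0, T, T + 1)
    f = np.cos((t / T + s) / (1 + s) * np.pi / 2) ** 2
    return f[1:] / f[0]

class ArticulationGraphDiffusion:
    """Toy DDPM over node and edge features of a complete graph.

    Nodes hold hypothetical per-part latents, edges hold hypothetical
    joint parameters for each unordered part pair. Illustrative only.
    """
    def __init__(self, n_parts, node_dim, edge_dim, T=100, seed=0):
        self.rng = np.random.default_rng(seed)
        n_edges = n_parts * (n_parts - 1) // 2  # complete graph
        self.shapes = ((n_parts, node_dim), (n_edges, edge_dim))
        self.abar = cosine_alphas_bar(T)
        self.T = T

    def forward_noise(self, x0, t):
        # q(x_t | x_0): jointly noise node and edge features
        a = self.abar[t]
        eps = tuple(self.rng.standard_normal(x.shape) for x in x0)
        xt = tuple(np.sqrt(a) * x + np.sqrt(1 - a) * e
                   for x, e in zip(x0, eps))
        return xt, eps

    def predict_x0(self, xt, t, predict_eps):
        # Denoising read-out: recover x_0 from x_t and a noise estimate
        a = self.abar[t]
        eps = predict_eps(xt, t)
        return tuple((x - np.sqrt(1 - a) * e) / np.sqrt(a)
                     for x, e in zip(xt, eps))
```

With an oracle noise predictor (returning the noise actually used in the forward pass), `predict_x0` recovers the clean graph exactly; a trained graph network would replace that oracle, and iterating the read-out from a random complete graph yields a sample.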
