

Poster

Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation

István Sárándi · Gerard Pons-Moll

East Exhibit Hall A-C #1304
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

With the explosive growth of available training data, single-image 3D human modeling is approaching a transition to a data-centric paradigm. A key to successfully exploiting data scale is to design flexible models that can be supervised from various heterogeneous data sources produced by different researchers or vendors. To this end, we propose a simple yet powerful paradigm for seamlessly unifying different human pose and shape-related tasks and datasets. Our formulation is centered on the ability, both at training and test time, to query any point of the human volume and obtain its estimated location in 3D. We achieve this by learning a continuous neural field of body point localizer functions, each of which is a differently parameterized 3D heatmap-based convolutional point localizer (detector). For generating parametric output, we propose an efficient post-processing step for fitting SMPL-family body models to nonparametric joint and vertex predictions. With this approach, we can naturally exploit differently annotated data sources, including mesh, 2D/3D skeleton and dense pose, without having to convert between them, and thereby train large-scale 3D human mesh and skeleton estimation models that outperform the state of the art on several public benchmarks, including 3DPW, EMDB, EHF, SSP-3D and AGORA, by a considerable margin. We release our code and models to foster downstream research.
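To make the query mechanism concrete, the following is a minimal toy sketch (not the authors' implementation) of the core idea: a continuous field maps any canonical body point to the parameters of a point-specific localizer, which scores a volumetric feature map and reads out a 3D location via soft-argmax. The feature volume, the field parameterization (a single linear-plus-tanh layer here), and all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy volumetric feature map (D, H, W, C), standing in for the output
# of a convolutional image backbone. Purely illustrative.
D = H = W = 8
C = 16
features = rng.normal(size=(D, H, W, C))

# Hypothetical "localizer field": maps a canonical body point p in [0, 1]^3
# to the weights of a point-specific linear localizer head.
W_field = rng.normal(size=(3, C)) * 0.5

def localizer_weights(p):
    # One weight vector per queried point -> a continuum of localizers.
    return np.tanh(p @ W_field)          # shape (C,)

def soft_argmax_3d(scores):
    """Expected 3D position under the softmax of a (D, H, W) score volume."""
    w = np.exp(scores - scores.max())
    w /= w.sum()
    zs, ys, xs = np.meshgrid(
        np.linspace(0, 1, D), np.linspace(0, 1, H), np.linspace(0, 1, W),
        indexing="ij")
    grid = np.stack([xs, ys, zs], axis=-1)          # (D, H, W, 3)
    return (w[..., None] * grid).sum(axis=(0, 1, 2))  # (3,)

def localize(p):
    """Query an arbitrary canonical point p -> estimated 3D location."""
    heatmap = features @ localizer_weights(p)       # (D, H, W) scores
    return soft_argmax_3d(heatmap)

pos = localize(np.array([0.2, 0.5, 0.9]))
```

Because `localize` accepts any continuous point, the same model can be supervised on mesh vertices, skeleton joints, or dense-pose points without converting between formats, which is the flexibility the abstract highlights.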
