Deep Network for the Integrated 3D Sensing of Multiple People in Natural Images
Andrei Zanfir · Elisabeta Marinoiu · Mihai Zanfir · Alin-Ionut Popa · Cristian Sminchisescu

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #120

We present MubyNet -- a feed-forward, multitask, bottom-up system for the integrated localization, 3d pose, and shape estimation of multiple people in monocular images. The challenge is the formal modeling of a problem that intrinsically requires both discrete and continuous computation, e.g., grouping people vs. predicting 3d pose. The model identifies human body structures (joints and limbs) in images, groups them based on 2d and 3d information fused using learned scoring functions, and optimally aggregates such responses into partial or complete 3d human skeleton hypotheses under kinematic tree constraints, without knowing in advance the number of people in the scene or their visibility relations. We design a multi-task deep neural network with differentiable stages, in which the person grouping problem is formulated as an integer program based on learned body part scores parameterized by both 2d and 3d information. This avoids the suboptimality of separate 2d and 3d reasoning, since grouping is performed on the combined representation. The final stage of 3d pose and shape prediction relies on a learned attention process that optimally integrates information from different human body parts. State-of-the-art results are obtained on large-scale datasets such as Human3.6M and Panoptic, and qualitatively by reconstructing the 3d shape and pose of multiple people, under occlusion, in difficult monocular images.
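To make the grouping formulation concrete, here is a minimal sketch (not the authors' implementation) of the binary integer program the abstract describes: candidate limbs connecting detected joints carry learned scores, and grouping selects the score-maximizing subset subject to the kinematic constraint that each joint joins at most one limb of a given type. The scores and the exhaustive solver below are illustrative placeholders; the paper learns the scores from fused 2d/3d features and solves the program at scale.

```python
# Toy binary integer program for limb grouping (illustrative sketch).
# scores[i][j] is a hypothetical learned score for connecting shoulder
# detection i to elbow detection j; the real system derives such scores
# from fused 2d/3d deep features.
from itertools import product

scores = [[0.9, 0.1],
          [0.2, 0.8]]

def best_grouping(scores):
    n, m = len(scores), len(scores[0])
    best_val, best_sel = float("-inf"), None
    # One binary variable per candidate limb (i, j); the toy problem is
    # small enough to enumerate all 2^(n*m) assignments exactly.
    for x in product((0, 1), repeat=n * m):
        sel = [(i, j) for i in range(n) for j in range(m) if x[i * m + j]]
        # Kinematic constraint: each joint belongs to at most one limb.
        if any(sum(1 for a, _ in sel if a == i) > 1 for i in range(n)):
            continue
        if any(sum(1 for _, b in sel if b == j) > 1 for j in range(m)):
            continue
        val = sum(scores[i][j] for i, j in sel)
        if val > best_val:
            best_val, best_sel = val, sel
    return best_val, sorted(best_sel)

val, sel = best_grouping(scores)
print(val, sel)  # the optimal assignment pairs each shoulder with its elbow
```

In the full model, the same principle extends over all limb types and an unknown number of people, and the program is solved jointly rather than by enumeration.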

Author Information

Andrei Zanfir (Institute of Mathematics of the Romanian Academy)
Elisabeta Marinoiu (IMAR)
Mihai Zanfir (IMAR)
Alin-Ionut Popa (IMAR)
Cristian Sminchisescu (LTH)
