Poster

HAP: Structure-Aware Masked Image Modeling for Human-Centric Perception

Junkun Yuan · Xinyu Zhang · Hao Zhou · Jian Wang · Zhongwei Qiu · Zhiyin Shao · Shaofeng Zhang · Sifan Long · Kun Kuang · Kun Yao · Junyu Han · Errui Ding · Lanfen Lin · Fei Wu · Jingdong Wang

Great Hall & Hall B1+B2 (level 1) #805

Abstract:

Model pre-training is essential in human-centric perception. In this paper, we first introduce masked image modeling (MIM) as a pre-training approach for this task. Upon revisiting the MIM training strategy, we observe that human structure priors offer significant potential. Motivated by this insight, we further incorporate an intuitive human structure prior, human parts, into pre-training. Specifically, we use this prior to guide the mask sampling process: image patches corresponding to human part regions are given high priority to be masked out. This encourages the model to concentrate on body structure information during pre-training, yielding substantial benefits across a range of human-centric perception tasks. To further capture human characteristics, we propose a structure-invariant alignment loss that enforces different masked views of the same image, guided by the human part prior, to be closely aligned. We term the entire method HAP. HAP uses a plain ViT as the encoder, yet establishes new state-of-the-art performance on 11 human-centric benchmarks and an on-par result on one more. For example, HAP achieves 78.1% mAP on MSMT17 for person re-identification, 86.54% mA on PA-100K for pedestrian attribute recognition, 78.2% AP on MS COCO for 2D pose estimation, and 56.0 PA-MPJPE on 3DPW for 3D pose and shape estimation.
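Since this page does not include code, the following is a minimal sketch of the two ideas the abstract describes: part-prior-guided mask sampling and a structure-invariant alignment loss between two masked views. The function names (part_guided_mask, structure_invariant_alignment), the weighting knob part_bias, and the mean-pooled negative-cosine formulation are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def part_guided_mask(part_map: torch.Tensor, mask_ratio: float = 0.75,
                     part_bias: float = 2.0) -> torch.Tensor:
    """Sample a patch mask that favors human-part regions.

    part_map: (N,) binary tensor; 1 where a patch overlaps a human part
              (e.g., derived from pose keypoints). part_bias is an assumed
              knob for how strongly part patches are prioritized.
    Returns a boolean mask of shape (N,), True = patch is masked out.
    """
    n = part_map.numel()
    n_mask = int(n * mask_ratio)
    # Part patches get a higher sampling weight, so they are masked first.
    weights = 1.0 + part_bias * part_map.float()
    idx = torch.multinomial(weights, n_mask, replacement=False)
    mask = torch.zeros(n, dtype=torch.bool)
    mask[idx] = True
    return mask

def structure_invariant_alignment(z1: torch.Tensor,
                                  z2: torch.Tensor) -> torch.Tensor:
    """Pull together features of two differently masked views of one image.

    z1, z2: (B, N_visible, D) encoder outputs for the two masked views.
    Uses mean pooling and negative cosine similarity (an assumption; the
    paper may pool or compare features differently).
    """
    p1 = F.normalize(z1.mean(dim=1), dim=-1)
    p2 = F.normalize(z2.mean(dim=1), dim=-1)
    return -(p1 * p2).sum(dim=-1).mean()

# Usage sketch: mask two views of the same image with the part prior, encode
# the visible patches, then add the alignment loss to the MIM reconstruction
# loss. Tensors below are stand-ins for a real ViT encoder's outputs.
part_map = (torch.rand(196) > 0.7).long()   # toy 14x14 patch-level part map
m1, m2 = part_guided_mask(part_map), part_guided_mask(part_map)
z1 = torch.randn(2, int(196 * 0.25), 768)   # features of visible patches, view 1
z2 = torch.randn(2, int(196 * 0.25), 768)   # features of visible patches, view 2
loss_align = structure_invariant_alignment(z1, z2)
```

Biasing the multinomial weights (rather than masking part patches deterministically) keeps some randomness in the mask, which matches the abstract's description of part regions having high, not exclusive, masking priority.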
