

Poster

Learning Commonality, Divergence and Variety for Unsupervised Visible-Infrared Person Re-identification

Jiangming Shi · Xiangbo Yin · Yachao Zhang · Zhizhong Zhang · Yuan Xie · Yanyun Qu

East Exhibit Hall A-C #1007
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Unsupervised visible-infrared person re-identification (USVI-ReID) aims to match specified people in infrared images to visible images without annotation, and vice versa. USVI-ReID is a challenging yet under-explored task. Most existing methods address the USVI-ReID problem with cluster-based contrastive learning, which simply employs the cluster center as the representation of a person. However, the cluster center primarily captures commonality and overlooks divergence. To address this problem, we propose a Progressive Contrastive Learning with Hard and Dynamic Prototypes method for USVI-ReID. In brief, we first generate the hard prototype by selecting the sample farthest from the cluster center. We theoretically show that using the hard prototype in the contrastive loss emphasizes divergence. Additionally, instead of rigidly aligning query images to a specific prototype, we generate the dynamic prototype by randomly picking a sample within the cluster; the dynamic prototype encourages variety in the features. Finally, we introduce a progressive learning strategy that gradually shifts the model's attention toward divergence and variety, avoiding cluster deterioration. Extensive experiments on the publicly available SYSU-MM01 and RegDB datasets validate the effectiveness of the proposed method, which outperforms the existing state-of-the-art method by an average mAP improvement of 3.9%. The source code will be released.
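To make the three prototype types concrete, below is a minimal PyTorch sketch based only on the abstract, not the authors' released code. The function names (build_prototypes, proto_contrastive_loss, progressive_loss), the InfoNCE-style loss form, the temperature, and the linear schedule are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def build_prototypes(features, labels, num_clusters):
    """Per cluster: the mean feature (center, commonality), the member
    farthest from the center (hard prototype, divergence), and a uniformly
    random member (dynamic prototype, variety)."""
    centers, hard, dynamic = [], [], []
    for c in range(num_clusters):
        members = features[labels == c]            # (n_c, d) features of cluster c
        center = members.mean(dim=0)
        dists = (members - center).norm(dim=1)     # distance of each member to the center
        centers.append(center)
        hard.append(members[dists.argmax()])       # farthest sample: hard prototype
        dynamic.append(members[torch.randint(len(members), (1,)).item()])  # random sample
    return torch.stack(centers), torch.stack(hard), torch.stack(dynamic)

def proto_contrastive_loss(queries, protos, labels, tau=0.05):
    """InfoNCE-style loss pulling each query toward its own cluster's prototype."""
    logits = F.normalize(queries, dim=1) @ F.normalize(protos, dim=1).t() / tau
    return F.cross_entropy(logits, labels)

def progressive_loss(queries, labels, centers, hard, dynamic, epoch, total_epochs):
    """Hypothetical linear schedule: rely on the stable center prototype early,
    then gradually shift weight to the hard and dynamic prototypes."""
    w = epoch / total_epochs
    return ((1 - w) * proto_contrastive_loss(queries, centers, labels)
            + w * 0.5 * (proto_contrastive_loss(queries, hard, labels)
                         + proto_contrastive_loss(queries, dynamic, labels)))

# Toy usage: 8 samples per pseudo-identity so every cluster is non-empty.
feats = F.normalize(torch.randn(256, 2048), dim=1)   # stand-in for extracted features
pseudo = torch.arange(32).repeat_interleave(8)       # stand-in for clustering pseudo-labels
centers, hard, dyn = build_prototypes(feats, pseudo, 32)
loss = progressive_loss(feats, pseudo, centers, hard, dyn, epoch=5, total_epochs=50)
```

The schedule starts the mixing weight at zero so training initially matches standard center-based contrastive learning, which is one plausible reading of how the progressive strategy avoids early cluster deterioration.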
