

Poster

Optimal Positive Generation via Latent Transformation for Contrastive Learning

Yinqi Li · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen

Keywords: [ GAN ] [ generative model ] [ contrastive learning ] [ self-supervised learning ]


Abstract:

Contrastive learning, which learns to contrast positive with negative pairs of samples, has been popular for self-supervised visual representation learning. Although great effort has been made to design proper positive pairs through data augmentation, few works attempt to generate optimal positives for each instance. Inspired by the semantic consistency and computational advantages of the latent space of pretrained generative models, this paper proposes to learn instance-specific latent transformations to generate Contrastive Optimal Positives (COP-Gen) for self-supervised contrastive learning. Specifically, we formulate COP-Gen as an instance-specific latent space navigator that minimizes the mutual information within the generated positive pair, subject to a semantic consistency constraint. Theoretically, the learned latent transformation creates optimal positives for contrastive learning, removing as much nuisance information as possible while preserving the semantics. Empirically, positives generated by COP-Gen consistently outperform those produced by other latent transformation methods, and even by real-image-based methods, in self-supervised contrastive learning.
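The sketch below is a rough, illustrative rendering of the formulation described in the abstract, not the authors' implementation. The toy generator `G`, the frozen semantic encoder `f_sem`, the fixed critic `f_critic` used as a mutual-information proxy, and the weight `lam` are all placeholders assumed for illustration: a navigator is trained to reduce an InfoNCE-based MI estimate between an anchor and its generated positive while a semantic-consistency penalty keeps the pair close in a semantic embedding space.

```python
# Minimal sketch under stated assumptions; G, f_sem, f_critic and lam are
# illustrative stand-ins, not the paper's actual networks or hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 128

class LatentNavigator(nn.Module):
    """Instance-specific latent transformation: z -> z + delta(z)."""
    def __init__(self, dim=LATENT_DIM):
        super().__init__()
        self.delta = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return z + self.delta(z)

def info_nce(q, k, temperature=0.2):
    """InfoNCE loss; other samples in the batch act as negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

# Frozen, pretrained pieces (toy stand-ins here): a generator mapping latents
# to images and a semantic encoder used for the consistency constraint.
G = nn.Sequential(nn.Linear(LATENT_DIM, 3 * 32 * 32), nn.Tanh())
f_sem = nn.Linear(3 * 32 * 32, 64)
f_critic = nn.Linear(3 * 32 * 32, 64)   # fixed critic providing an MI estimate
for m in (G, f_sem, f_critic):
    m.requires_grad_(False)

navigator = LatentNavigator()
opt = torch.optim.Adam(navigator.parameters(), lr=1e-4)
lam = 1.0   # weight of the semantic-consistency term (arbitrary placeholder)

for step in range(3):                       # a few toy optimization steps
    z = torch.randn(16, LATENT_DIM)         # anchor latents
    x = G(z)                                # anchor images
    x_pos = G(navigator(z))                 # instance-specific positives

    # MI within the pair should be minimized: InfoNCE lower-bounds MI, so
    # maximizing the critic's InfoNCE loss serves as a rough proxy here.
    loss_mi_proxy = -info_nce(f_critic(x), f_critic(x_pos))

    # Semantic-consistency constraint: the positive keeps the anchor's semantics.
    loss_sem = F.mse_loss(f_sem(x_pos), f_sem(x).detach())

    loss = loss_mi_proxy + lam * loss_sem
    opt.zero_grad()
    loss.backward()
    opt.step()

# The resulting pairs (G(z), G(navigator(z))) would then replace augmented
# real-image pairs in a standard contrastive objective such as InfoNCE.
```

The constrained MI minimization is approximated here by a penalty-weighted sum of the two terms; how the constraint and the MI estimator are actually realized in COP-Gen is detailed in the paper, not in this sketch.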
