Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations. It uses pairs of augmentations of unlabeled training examples to define a classification task for pretext learning of a deep embedding. Despite extensive work on augmentation procedures, prior works do not address the selection of challenging negative pairs, as images within a sampled batch are treated independently. This paper addresses this problem by introducing a new family of adversarial examples for contrastive learning and using these examples to define a new adversarial training algorithm for SSL, denoted CLAE. Compared to standard CL, the use of adversarial examples creates more challenging positive pairs, and adversarial training produces harder negative pairs by accounting for all images in a batch during the optimization. CLAE is compatible with many CL methods in the literature. Experiments show that it improves the performance of several existing CL baselines on multiple datasets.
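The sketch below illustrates the idea summarized in the abstract: perturb one augmented view to maximize a contrastive (InfoNCE) loss, so that the perturbation produces a harder positive pair while the batch-coupled loss simultaneously hardens the negatives, then train on both clean and adversarial pairs. This is a minimal illustration, not the authors' reference implementation; the FGSM-style attack, the single-step perturbation, the equal weighting of the two loss terms, and all function names (`info_nce_loss`, `adversarial_view`, `clae_step`) are assumptions for the sketch, and the loss here uses only cross-view negatives rather than the full set a SimCLR-style method would use.

```python
# Hedged sketch of adversarial contrastive training in the spirit of CLAE.
# Assumptions: FGSM-style attack, cross-view InfoNCE, equal loss weighting.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE over a batch: (z1[i], z2[i]) are positives, the rest negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (N, N) similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # diagonal = positives
    return F.cross_entropy(logits, labels)

def adversarial_view(encoder, x1, x2, epsilon=0.03, temperature=0.5):
    """One FGSM step on the second view to maximize the contrastive loss.

    Because the loss couples all images in the batch through the similarity
    matrix, the perturbation also yields harder negatives, not just a harder
    positive for each pair.
    """
    x2_adv = x2.clone().detach().requires_grad_(True)
    loss = info_nce_loss(encoder(x1), encoder(x2_adv), temperature)
    grad, = torch.autograd.grad(loss, x2_adv)
    return (x2_adv + epsilon * grad.sign()).clamp(0, 1).detach()

def clae_step(encoder, optimizer, x1, x2):
    """One training step: clean contrastive loss plus an adversarial term."""
    x2_adv = adversarial_view(encoder, x1, x2)
    loss = (info_nce_loss(encoder(x1), encoder(x2))
            + info_nce_loss(encoder(x1), encoder(x2_adv)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a placeholder encoder and random "augmented views".
encoder = torch.nn.Sequential(torch.nn.Flatten(),
                              torch.nn.Linear(3 * 32 * 32, 128))
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1)
x1, x2 = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
print(clae_step(encoder, optimizer, x1, x2))
```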
Author Information
Chih-Hui Ho (University of California San Diego)
Nuno Vasconcelos (UC San Diego)
More from the Same Authors
- 2022 Poster: DISCO: Adversarial Defense with Local Implicit Functions
  Chih-Hui Ho · Nuno Vasconcelos
- 2020 Poster: Learning Representations from Audio-Visual Spatial Alignment
  Pedro Morgado · Yi Li · Nuno Vasconcelos
- 2019 Poster: Deliberative Explanations: visualizing network insecurities
  Pei Wang · Nuno Vasconcelos
- 2018 Poster: Self-Supervised Generation of Spatial Audio for 360° Video
  Pedro Morgado · Nuno Vasconcelos · Timothy Langlois · Oliver Wang
- 2016 Poster: Large Margin Discriminant Dimensionality Reduction in Prediction Space
  Ehsan Saberian · Jose Costa Pereira · Nuno Vasconcelos · Can Xu