Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e., the effect of hard negatives, has so far been neglected. To obtain more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch size or keep very large memory banks; increasing memory requirements, however, leads to diminishing returns in performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Motivated by these observations and by the success of data mixing, we propose hard negative mixing strategies at the feature level that can be computed on-the-fly with minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection, and instance segmentation, and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method.
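The mixing idea in the abstract can be made concrete with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes L2-normalized embeddings, a MoCo-style queue of negative features, and an InfoNCE loss, and every name in it (contrastive_loss_with_mixed_negatives, num_hard, num_synthetic, temperature) is a hypothetical placeholder. Synthetic negatives are built on-the-fly as convex combinations of each query's hardest negatives, re-normalized to the unit sphere, and appended to the logits.

```python
# Minimal sketch of feature-level hard negative mixing in an InfoNCE loss.
# Not the authors' exact implementation; all names and hyper-parameters here
# (num_hard, num_synthetic, temperature) are illustrative assumptions.
import torch
import torch.nn.functional as F


def contrastive_loss_with_mixed_negatives(
    q,                 # (B, D) query embeddings, L2-normalized
    k_pos,             # (B, D) positive key embeddings, L2-normalized
    queue,             # (K, D) queued negative embeddings, L2-normalized
    temperature=0.2,
    num_hard=64,       # hardest negatives per query to mix (num_hard <= K)
    num_synthetic=16,  # synthetic negatives to create per query
):
    B, D = q.shape

    # Similarity of each query to its positive and to every queued negative.
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)             # (B, 1)
    l_neg = q @ queue.t()                                     # (B, K)

    # Select the hardest negatives: the queue entries most similar to q.
    hard_idx = l_neg.topk(num_hard, dim=1).indices            # (B, num_hard)
    hard = queue[hard_idx]                                    # (B, num_hard, D)

    # Synthesize new negatives as convex combinations of random pairs of
    # hard negatives, then project them back onto the unit sphere.
    i = torch.randint(0, num_hard, (B, num_synthetic), device=q.device)
    j = torch.randint(0, num_hard, (B, num_synthetic), device=q.device)
    alpha = torch.rand(B, num_synthetic, 1, device=q.device)
    batch = torch.arange(B, device=q.device).unsqueeze(1)     # (B, 1)
    mixed = alpha * hard[batch, i] + (1 - alpha) * hard[batch, j]
    mixed = F.normalize(mixed, dim=-1).detach()                # (B, num_synthetic, D)

    # Similarity of each query to its own synthetic negatives.
    l_mix = torch.einsum("bd,bnd->bn", q, mixed)               # (B, num_synthetic)

    # Standard InfoNCE: the positive sits at index 0 of the logits.
    logits = torch.cat([l_pos, l_neg, l_mix], dim=1) / temperature
    labels = torch.zeros(B, dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

In this sketch the mixed features are detached, so no gradient flows into the synthetic points; how many negatives to mix and how many synthetic points to add per query are tunable knobs of the illustration rather than values taken from the paper.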
Author Information
Yannis Kalantidis (NAVER LABS Europe)
Mert Bulent Sariyildiz (NAVER LABS Europe)
Noe Pion (NAVER LABS Europe)
Philippe Weinzaepfel (NAVER LABS Europe)
Diane Larlus (NAVER LABS Europe)
More from the Same Authors
- 2021: Concept Generalization in Visual Representation Learning » Yannis Kalantidis · Diane Larlus · Karteek Alahari
- 2022 Poster: CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion » Philippe Weinzaepfel · Vincent Leroy · Thomas Lucas · Romain BRÉGIER · Yohann Cabon · Vaibhav ARORA · Leonid Antsfeld · Boris Chidlovskii · Gabriela Csurka · Jerome Revaud
- 2021 Workshop: ImageNet: Past, Present, and Future » Zeynep Akata · Lucas Beyer · Sanghyuk Chun · A. Sophia Koepke · Diane Larlus · Seong Joon Oh · Rafael Rezende · Sangdoo Yun · Xiaohua Zhai
- 2020 Poster: SuperLoss: A Generic Loss for Robust Curriculum Learning » Thibault Castells · Philippe Weinzaepfel · Jerome Revaud
- 2019 Poster: R2D2: Reliable and Repeatable Detector and Descriptor » Jerome Revaud · Cesar De Souza · Martin Humenberger · Philippe Weinzaepfel
- 2019 Oral: R2D2: Reliable and Repeatable Detector and Descriptor » Jerome Revaud · Cesar De Souza · Martin Humenberger · Philippe Weinzaepfel
- 2018 Poster: A^2-Nets: Double Attention Networks » Yunpeng Chen · Yannis Kalantidis · Jianshu Li · Shuicheng Yan · Jiashi Feng