Unsupervised learning methods based on contrastive learning have drawn increasing attention and achieved promising results. Most of them aim to learn representations invariant to instance-level variations, which are provided by different views of the same instance. In this paper, we propose Invariance Propagation to focus on learning representations invariant to category-level variations, which are provided by different instances from the same category. Our method recursively discovers semantically consistent samples residing in the same high-density regions of the representation space. We further introduce a hard sampling strategy that concentrates on maximizing the agreement between the anchor sample and its hard positive samples, which provide more intra-class variations and thus help capture more abstract invariance. As a result, with a ResNet-50 backbone, our method achieves 71.3% top-1 accuracy on ImageNet linear classification and 78.2% top-5 accuracy when fine-tuning on only 1% of the labels, surpassing previous results. We also achieve state-of-the-art performance on other downstream tasks, including linear classification on Places205 and Pascal VOC, and transfer learning on small-scale datasets.
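The abstract names two mechanisms: recursive discovery of semantically consistent neighbours (the propagation step) and a contrastive loss focused on the anchor's hard positives. The following is a minimal PyTorch sketch of both ideas based only on the abstract; the neighbourhood size k, recursion depth, temperature, and all function names are illustrative assumptions, not the paper's actual algorithm.

# Sketch of invariance propagation + hard-positive contrast, per the abstract.
# All hyperparameters and names here are assumptions for illustration only.
import torch
import torch.nn.functional as F

def propagate_positives(bank, anchor_idx, k=4, steps=2):
    """Recursively expand the anchor's neighbourhood through a memory bank:
    neighbours of neighbours are treated as category-level positives,
    approximating samples in the same high-density region."""
    sims = bank @ bank[anchor_idx]           # cosine sims (bank is L2-normalised)
    frontier = sims.topk(k + 1).indices[1:]  # k nearest, excluding the anchor itself
    positives = set(frontier.tolist())
    for _ in range(steps - 1):
        nxt = set()
        for i in frontier.tolist():
            nbrs = (bank @ bank[i]).topk(k + 1).indices[1:]
            nxt.update(nbrs.tolist())
        nxt -= positives | {anchor_idx}
        positives |= nxt
        frontier = torch.tensor(sorted(nxt))
        if frontier.numel() == 0:
            break
    return torch.tensor(sorted(positives))

def hard_positive_loss(anchor, bank, pos_idx, n_hard=4, tau=0.07):
    """Contrast the anchor against its *hardest* positives (lowest similarity
    within the propagated set), emphasising large intra-class variations."""
    logits = (bank @ anchor) / tau
    pos_sims = logits[pos_idx]
    n = min(n_hard, len(pos_idx))
    hard = pos_idx[pos_sims.topk(n, largest=False).indices]
    log_p = logits.log_softmax(dim=0)        # softmax over the whole bank
    return -log_p[hard].mean()

# Toy usage: 256 random embeddings of dimension 128 stand in for a memory bank.
bank = F.normalize(torch.randn(256, 128), dim=1)
pos = propagate_positives(bank, anchor_idx=0)
loss = hard_positive_loss(bank[0], bank, pos)
print(pos.shape, loss.item())

The recursion is what distinguishes this from ordinary instance discrimination: a first-hop neighbour's own neighbours may be dissimilar to the anchor in raw cosine similarity yet still lie in the same high-density region, and those are exactly the hard positives the loss above emphasises.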
Author Information
Feng Wang (Tsinghua University)
Huaping Liu (Tsinghua University)
Di Guo (Tsinghua University)
Fuchun Sun (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Unsupervised Representation Learning by Invariance Propagation »
  Tue. Dec 8th, 04:20 -- 04:30 AM
  Room: Orals & Spotlights: Representation/Relational
More from the Same Authors
- 2019 Poster: Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement »
  Chao Yang · Xiaojian Ma · Wenbing Huang · Fuchun Sun · Huaping Liu · Junzhou Huang · Chuang Gan
- 2019 Spotlight: Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement »
  Chao Yang · Xiaojian Ma · Wenbing Huang · Fuchun Sun · Huaping Liu · Junzhou Huang · Chuang Gan