Joint Contrastive Learning with Infinite Possibilities
Qi Cai, Yu Wang, Yingwei Pan, Ting Yao, Tao Mei
Spotlight presentation: Orals & Spotlights Track 01: Representation/Relational
on 2020-12-07T19:20:00-08:00 - 2020-12-07T19:30:00-08:00
Abstract: This paper explores useful modifications to recent developments in contrastive learning via novel probabilistic modeling. We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL). JCL implicitly involves the simultaneous learning of an infinite number of query-key pairs, which poses tighter constraints when searching for invariant features. We derive an upper bound on this formulation that admits analytical solutions in an end-to-end training manner. While JCL is practically effective in numerous computer vision applications, we also theoretically unveil certain mechanisms that govern its behavior. We demonstrate that the proposed formulation harbors an innate agency that strongly favors similarity within each instance-specific class, and therefore remains advantageous when searching for discriminative features among distinct instances. We evaluate these proposals on multiple benchmarks, demonstrating considerable improvements over existing algorithms. Code is publicly available at: https://github.com/caiqi/Joint-Contrastive-Learning.
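To make the idea of learning against multiple positive keys per query concrete, below is a minimal PyTorch sketch. It is an illustrative assumption, not the authors' actual JCL bound: it simply averages a standard InfoNCE-style loss over M sampled positive keys per query, whereas JCL handles the expectation over an infinite number of positive keys analytically via its derived upper bound. The function name, tensor shapes, and temperature value are hypothetical.

```python
# Illustrative sketch only (not the paper's JCL objective): InfoNCE averaged
# over M sampled positive keys per query, a finite-sample stand-in for the
# expectation that JCL bounds in closed form.
import torch
import torch.nn.functional as F

def multi_key_info_nce(q, pos_keys, neg_keys, tau=0.07):
    """
    q:        (N, D)    query features
    pos_keys: (N, M, D) M sampled positive keys per query (hypothetical sampling)
    neg_keys: (K, D)    negative keys, e.g. from a memory queue
    """
    q = F.normalize(q, dim=-1)
    pos_keys = F.normalize(pos_keys, dim=-1)
    neg_keys = F.normalize(neg_keys, dim=-1)

    # Positive logits: one per (query, sampled positive key) pair -> (N, M)
    l_pos = torch.einsum("nd,nmd->nm", q, pos_keys) / tau
    # Negative logits, shared across the M positives -> (N, K)
    l_neg = torch.einsum("nd,kd->nk", q, neg_keys) / tau

    # InfoNCE for each of the M positives, then average over queries and keys.
    logits = torch.cat(
        [l_pos.unsqueeze(-1), l_neg.unsqueeze(1).expand(-1, l_pos.size(1), -1)],
        dim=-1,
    )  # (N, M, 1 + K), positive key is always at index 0
    labels = torch.zeros(logits.shape[:2], dtype=torch.long, device=q.device)
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())
```

In this sketch, increasing M tightens the constraint that a query must agree with many views of its own instance; JCL's contribution, per the abstract, is to take this to the limit of infinitely many query-key pairs while keeping training tractable through an analytical upper bound.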