Joint Contrastive Learning with Infinite Possibilities

Qi Cai, Yu Wang, Yingwei Pan, Ting Yao, Tao Mei

Spotlight presentation: Orals & Spotlights Track 01: Representation/Relational
on 2020-12-07T19:20:00-08:00 - 2020-12-07T19:30:00-08:00
Poster Session 1
on 2020-12-07T21:00:00-08:00 - 2020-12-07T23:00:00-08:00
Abstract: This paper explores useful modifications of recent developments in contrastive learning via novel probabilistic modeling. We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL). JCL implicitly involves the simultaneous learning of an infinite number of query-key pairs, which poses tighter constraints when searching for invariant features. We derive an upper bound on this formulation that allows analytical solutions in an end-to-end training manner. While JCL is practically effective in numerous computer vision applications, we also theoretically unveil certain mechanisms that govern the behavior of JCL. We demonstrate that the proposed formulation harbors an innate agency that strongly favors similarity within each instance-specific class, and therefore remains advantageous when searching for discriminative features among distinct instances. We evaluate these proposals on multiple benchmarks, demonstrating considerable improvements over existing algorithms. Code is publicly available at: https://github.com/caiqi/Joint-Contrastive-Learning.
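
The abstract describes the JCL objective only at a high level; the authors' actual implementation is in the repository linked above. As a rough illustration of the idea, the sketch below contrasts each query against the mean and covariance of several positive keys rather than a single key, which is the flavor of bound the abstract describes. The function name jcl_style_loss and the hyperparameters tau and lam are hypothetical choices for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def jcl_style_loss(q, pos_keys, neg_keys, tau=0.2, lam=1.0):
    """Illustrative JCL-style contrastive loss (a sketch, not the authors' code).

    q:        (B, D)    L2-normalized query embeddings
    pos_keys: (B, M, D) M positive key embeddings per query (multiple augmentations)
    neg_keys: (K, D)    negative key embeddings (e.g., from a memory queue)
    """
    # Summarize the multiple positive keys per instance by their mean and covariance,
    # instead of contrasting against a single sampled key.
    mu = pos_keys.mean(dim=1)                                     # (B, D)
    centered = pos_keys - mu.unsqueeze(1)                         # (B, M, D)
    cov = torch.einsum('bmd,bme->bde', centered, centered) / pos_keys.size(1)  # (B, D, D)

    # Positive logit: first-order similarity to the mean key plus a second-order
    # term (scaled by lam) that accounts for the spread of the positive keys.
    pos_logit = (q * mu).sum(dim=1) / tau \
        + lam / (2 * tau ** 2) * torch.einsum('bd,bde,be->b', q, cov, q)       # (B,)

    # Negative logits: standard dot-product similarities against the negatives.
    neg_logits = q @ neg_keys.t() / tau                           # (B, K)

    logits = torch.cat([pos_logit.unsqueeze(1), neg_logits], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

This is only a schematic reading of the abstract; consult the paper and the linked code for the exact upper bound and training details.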
