Poster
Energy-Based Contrastive Learning of Visual Representations
Beomsu Kim · Jong Chul Ye
Contrastive learning is a method of learning visual representations by training Deep Neural Networks (DNNs) to increase the similarity between representations of positive pairs (transformations of the same image) and reduce the similarity between representations of negative pairs (transformations of different images). Here we explore Energy-Based Contrastive Learning (EBCLR), which leverages the power of generative learning by combining contrastive learning with Energy-Based Models (EBMs). EBCLR can be theoretically interpreted as learning the joint distribution of positive pairs, and it shows promising results on small and medium-scale datasets such as MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100. Specifically, we find that EBCLR achieves $\times 4$ to $\times 20$ acceleration compared to SimCLR and MoCo v2 in terms of training epochs. Furthermore, in contrast to SimCLR, we observe that EBCLR achieves nearly the same performance with $30$ negative pairs per positive pair (batch size $16$) as with $254$ negative pairs per positive pair (batch size $128$), demonstrating the robustness of EBCLR to small numbers of negative pairs. Hence, EBCLR provides a novel avenue for improving contrastive learning methods, which usually require large datasets with a significant number of negative pairs per iteration to achieve reasonable performance on downstream tasks.
Code: https://github.com/1202kbs/EBCLR
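As a point of reference for the objective described in the first sentence, here is a minimal sketch of a standard SimCLR-style contrastive (NT-Xent / InfoNCE) loss. It illustrates generic contrastive learning, not the EBCLR objective itself; the function name `info_nce_loss`, the `temperature` value, and the cosine-similarity choice are illustrative assumptions following common practice rather than details from this paper.

    # Minimal NT-Xent sketch (assumed SimCLR-style setup, not the EBCLR loss).
    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.5):
        """z1, z2: (N, D) representations of two transformations of the same N images."""
        n = z1.shape[0]
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D) unit-norm features
        sim = z @ z.T / temperature                         # (2N, 2N) cosine similarities
        sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
        # Row i's positive is its counterpart from the other view; all other rows
        # act as negatives, so the cross-entropy pushes positive-pair similarities
        # up and negative-pair similarities down.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Hypothetical usage with random stand-ins for encoder outputs:
    z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
    loss = info_nce_loss(z1, z2)

Under this setup, a batch of $N$ images yields $2N - 2$ negatives per positive pair, which is where the counts quoted above come from: $254$ negatives at batch size $128$ and $30$ at batch size $16$.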
Author Information
Beomsu Kim (Korea Advanced Institute of Science and Technology)
Jong Chul Ye (KAIST AI)
More from the Same Authors
- 2022 : Progressive Deblurring of Diffusion Models for Coarse-to-Fine Image Synthesis
  Sangyun Lee · Hyungjin Chung · Jaehyeon Kim · Jong Chul Ye
- 2023 : Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models
  Hyeonho Jeong · Jong Chul Ye
- 2023 Workshop: NeurIPS 2023 Workshop on Diffusion Models
  Bahjat Kawar · Valentin De Bortoli · Charlotte Bunne · James Thornton · Jiaming Song · Jong Chul Ye · Chenlin Meng
- 2023 Poster: Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
  Geon Yeong Park · Jeongsol Kim · Beomsu Kim · Sang Wan Lee · Jong Chul Ye
- 2023 Poster: Direct Diffusion Bridge using Data Consistency for Inverse Problems
  Hyungjin Chung · Jeongsol Kim · Jong Chul Ye
- 2022 Panel: Panel 1B-4: Video PreTraining (VPT):… & Energy-Based Contrastive Learning…
  Beomsu Kim · Bowen Baker
- 2022 Poster: Improving Diffusion Models for Inverse Problems using Manifold Constraints
  Hyungjin Chung · Byeongsu Sim · Dohoon Ryu · Jong Chul Ye
- 2021 Poster: Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising without Clean Images
  Kwanyoung Kim · Jong Chul Ye
- 2021 Poster: Federated Split Task-Agnostic Vision Transformer for COVID-19 CXR Diagnosis
  Sangjoon Park · Gwanghyun Kim · Jeongsol Kim · Boah Kim · Jong Chul Ye
- 2021 Poster: Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention
  Byung-Hoon Kim · Jong Chul Ye · Jae-Jin Kim