Abstract
Recent advances in contrastive representation learning over paired image-text data have led to models such as CLIP that achieve state-of-the-art performance for zero-shot classification and distributional robustness. Such models typically require joint reasoning in the image and text representation spaces for downstream inference tasks. Contrary to prior beliefs, we demonstrate that the image and text representations learned via a standard contrastive objective are not interchangeable and can lead to inconsistent downstream predictions. To mitigate this issue, we formalize consistency and propose CyCLIP, a framework for contrastive representation learning that explicitly optimizes for the learned representations to be geometrically consistent in the image and text space. In particular, we show that consistent representations can be learned by explicitly symmetrizing (a) the similarity between the two mismatched image-text pairs (cross-modal consistency); and (b) the similarity between the image-image pair and the text-text pair (in-modal consistency). Empirically, we show that the improved consistency in CyCLIP translates to significant gains over CLIP: 10%-24% for zero-shot classification accuracy on standard benchmarks (CIFAR-10, CIFAR-100, ImageNet1K) and 10%-27% for robustness to various natural distribution shifts.
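To make the two consistency terms concrete, below is a minimal PyTorch sketch of how they could be computed from a batch of L2-normalized image and text embeddings. This is an illustration based on the abstract's description, not the authors' released implementation: the function name cyclip_regularizers and the weights lam_cross and lam_in are assumed placeholders.

```python
import torch

def cyclip_regularizers(image_emb, text_emb, lam_cross=0.25, lam_in=0.25):
    """Sketch of the two CyCLIP consistency regularizers (illustrative only).

    image_emb, text_emb: (batch, dim) L2-normalized embeddings of matched
    image-text pairs. lam_cross and lam_in are assumed hyperparameters.
    """
    # Pairwise cosine similarities (embeddings are assumed unit-norm).
    cross = image_emb @ text_emb.t()    # sim(I_j, T_k)
    in_img = image_emb @ image_emb.t()  # sim(I_j, I_k)
    in_txt = text_emb @ text_emb.t()    # sim(T_j, T_k)

    # (a) Cross-modal consistency: the similarity of the two mismatched pairs
    #     should agree, i.e. sim(I_j, T_k) = sim(I_k, T_j), which means the
    #     cross-modal similarity matrix should be symmetric.
    cross_modal = lam_cross * ((cross - cross.t()) ** 2).mean()

    # (b) In-modal consistency: image-image similarities should agree with
    #     the corresponding text-text similarities, sim(I_j, I_k) = sim(T_j, T_k).
    in_modal = lam_in * ((in_img - in_txt) ** 2).mean()

    return cross_modal + in_modal
```

During training, this regularization term would be added to the standard symmetric contrastive (InfoNCE) loss that CLIP computes over the temperature-scaled image-text logits.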
Author Information
Shashank Goel (University of California, Los Angeles)
Hritik Bansal (University of California, Los Angeles)
Sumit Bhatia (MDSR Lab, Adobe Systems)
Ryan Rossi (Purdue University)
Vishwa Vinay (Adobe Research)
Aditya Grover (University of California, Los Angeles)
More from the Same Authors
- 2022 : Conditioned Spatial Downscaling of Climate Variables
  Alex Hung · Evan Becker · Ted Zadouri · Aditya Grover
- 2022 : Short-range forecasts of global precipitation using deep learning-augmented numerical weather prediction
  Manmeet Singh · Vaisakh SB · Nachiketa Acharya · Aditya Grover · Suryachandra A. Rao · Bipin Kumar · Zong-Liang Yang · Dev Niyogi
- 2022 : Machine Learning for Predicting Climate Extremes
  Hritik Bansal · Shashank Goel · Tung Nguyen · Aditya Grover
- 2022 : Using Informative Data Subsets for Efficient Training of Large Language Models: An Initial Study
  H S V N S Kowndinya Renduchintala · Krishnateja Killamsetty · Sumit Bhatia · Milan Aggarwal · Ganesh Ramakrishnan · Rishabh Iyer
- 2022 : Pareto-Efficient Decision Agents for Offline Multi-Objective Reinforcement Learning
  Baiting Zhu · Meihua Dang · Aditya Grover
- 2022 : Generative Pretraining for Black-Box Optimization
  Siddarth Krishnamoorthy · Satvik Mashkaria · Aditya Grover
- 2022 : ConserWeightive Behavioral Cloning for Reliable Offline Reinforcement Learning
  Tung Nguyen · Qinqing Zheng · Aditya Grover
- 2022 Poster: Masked Autoencoding for Scalable and Generalizable Decision Making
  Fangchen Liu · Hao Liu · Aditya Grover · Pieter Abbeel
- 2021 Poster: Automatic Unsupervised Outlier Model Selection
  Yue Zhao · Ryan Rossi · Leman Akoglu