A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples. Without access to labels, dissimilar (negative) points are typically taken to be randomly sampled datapoints, implicitly accepting that these points may actually have the same label as the anchor. Perhaps unsurprisingly, in a synthetic setting where labels are available, we observe that sampling negative examples with truly different labels improves performance. Motivated by this observation, we develop a debiased contrastive objective that corrects for the sampling of same-label datapoints, even without knowledge of the true labels. Empirically, the proposed objective consistently outperforms the state of the art for representation learning on vision, language, and reinforcement learning benchmarks. Theoretically, we establish generalization bounds for the downstream classification task.
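The abstract describes correcting the negative-sampling term of a contrastive loss for the chance that a random "negative" shares the anchor's label. The NumPy sketch below illustrates one way such a correction can look for a single anchor: the false-positive rate `tau_plus`, the temperature `t`, and all function and variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def debiased_contrastive_loss(anchor, positive, negatives, tau_plus=0.1, t=0.5):
    """Sketch of a debiased InfoNCE-style objective for one anchor.

    Randomly drawn `negatives` may share the anchor's latent class with
    probability `tau_plus`; the correction below subtracts an estimate of
    that same-label leakage from the negative term. Inputs are assumed to
    be L2-normalized embedding vectors (negatives: one row per sample).
    """
    pos = np.exp(anchor @ positive / t)          # positive-pair score
    neg = np.exp(anchor @ negatives.T / t)       # one score per negative
    M = neg.shape[0]
    # Debiased estimate of the true-negative term: remove the tau_plus
    # fraction attributable to same-label samples, and clip at the
    # theoretical minimum exp(-1/t) so the estimate stays positive.
    Ng = np.maximum((neg.mean() - tau_plus * pos) / (1.0 - tau_plus),
                    np.exp(-1.0 / t))
    return -np.log(pos / (pos + M * Ng))
```

With `tau_plus=0`, the correction vanishes and the expression reduces to the standard contrastive loss over the same scores.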
Author Information
Ching-Yao Chuang (MIT)
Joshua Robinson (MIT)
Yen-Chen Lin (MIT)
Antonio Torralba (MIT)
Stefanie Jegelka (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: Debiased Contrastive Learning »
  Tue, Dec 8th, 05:00 -- 07:00 AM, Poster Session 0
More from the Same Authors
- 2021 Spotlight: Measuring Generalization with Optimal Transport »
  Ching-Yao Chuang · Youssef Mroueh · Kristjan Greenewald · Antonio Torralba · Stefanie Jegelka
- 2021 Poster: Can contrastive learning avoid shortcut solutions? »
  Joshua Robinson · Li Sun · Ke Yu · Kayhan Batmanghelich · Stefanie Jegelka · Suvrit Sra
- 2021 Poster: Measuring Generalization with Optimal Transport »
  Ching-Yao Chuang · Youssef Mroueh · Kristjan Greenewald · Antonio Torralba · Stefanie Jegelka
- 2020 Poster: Adaptive Sampling for Stochastic Risk-Averse Learning »
  Sebastian Curi · Kfir Y. Levy · Stefanie Jegelka · Andreas Krause
- 2019 Workshop: Graph Representation Learning »
  Will Hamilton · Rianne van den Berg · Michael Bronstein · Stefanie Jegelka · Thomas Kipf · Jure Leskovec · Renjie Liao · Yizhou Sun · Petar Veličković
- 2019 Poster: Flexible Modeling of Diversity with Strongly Log-Concave Distributions »
  Joshua Robinson · Suvrit Sra · Stefanie Jegelka
- 2018: Panel Discussion »
  Antonio Torralba · Douwe Kiela · Barbara Landau · Angeliki Lazaridou · Joyce Chai · Christopher Manning · Stevan Harnad · Roozbeh Mottaghi
- 2018: Antonio Torralba - Learning to See and Hear »
  Antonio Torralba
- 2018 Poster: Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding »
  Kexin Yi · Jiajun Wu · Chuang Gan · Antonio Torralba · Pushmeet Kohli · Josh Tenenbaum
- 2018 Poster: 3D-Aware Scene Manipulation via Inverse Graphics »
  Shunyu Yao · Tzu Ming Hsu · Jun-Yan Zhu · Jiajun Wu · Antonio Torralba · Bill Freeman · Josh Tenenbaum
- 2018 Spotlight: Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding »
  Kexin Yi · Jiajun Wu · Chuang Gan · Antonio Torralba · Pushmeet Kohli · Josh Tenenbaum
- 2016 Poster: Generating Videos with Scene Dynamics »
  Carl Vondrick · Hamed Pirsiavash · Antonio Torralba
- 2016 Poster: SoundNet: Learning Sound Representations from Unlabeled Video »
  Yusuf Aytar · Carl Vondrick · Antonio Torralba
- 2015 Poster: Skip-Thought Vectors »
  Jamie Kiros · Yukun Zhu · Russ Salakhutdinov · Richard Zemel · Raquel Urtasun · Antonio Torralba · Sanja Fidler
- 2015 Poster: Where are they looking? »
  Adria Recasens · Aditya Khosla · Carl Vondrick · Antonio Torralba
- 2015 Spotlight: Where are they looking? »
  Adria Recasens · Aditya Khosla · Carl Vondrick · Antonio Torralba
- 2015 Poster: Learning visual biases from human imagination »
  Carl Vondrick · Hamed Pirsiavash · Aude Oliva · Antonio Torralba