As a successful approach to self-supervised learning, contrastive learning aims to learn the invariant information shared among distortions of the input sample. While contrastive learning has yielded continuous advancements in sampling strategy and architecture design, it still suffers from two persistent defects: the interference of task-irrelevant information and sample inefficiency, both of which are related to the recurring existence of trivial constant solutions. From the perspective of dimensional analysis, we find that dimensional redundancy and the dimensional confounder are the intrinsic issues behind these phenomena, and we provide experimental evidence to support this viewpoint. We further propose a simple yet effective approach, MetaMask, short for the dimensional Mask learned by Meta-learning, to learn representations against dimensional redundancy and the confounder. MetaMask adopts the redundancy-reduction technique to tackle the dimensional redundancy issue and innovatively introduces a dimensional mask to reduce the gradient effects of the specific dimensions containing the confounder. The mask is trained under a meta-learning paradigm with the objective of improving the performance of masked representations on a typical self-supervised task. We provide solid theoretical analyses proving that MetaMask obtains tighter risk bounds for downstream classification than typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks.
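To make the bi-level mechanism concrete, below is a minimal first-order sketch in PyTorch of how a meta-learned dimensional mask could work. Every name in it (DimensionalMask, meta_step, the toy MLP encoder, and InfoNCE as the stand-in self-supervised task) is an illustrative assumption rather than the authors' released code, and the paper's redundancy-reduction term is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    # Standard InfoNCE contrastive loss; matching views sit on the diagonal.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

class DimensionalMask(nn.Module):
    # Learnable soft mask over representation dimensions (hypothetical class).
    def __init__(self, dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim))

    def forward(self, z):
        return z * torch.sigmoid(self.logits)  # down-weight confounded dims

# Toy setup: a small MLP encoder on flattened 28x28 inputs (assumption).
dim = 128
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
mask = DimensionalMask(dim)
enc_opt = torch.optim.SGD(encoder.parameters(), lr=1e-2)
mask_opt = torch.optim.SGD(mask.parameters(), lr=1e-2)

def meta_step(x1, x2):
    # Inner update: train the encoder on the masked contrastive loss, so
    # dimensions with small mask values contribute small gradients.
    enc_opt.zero_grad()
    info_nce(mask(encoder(x1)), mask(encoder(x2))).backward()
    enc_opt.step()
    # Outer (meta) update: adjust the mask so the masked representation
    # improves the self-supervised objective; the encoder is frozen here
    # (a first-order approximation of the bi-level objective).
    mask_opt.zero_grad()
    with torch.no_grad():
        z1, z2 = encoder(x1), encoder(x2)
    info_nce(mask(z1), mask(z2)).backward()
    mask_opt.step()

# Usage with two augmented views of a batch (random stand-ins here):
x1, x2 = torch.randn(32, 784), torch.randn(32, 784)
meta_step(x1, x2)

Because the mask multiplies the representation before the loss, a small mask value on a dimension proportionally shrinks that dimension's gradient into the encoder, which matches the abstract's description of reducing the gradient effects of confounded dimensions.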
Author Information
Jiangmeng Li (Institute of Software, Chinese Academy of Sciences)
Wenwen Qiang (Institute of Software, Chinese Academy of Sciences)
Yanan Zhang (University of Chinese Academy of Sciences)
Wenyi Mo (Renmin University of China)
Changwen Zheng (Institute of Software, Chinese Academy of Sciences)
Bing Su (Renmin University of China)
Hui Xiong
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning
More from the Same Authors
- 2022 Poster: Log-Polar Space Convolution Layers
  Bing Su · Ji-Rong Wen
- 2022 Spotlight: SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders
  Gang Li · Heliang Zheng · Daqing Liu · Chaoyue Wang · Bing Su · Changwen Zheng
- 2022 Spotlight: Lightning Talks 2B-3
  Jie-Jing Shao · Jiangmeng Li · Jiashuo Liu · Zongbo Han · Tianyang Hu · Jiayun Wu · Wenwen Qiang · Jun WANG · Zhipeng Liang · Lan-Zhe Guo · Wenjia Wang · Yanan Zhang · Xiao-wen Yang · Fan Yang · Bo Li · Wenyi Mo · Zhenguo Li · Liu Liu · Peng Cui · Yu-Feng Li · Changwen Zheng · Lanqing Li · Yatao Bian · Bing Su · Hui Xiong · Peilin Zhao · Bingzhe Wu · Changqing Zhang · Jianhua Yao
- 2022 Spotlight: Lightning Talks 2B-2
  Chenjian Gao · Rui Ding · Lingzhi LI · Fan Yang · Xingting Yao · Jianxin Li · Bing Su · Zhen Shen · Tongda Xu · Shuai Zhang · Ji-Rong Wen · Lin Guo · Fanrong Li · Kehua Guo · Zhongshu Wang · Zhi Chen · Xiangyuan Zhu · Zitao Mo · Dailan He · Hui Xiong · Yan Wang · Zheng Wu · Wenbing Tao · Jian Cheng · Haoyi Zhou · Li Shen · Ping Tan · Liwei Wang · Hongwei Qin
- 2022 Spotlight: Log-Polar Space Convolution Layers
  Bing Su · Ji-Rong Wen
- 2022 Spotlight: AutoST: Towards the Universal Modeling of Spatio-temporal Sequences
  Jianxin Li · Shuai Zhang · Hui Xiong · Haoyi Zhou
- 2022 Poster: SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders
  Gang Li · Heliang Zheng · Daqing Liu · Chaoyue Wang · Bing Su · Changwen Zheng
- 2021 Poster: Discerning Decision-Making Process of Deep Neural Networks with Hierarchical Voting Transformation
  Ying Sun · Hengshu Zhu · Chuan Qin · Fuzhen Zhuang · Qing He · Hui Xiong
- 2021 Poster: Topic Modeling Revisited: A Document Graph-based Neural Network Perspective
  Dazhong Shen · Chuan Qin · Chao Wang · Zheng Dong · Hengshu Zhu · Hui Xiong