By inferring latent groups in the training data, recent works extend invariant learning to the setting where environment annotations are unavailable. Typically, learning group invariance under a majority/minority split is empirically shown to improve out-of-distribution generalization on many datasets. However, theoretical guarantees that these methods learn invariant mechanisms are lacking. In this paper, we reveal that existing group invariant learning methods are insufficient to prevent classifiers from depending on spurious correlations in the training set. Specifically, we propose two criteria for judging such sufficiency. Theoretically and empirically, we show that existing methods can violate both criteria and thus fail to generalize under spurious correlation shifts. Motivated by this, we design a new group invariant learning method, which constructs groups with statistical independence tests and reweights samples by group label proportion to meet the criteria. Experiments on both synthetic and real data demonstrate that the new method significantly outperforms existing group invariant learning methods in generalizing to spurious correlation shifts.
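The abstract's method description (groups built from a statistical independence test, samples reweighted by group proportion) can be sketched as follows. This is a minimal illustrative reading, not the paper's actual algorithm: the function names, the majority/minority split by agreement between a spurious attribute and the label, and the inverse-group-size reweighting are all assumptions made for the example.

```python
from collections import Counter

def assign_groups(spurious, labels):
    # Hypothetical majority/minority split: a sample is "majority" (1)
    # when the spurious attribute agrees with the label, "minority" (0)
    # when it disagrees.
    return [int(s == y) for s, y in zip(spurious, labels)]

def chi2_statistic(a, b):
    # Pearson chi-squared statistic for independence of two binary
    # variables; a large value suggests the candidate feature is
    # spuriously correlated with the label.
    n = len(a)
    counts = Counter(zip(a, b))
    stat = 0.0
    for i in (0, 1):
        for j in (0, 1):
            expected = sum(x == i for x in a) * sum(x == j for x in b) / n
            if expected > 0:
                stat += (counts[(i, j)] - expected) ** 2 / expected
    return stat

def group_weights(groups):
    # Reweight each sample inversely to its group's size so that every
    # group contributes equally to the training loss; weights sum to n.
    sizes = Counter(groups)
    n = len(groups)
    return [n / (len(sizes) * sizes[g]) for g in groups]
```

For instance, on four samples where the spurious attribute agrees with the label three times, the single minority sample receives weight 2.0 while each majority sample receives 2/3, balancing the two groups.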
Author Information
Yimeng Chen (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Ruibin Xiong (Institute of Computing Technology, Chinese Academy of Sciences)
Zhi-Ming Ma
Yanyan Lan (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: When Does Group Invariant Learning Survive Spurious Correlations?
  Thu, Dec 1 through Fri, Dec 2, Hall J #442
More from the Same Authors
- 2021: Multi-modal Self-supervised Pre-training for Large-scale Genome Data
  Shentong Mo · Xi Fu · Chenyang Hong · Yizhen Chen · Yuxuan Zheng · Xiangru Tang · Yanyan Lan · Zhiqiang Shen · Eric Xing
- 2022 Spotlight: Lightning Talks 4A-3
  Zhihan Gao · Yabin Wang · Xingyu Qu · Luziwei Leng · Mingqing Xiao · Bohan Wang · Yu Shen · Zhiwu Huang · Xingjian Shi · Qi Meng · Yupeng Lu · Diyang Li · Qingyan Meng · Kaiwei Che · Yang Li · Hao Wang · Huishuai Zhang · Zongpeng Zhang · Kaixuan Zhang · Xiaopeng Hong · Xiaohan Zhao · Di He · Jianguo Zhang · Yaofeng Tu · Bin Gu · Yi Zhu · Ruoyu Sun · Yuyang (Bernie) Wang · Zhouchen Lin · Qinghu Meng · Wei Chen · Wentao Zhang · Bin CUI · Jie Cheng · Zhi-Ming Ma · Mu Li · Qinghai Guo · Dit-Yan Yeung · Tie-Yan Liu · Jianxing Liao
- 2022 Spotlight: Does Momentum Change the Implicit Regularization on Separable Data?
  Bohan Wang · Qi Meng · Huishuai Zhang · Ruoyu Sun · Wei Chen · Zhi-Ming Ma · Tie-Yan Liu
- 2022 Spotlight: Characterization of Excess Risk for Locally Strongly Convex Population Risk
  Mingyang Yi · Ruoyu Wang · Zhi-Ming Ma
- 2022 Poster: Characterization of Excess Risk for Locally Strongly Convex Population Risk
  Mingyang Yi · Ruoyu Wang · Zhi-Ming Ma
- 2022 Poster: Does Momentum Change the Implicit Regularization on Separable Data?
  Bohan Wang · Qi Meng · Huishuai Zhang · Ruoyu Sun · Wei Chen · Zhi-Ming Ma · Tie-Yan Liu
- 2021 Poster: Uncertainty Calibration for Ensemble-Based Debiasing Methods
  Ruibin Xiong · Yimeng Chen · Liang Pang · Xueqi Cheng · Zhi-Ming Ma · Yanyan Lan
- 2017 Poster: Finite sample analysis of the GTD Policy Evaluation Algorithms in Markov Setting
  Yue Wang · Wei Chen · Yuting Liu · Zhi-Ming Ma · Tie-Yan Liu
- 2016 Poster: A Communication-Efficient Parallel Algorithm for Decision Tree
  Qi Meng · Guolin Ke · Taifeng Wang · Wei Chen · Qiwei Ye · Zhi-Ming Ma · Tie-Yan Liu
- 2010 Poster: Two-Layer Generalization Analysis for Ranking Using Rademacher Average
  Wei Chen · Tie-Yan Liu · Zhi-Ming Ma
- 2009 Poster: Ranking Measures and Loss Functions in Learning to Rank
  Wei Chen · Tie-Yan Liu · Yanyan Lan · Zhi-Ming Ma · Hang Li