Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are strongly correlated with undesirable features. To prevent a network from learning such features, recent methods augment training data such that examples displaying spurious correlations (i.e., bias-aligned examples) become a minority, whereas the other, bias-conflicting examples become prevalent. However, these approaches are sometimes difficult to train and scale to real-world data because they rely on generative models or disentangled representations. We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples. Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features. Identifying such pairs requires comparing examples with respect to unknown biased features. For this, we utilize an auxiliary contrastive model with the popular heuristic that biased features are learned preferentially during training. Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
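The pairing rule described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: `bias_sim` stands in for pairwise similarities under the unknown biased features, which the paper estimates with an auxiliary contrastive model trained under the heuristic that biased features are learned first; here it is simply assumed given, and `select_contradicting_pair` is a hypothetical helper name.

```python
import numpy as np

def mixup(x1, x2, alpha=1.0, rng=None):
    """Standard mixup: a convex combination of two examples,
    with the mixing coefficient drawn from Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam

def select_contradicting_pair(i, labels, bias_sim):
    """For anchor example i, pick the partner j that contradicts the bias:
    (i) same label as i but most dissimilar biased features, or
    (ii) different label from i but most similar biased features.
    bias_sim[i, j] is the (estimated) similarity of biased features."""
    same = labels == labels[i]
    same[i] = False                      # exclude the anchor itself
    diff = labels != labels[i]
    sim = bias_sim[i]
    cand_same = np.where(same, -sim, -np.inf)  # case (i): reward dissimilarity
    cand_diff = np.where(diff, sim, -np.inf)   # case (ii): reward similarity
    scores = np.maximum(cand_same, cand_diff)
    return int(np.argmax(scores))
```

A training step would then mix each anchor with its selected partner, e.g. `x_mixed, lam = mixup(x[i], x[select_contradicting_pair(i, labels, bias_sim)])`, so that the resulting examples break the correlation between labels and biased features.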
Author Information
Inwoo Hwang (Seoul National University)
Sangjun Lee (Seoul National University)
Yunhyeok Kwak (Seoul National University)
Seong Joon Oh (University of Tübingen)
Damien Teney (Idiap Research Institute)
Jin-Hwa Kim (NAVER AI Lab)
Jin-Hwa Kim has been a Technical Leader and Research Scientist at NAVER AI Lab since August 2021 and a Guest Assistant Professor at the Artificial Intelligence Institute of Seoul National University (SNU AIIS) since August 2022. He studies multimodal deep learning (e.g., [visual question answering](http://visualqa.org)), multimodal generation, ethical AI, and related topics. In 2018, he received his Ph.D. from Seoul National University under the supervision of Professor [Byoung-Tak Zhang](https://bi.snu.ac.kr/~btzhang/) for his dissertation "Multimodal Deep Learning for Visually-grounded Reasoning." In September 2017, he received the [2017 Google Ph.D. Fellowship](https://ai.googleblog.com/2017/09/highlights-from-annual-google-phd.html) in Machine Learning and the Ph.D. Completion Scholarship from Seoul National University, and his team was runner-up in the VQA Challenge 2018 at the [CVPR 2018 VQA Challenge and Visual Dialog Workshop](https://visualqa.org/workshop_2018.html). He was a Research Intern at [Facebook AI Research](https://research.fb.com/category/facebook-ai-research/) (Menlo Park, CA), mentored by [Yuandong Tian](http://yuandong-tian.com), [Devi Parikh](https://www.cc.gatech.edu/~parikh/), and [Dhruv Batra](https://www.cc.gatech.edu/~dbatra/), from January to May 2017. He previously worked at SK Telecom (August 2018 to July 2021) and SK Communications (January 2011 to October 2012).
Byoung-Tak Zhang (Seoul National University)
More from the Same Authors
- 2021: Partition-based Local Independence Discovery
  Inwoo Hwang · Byoung-Tak Zhang · Sanghack Lee
- 2021: C^3: Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation
  Hyukgi Lee · Gi-Cheon Kang · Chang-Hoon Jeong · Hanwool Sul · Byoung-Tak Zhang
- 2022 Poster: Mutual Information Divergence: A Unified Metric for Multimodal Generative Models
  Jin-Hwa Kim · Yunji Kim · Jiyoung Lee · Kang Min Yoo · Sang-Woo Lee
- 2022 Poster: Robust Imitation via Mirror Descent Inverse Reinforcement Learning
  Dong-Sig Han · Hyunseo Kim · Hyundo Lee · JeHwan Ryu · Byoung-Tak Zhang
- 2022 Poster: Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
  Jaehoon Oh · Sungnyun Kim · Namgyu Ho · Jin-Hwa Kim · Hwanjun Song · Se-Young Yun
- 2021 Workshop: ImageNet: Past, Present, and Future
  Zeynep Akata · Lucas Beyer · Sanghyuk Chun · A. Sophia Koepke · Diane Larlus · Seong Joon Oh · Rafael Rezende · Sangdoo Yun · Xiaohua Zhai
- 2021 Poster: Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning
  Kibeom Kim · Min Whoo Lee · Yoonsung Kim · JeHwan Ryu · Minsu Lee · Byoung-Tak Zhang
- 2021 Poster: Neural Hybrid Automata: Learning Dynamics With Multiple Modes and Stochastic Transitions
  Michael Poli · Stefano Massaroli · Luca Scimeca · Sanghyuk Chun · Seong Joon Oh · Atsushi Yamashita · Hajime Asama · Jinkyoo Park · Animesh Garg
- 2020 Workshop: BabyMind: How Babies Learn and How Machines Can Imitate
  Byoung-Tak Zhang · Gary Marcus · Angelo Cangelosi · Pia Knoeferle · Klaus Obermayer · David Vernon · Chen Yu
- 2020: Opening Remarks: BabyMind, Byoung-Tak Zhang and Gary Marcus
  Byoung-Tak Zhang · Gary Marcus
- 2018 Poster: Answerer in Questioner's Mind: Information Theoretic Approach to Goal-Oriented Visual Dialog
  Sang-Woo Lee · Yu-Jung Heo · Byoung-Tak Zhang
- 2018 Spotlight: Answerer in Questioner's Mind: Information Theoretic Approach to Goal-Oriented Visual Dialog
  Sang-Woo Lee · Yu-Jung Heo · Byoung-Tak Zhang
- 2018 Poster: Bilinear Attention Networks
  Jin-Hwa Kim · Jaehyun Jun · Byoung-Tak Zhang
- 2017 Poster: Overcoming Catastrophic Forgetting by Incremental Moment Matching
  Sang-Woo Lee · Jin-Hwa Kim · Jaehyun Jun · Jung-Woo Ha · Byoung-Tak Zhang
- 2017 Spotlight: Overcoming Catastrophic Forgetting by Incremental Moment Matching
  Sang-Woo Lee · Jin-Hwa Kim · Jaehyun Jun · Jung-Woo Ha · Byoung-Tak Zhang
- 2016: PororoQA: Cartoon Video Series Dataset for Story Understanding
  KyungMin Kim · Min-Oh Heo · Byoung-Tak Zhang
- 2016 Poster: Multimodal Residual Learning for Visual QA
  Jin-Hwa Kim · Sang-Woo Lee · Donghyun Kwak · Min-Oh Heo · Jeonghee Kim · Jung-Woo Ha · Byoung-Tak Zhang
- 2010 Poster: Generative Local Metric Learning for Nearest Neighbor Classification
  Yung-Kyun Noh · Byoung-Tak Zhang · Daniel Lee