

Poster

Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer

Junya Chen · Zidi Xiu · Benjamin Goldstein · Ricardo Henao · Lawrence Carin · Chenyang Tao

Keywords: [ Vision ] [ Machine Learning ] [ Contrastive Learning ] [ Causality ] [ Representation Learning ]


Abstract:

Dealing with severe class imbalance poses a major challenge for many real-world applications, especially when the accurate classification and generalization of minority classes are of primary interest. In computer vision and NLP, learning from datasets with long-tail behavior is a recurring theme, especially for naturally occurring labels. Existing solutions mostly appeal to sampling or weighting adjustments to alleviate the extreme imbalance, or impose inductive bias to prioritize generalizable associations. Here we take a novel perspective to promote sample efficiency and model generalization based on the invariance principles of causality. Our contribution posits a meta-distributional scenario in which the causal generating mechanism for label-conditional features is invariant across different labels. This causal assumption enables efficient knowledge transfer from the dominant classes to their under-represented counterparts, even if their feature distributions show apparent disparities. It also allows us to leverage a causal data augmentation procedure to enlarge the representation of minority classes. Our development is orthogonal to existing imbalanced-data learning techniques and can thus be seamlessly integrated with them. The proposed approach is validated on an extensive set of synthetic and real-world tasks against state-of-the-art solutions.
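To make the invariance idea concrete, below is a minimal toy sketch, not the paper's energy-based contrastive transfer method. It assumes a simple shared-mechanism model in which features of every class y are generated as x = f(eps) + mu_y, with the mechanism f invariant across labels; under that assumption, exogenous variation harvested from a dominant class can be re-centered to enlarge a minority class. All names here (make_data, augment_minority, mu_maj, mu_min) are hypothetical and chosen for illustration only.

```python
# Toy illustration of causal-invariance-based augmentation for imbalanced data.
# Assumption (not from the paper's code): x = f(eps) + mu_y, with f shared
# across all labels and only the shift mu_y label-specific.
import numpy as np

rng = np.random.default_rng(0)

def make_data(mu, n, f=np.tanh):
    # Shared mechanism f applied to exogenous noise, then a label-specific shift.
    eps = rng.normal(size=(n, 2))
    return f(eps) + mu

mu_maj, mu_min = np.array([0.0, 0.0]), np.array([3.0, 3.0])
x_maj = make_data(mu_maj, n=5000)   # dominant class: plenty of samples
x_min = make_data(mu_min, n=20)     # minority class: severely under-represented

def augment_minority(x_maj, x_maj_mean, x_min_mean, n_aug):
    # Transfer step: strip the majority-class shift to recover the (invariant)
    # mechanism output, then re-center it at the minority class.
    idx = rng.choice(len(x_maj), size=n_aug, replace=False)
    return x_maj[idx] - x_maj_mean + x_min_mean

x_aug = augment_minority(x_maj, x_maj.mean(0), x_min.mean(0), n_aug=1000)
print(x_aug.shape)  # (1000, 2): synthetic minority samples for downstream training
```

Because the augmentation only re-uses variation already present in the dominant class, it can be combined with any downstream classifier or with standard re-sampling and re-weighting schemes, which mirrors the abstract's point that the approach is orthogonal to existing imbalanced-data learning techniques.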
