Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models. We identify a persisting dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterpart; on the other hand, heavily imbalanced data naturally incurs "label bias" in the classifier, where the decision boundary can be drastically altered by the majority classes. In this work, we systematically investigate these two facets of labels. We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners. Specifically, we confirm that (1) positively, imbalanced labels are valuable: given more unlabeled data, the original labels can be leveraged together with the extra data to reduce label bias in a semi-supervised manner, which greatly improves the final classifier; (2) negatively, however, imbalanced labels are not always useful: classifiers that are first pre-trained in a self-supervised manner consistently outperform their corresponding baselines. Extensive experiments on large-scale imbalanced datasets verify our theoretically grounded strategies, showing superior performance over previous state-of-the-art methods. Our intriguing findings highlight the need to rethink the usage of imbalanced labels in realistic long-tailed tasks. Code is available at https://github.com/YyzHarry/imbalanced-semi-self.
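The semi-supervised strategy described in point (1) can be illustrated with a toy sketch: train a classifier on the imbalanced labeled set, use it to pseudo-label an unlabeled pool, then retrain on the union. This is not the paper's actual training code (which uses deep networks; see the linked repository); a nearest-centroid classifier stands in for the model, and all function names here are illustrative.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per class (a stand-in for training a real model)."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each point to the class of its nearest centroid."""
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

def pseudo_label_retrain(X_lab, y_lab, X_unlab):
    """One round of semi-supervised pseudo-labeling:
    1. train on the (imbalanced) labeled data,
    2. pseudo-label the unlabeled pool with the trained model,
    3. retrain on labeled + pseudo-labeled data combined."""
    classes, cents = nearest_centroid_fit(X_lab, y_lab)
    y_pseudo = nearest_centroid_predict(X_unlab, classes, cents)
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    return nearest_centroid_fit(X_all, y_all)

# Hypothetical imbalanced setup: 100 majority vs. 5 minority labeled points,
# plus a balanced unlabeled pool drawn from the same two well-separated clusters.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0.0, 0.3, (100, 2)),   # majority class 0
                   rng.normal(3.0, 0.3, (5, 2))])    # minority class 1
y_lab = np.array([0] * 100 + [1] * 5)
X_unlab = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                     rng.normal(3.0, 0.3, (50, 2))])

classes, cents = pseudo_label_retrain(X_lab, y_lab, X_unlab)
```

The intuition from the paper is that the extra unlabeled data, even when labeled only by the biased initial classifier, shifts the minority-class estimate toward its true distribution and so reduces label bias.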
Yuzhe Yang (MIT)
Zhi Xu (MIT)