Existing long-tailed recognition methods, which aim to train class-balanced models from long-tailed data, generally assume the models will be evaluated on a uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being long-tailed or even inversely long-tailed), which may cause existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, in which the training class distribution is long-tailed while the test class distribution is agnostic and not necessarily uniform. Beyond the issue of class imbalance, this task poses a further challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single, stationary long-tailed dataset to separately handle different class distributions; and (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the multiple learned experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. The source code is available at https://github.com/Vanint/SADE-AgnosticLT.
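The test-time aggregation strategy described above can be illustrated with a minimal sketch. The sketch below assumes the self-supervised objective is prediction stability: the aggregation weights over the experts are tuned so that the combined prediction agrees across two augmented views of the same unlabeled test data. It uses numpy with a numerical gradient for self-containedness; the function names (`learn_weights`, `stability`) and the finite-difference optimizer are illustrative choices, not the paper's implementation (see the linked repository for that).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(logits, w):
    """Combine expert logits. logits: (E, N, C), w: (E,) -> (N, C)."""
    return np.tensordot(w, logits, axes=1)

def stability(logits_v1, logits_v2, w):
    """Mean cosine similarity between aggregated predictions on two views."""
    p1 = softmax(aggregate(logits_v1, w))
    p2 = softmax(aggregate(logits_v2, w))
    num = (p1 * p2).sum(axis=1)
    den = np.linalg.norm(p1, axis=1) * np.linalg.norm(p2, axis=1)
    return (num / den).mean()

def learn_weights(logits_v1, logits_v2, steps=200, lr=0.5, eps=1e-4):
    """Gradient-ascend prediction stability over softmax-normalized weights."""
    E = logits_v1.shape[0]
    v = np.zeros(E)  # unconstrained parameters; weights = softmax(v)
    for _ in range(steps):
        g = np.zeros(E)
        for i in range(E):  # central-difference gradient estimate
            vp, vm = v.copy(), v.copy()
            vp[i] += eps
            vm[i] -= eps
            g[i] = (stability(logits_v1, logits_v2, softmax(vp))
                    - stability(logits_v1, logits_v2, softmax(vm))) / (2 * eps)
        v += lr * g
    return softmax(v)
```

In this toy setting, an expert whose predictions are consistent across the two views receives a larger weight than one that predicts noise, mirroring the intuition that the expert best matched to the unknown test distribution produces the most stable predictions.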
Author Information
Yifan Zhang (National University of Singapore)
Bryan Hooi (National University of Singapore)
Lanqing Hong (Huawei Noah's Ark Lab)
Jiashi Feng (UC Berkeley)
More from the Same Authors
- 2021: SODA10M: A Large-Scale 2D Self/Semi-Supervised Object Detection Dataset for Autonomous Driving
  Jianhua Han · Xiwen Liang · Hang Xu · Kai Chen · Lanqing Hong · Jiageng Mao · Chaoqiang Ye · Wei Zhang · Zhenguo Li · Xiaodan Liang · Chunjing XU
- 2021: How Well Does Self-Supervised Pre-Training Perform with Streaming ImageNet?
  Dapeng Hu · Shipeng Yan · Qizhengqiu Lu · Lanqing Hong · Hailin Hu · Yifan Zhang · Zhenguo Li · Jiashi Feng
- 2021: Architecture Personalization in Resource-constrained Federated Learning
  Mi Luo · Fei Chen · Zhenguo Li · Jiashi Feng
- 2023 Poster: Proximity-Informed Calibration for Deep Neural Networks
  Miao Xiong · Ailin Deng · Pang Wei Koh · Jiaying Wu · Shen Li · Jianqing Xu · Bryan Hooi
- 2023 Poster: XAGen: 3D Expressive Human Avatars Generation
  Eric Z. XU · Jianfeng Zhang · Jun Hao Liew · Jiashi Feng · Mike Zheng Shou
- 2023 Poster: DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation
  Shentong Mo · Enze Xie · Ruihang Chu · Lanqing Hong · Matthias Niessner · Zhenguo Li
- 2023 Poster: Expanding Small-Scale Datasets with Guided Imagination
  Yifan Zhang · Daquan Zhou · Bryan Hooi · Kai Wang · Jiashi Feng
- 2023 Poster: LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting
  Xu Liu · Yutong Xia · Yuxuan Liang · Junfeng Hu · Yiwei Wang · LEI BAI · Chao Huang · Zhenguang Liu · Bryan Hooi · Roger Zimmermann
- 2022 Spotlight: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022: Keynote 2 by Bryan Hooi: Temporal Graph Learning: Some Challenges and Recent Directions
  Bryan Hooi
- 2022 Poster: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
- 2022 Poster: MGNNI: Multiscale Graph Neural Networks with Implicit Layers
  Juncheng Liu · Bryan Hooi · Kenji Kawaguchi · Xiaokui Xiao
- 2022 Poster: Sharpness-Aware Training for Free
  JIAWEI DU · Daquan Zhou · Jiashi Feng · Vincent Tan · Joey Tianyi Zhou
- 2021: Contributed Talk 3: Architecture Personalization in Resource-constrained Federated Learning
  Mi Luo · Fei Chen · Zhenguo Li · Jiashi Feng
- 2021 Poster: No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
  Mi Luo · Fei Chen · Dapeng Hu · Yifan Zhang · Jian Liang · Jiashi Feng
- 2021 Poster: Adaptive Data Augmentation on Temporal Graphs
  Yiwei Wang · Yujun Cai · Yuxuan Liang · Henghui Ding · Changhu Wang · Siddharth Bhatia · Bryan Hooi
- 2021 Poster: Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
  Yifan Zhang · Bryan Hooi · Dapeng Hu · Jian Liang · Jiashi Feng
- 2021 Poster: SSMF: Shifting Seasonal Matrix Factorization
  Koki Kawabata · Siddharth Bhatia · Rui Liu · Mohit Wadhwa · Bryan Hooi
- 2021 Poster: EIGNN: Efficient Infinite-Depth Graph Neural Networks
  Juncheng Liu · Kenji Kawaguchi · Bryan Hooi · Yiwei Wang · Xiaokui Xiao
- 2019: Coffee Break + Poster Session I
  Wei-Hung Weng · Simon Kohl · Aiham Taleb · Arijit Patra · Khashayar Namdar · Matthias Perkonigg · Shizhan Gong · Abdullah-Al-Zubaer Imran · Amir Abdi · Ilja Manakov · Johannes C. Paetzold · Ben Glocker · Dushyant Sahoo · Shreyas Fadnavis · Karsten Roth · Xueqing Liu · Yifan Zhang · Alexander Preuhs · Fabian Eitel · Anusua Trivedi · Tomer Weiss · Darko Stern · Liset Vazquez Romaguera · Johannes Hofmanninger · Aakash Kaku · Oloruntobiloba Olatunji · Anastasia Razdaibiedina · Tao Zhang
- 2019 Poster: Multi-marginal Wasserstein GAN
  Jiezhang Cao · Langyuan Mo · Yifan Zhang · Kui Jia · Chunhua Shen · Mingkui Tan