Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets. Yet such datasets are time-consuming and labor-intensive to obtain for realistic tasks. To reduce the need for labeled data, self-training is widely used in semi-supervised learning: it iteratively assigns pseudo labels to unlabeled samples. Despite its popularity, self-training is widely believed to be unreliable and often leads to training instability. Our experimental studies further reveal that the bias in semi-supervised learning arises both from the problem itself and from inappropriate training with potentially incorrect pseudo labels, whose errors accumulate over the iterative self-training process. To reduce this bias, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of self-training bias, in which the pseudo-labeling function is accurate on labeled samples yet makes as many mistakes as possible on unlabeled samples. We then adversarially optimize the representations to improve the quality of pseudo labels by avoiding this worst case. Extensive experiments show that DST achieves an average improvement of 6.3% over state-of-the-art methods on standard semi-supervised learning benchmark datasets and of 18.9% over FixMatch on 13 diverse tasks. Furthermore, DST can be seamlessly adapted to other self-training methods, helping to stabilize their training and balance performance across classes both when training from scratch and when fine-tuning from pre-trained models.
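The two-head decoupling described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released code, and it omits the adversarial worst-case term: a shared backbone feeds two parameter-independent heads, where one head is trained only on labeled data and generates pseudo labels, while a separate pseudo head consumes those pseudo labels on unlabeled data, so their errors never flow back into the head that produced them. All names here (`DSTSketch`, `dst_step`, `threshold`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of the two-head decoupling described in the abstract.
# Hypothetical and simplified: the adversarial worst-case estimation is omitted.

class DSTSketch(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                             # shared representation
        self.head = nn.Linear(feat_dim, num_classes)         # sees labeled data only; generates pseudo labels
        self.pseudo_head = nn.Linear(feat_dim, num_classes)  # trained with pseudo labels on unlabeled data

    def forward(self, x):
        f = self.backbone(x)
        return self.head(f), self.pseudo_head(f)


def dst_step(model: DSTSketch, x_l, y_l, x_u, threshold: float = 0.95):
    """One simplified step on a labeled batch (x_l, y_l) and an unlabeled batch x_u."""
    logits_l, _ = model(x_l)
    logits_u, pseudo_logits_u = model(x_u)

    # Supervised loss: only the main head sees ground-truth labels.
    loss = F.cross_entropy(logits_l, y_l)

    # Pseudo labels come from the main head without gradient tracking, so
    # mistakes on unlabeled data cannot directly corrupt the generating head.
    with torch.no_grad():
        probs = logits_u.softmax(dim=-1)
        confidence, pseudo_y = probs.max(dim=-1)
        mask = confidence >= threshold                       # keep confident predictions only

    # The pseudo head (and the backbone through it) learns from pseudo labels.
    if mask.any():
        loss = loss + F.cross_entropy(pseudo_logits_u[mask], pseudo_y[mask])
    return loss
```

In the full method, the representations would additionally be optimized against an adversarially estimated worst-case head; that term is left out here for brevity.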
Author Information
Baixu Chen (Tsinghua University)
Junguang Jiang (Tsinghua University)
Ximei Wang (Tsinghua University)
Pengfei Wan (Kuaishou Technology)
Jianmin Wang (Tsinghua University)
Mingsheng Long (Tsinghua University)
More from the Same Authors
- 2022 Poster: Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models
  Yang Shu · Zhangjie Cao · Ziyang Zhang · Jianmin Wang · Mingsheng Long
- 2022 Poster: Supported Policy Optimization for Offline Reinforcement Learning
  Jialong Wu · Haixu Wu · Zihan Qiu · Jianmin Wang · Mingsheng Long
- 2022 Poster: Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting
  Yong Liu · Haixu Wu · Jianmin Wang · Mingsheng Long
- 2022: Domain Adaptation: Theory, Algorithms, and Open Library
  Mingsheng Long
- 2021 Poster: Cycle Self-Training for Domain Adaptation
  Hong Liu · Jianmin Wang · Mingsheng Long
- 2021 Poster: Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting
  Haixu Wu · Jiehui Xu · Jianmin Wang · Mingsheng Long
- 2020 Poster: Co-Tuning for Transfer Learning
  Kaichao You · Zhi Kou · Mingsheng Long · Jianmin Wang
- 2020 Poster: Transferable Calibration with Lower Bias and Variance in Domain Adaptation
  Ximei Wang · Mingsheng Long · Jianmin Wang · Michael Jordan
- 2020 Poster: Stochastic Normalization
  Zhi Kou · Kaichao You · Mingsheng Long · Jianmin Wang
- 2020 Poster: Learning to Adapt to Evolving Domains
  Hong Liu · Mingsheng Long · Jianmin Wang · Yu Wang
- 2019 Poster: Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning
  Xinyang Chen · Sinan Wang · Bo Fu · Mingsheng Long · Jianmin Wang
- 2019 Poster: Transferable Normalization: Towards Improving Transferability of Deep Neural Networks
  Ximei Wang · Ying Jin · Mingsheng Long · Jianmin Wang · Michael Jordan
- 2018 Poster: Conditional Adversarial Domain Adaptation
  Mingsheng Long · Zhangjie Cao · Jianmin Wang · Michael Jordan
- 2018 Poster: Generalized Zero-Shot Learning with Deep Calibration Network
  Shichen Liu · Mingsheng Long · Jianmin Wang · Michael Jordan
- 2017 Poster: PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs
  Yunbo Wang · Mingsheng Long · Jianmin Wang · Zhifeng Gao · Philip S Yu
- 2017 Poster: Learning Multiple Tasks with Multilinear Relationship Networks
  Mingsheng Long · Zhangjie Cao · Jianmin Wang · Philip S Yu
- 2016 Poster: Unsupervised Domain Adaptation with Residual Transfer Networks
  Mingsheng Long · Han Zhu · Jianmin Wang · Michael Jordan
- 2015 Workshop: Transfer and Multi-Task Learning: Trends and New Perspectives
  Anastasia Pentina · Christoph Lampert · Sinno Jialin Pan · Mingsheng Long · Judy Hoffman · Baochen Sun · Kate Saenko