The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this effect, the early stopping trick, which stops the optimization at an early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering a DNN as a whole. However, a DNN can be viewed as a composition of layers, and we find that the latter layers of a DNN are much more sensitive to label noise, while the former layers are quite robust. Therefore, selecting a single stopping point for the whole network may cause different DNN layers to antagonistically affect each other, degrading the final performance. In this paper, we propose to separate a DNN into different parts and train them progressively to address this problem. Instead of early stopping, which trains the whole DNN all at once, we first train the former DNN layers by optimizing the DNN for a relatively large number of epochs. We then progressively train the latter DNN layers for a smaller number of epochs, with the preceding layers fixed, to counteract the impact of noisy labels. We term the proposed method progressive early stopping (PES). Despite its simplicity, compared with traditional early stopping, PES helps obtain more promising and stable results. Furthermore, by combining PES with existing approaches for learning with noisy labels, we achieve state-of-the-art performance on image classification benchmarks. The code is publicly available at https://github.com/tmllab/PES.
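The staged training described above can be sketched as a simple schedule: the first stage optimizes the whole network for many epochs (benefiting the robust former layers), and each later stage trains only the newly added latter part for fewer epochs with all preceding parts frozen. This is a minimal illustrative sketch, not the authors' implementation; the split into three parts and the epoch counts are assumptions for the example.

```python
def pes_schedule(num_parts, epochs_per_stage):
    """Yield one (trainable_part, frozen_parts, epochs) tuple per stage.

    num_parts: number of pieces the network is split into.
    epochs_per_stage: one epoch count per stage, typically decreasing,
        so latter layers see fewer updates and memorize less noise.
    """
    assert len(epochs_per_stage) == num_parts
    for stage in range(num_parts):
        frozen = list(range(stage))  # all preceding parts are kept fixed
        yield stage, frozen, epochs_per_stage[stage]

# Example: split a network into 3 parts; in stage 0 nothing is frozen,
# so the whole DNN is optimized (for the benefit of the former layers).
schedule = list(pes_schedule(3, [25, 7, 5]))
```

In a real training loop, "freezing" a part would mean excluding its parameters from the optimizer (or setting `requires_grad=False` in a framework like PyTorch) before running the stage's epochs.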
Author Information
Yingbin Bai (The University of Sydney)
Erkun Yang (Xidian University)
Bo Han (HKBU / RIKEN)
Yanhua Yang (Xidian University)
Jiatong Li (University of Technology Sydney)
Yinian Mao (Meituan)
Gang Niu (RIKEN)
Tongliang Liu (The University of Sydney)
More from the Same Authors
- 2021 Spotlight: TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation »
  Haoang Chi · Feng Liu · Wenjing Yang · Long Lan · Tongliang Liu · Bo Han · William Cheung · James Kwok
- 2021: On the Role of Pre-training for Meta Few-Shot Learning »
  Chia-You Chen · Hsuan-Tien Lin · Masashi Sugiyama · Gang Niu
- 2021 Poster: Universal Semi-Supervised Learning »
  Zhuo Huang · Chao Xue · Bo Han · Jian Yang · Chen Gong
- 2021 Poster: Probabilistic Margins for Instance Reweighting in Adversarial Training »
  Qizhou Wang · Feng Liu · Bo Han · Tongliang Liu · Chen Gong · Gang Niu · Mingyuan Zhou · Masashi Sugiyama
- 2021 Poster: Instance-dependent Label-noise Learning under a Structural Causal Model »
  Yu Yao · Tongliang Liu · Mingming Gong · Bo Han · Gang Niu · Kun Zhang
- 2021 Poster: TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation »
  Haoang Chi · Feng Liu · Wenjing Yang · Long Lan · Tongliang Liu · Bo Han · William Cheung · James Kwok
- 2021 Poster: Confident Anchor-Induced Multi-Source Free Domain Adaptation »
  Jiahua Dong · Zhen Fang · Anjin Liu · Gan Sun · Tongliang Liu
- 2020 Poster: Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning »
  Yu Yao · Tongliang Liu · Bo Han · Mingming Gong · Jiankang Deng · Gang Niu · Masashi Sugiyama
- 2020 Poster: Part-dependent Label Noise: Towards Instance-dependent Label Noise »
  Xiaobo Xia · Tongliang Liu · Bo Han · Nannan Wang · Mingming Gong · Haifeng Liu · Gang Niu · Dacheng Tao · Masashi Sugiyama
- 2020 Spotlight: Part-dependent Label Noise: Towards Instance-dependent Label Noise »
  Xiaobo Xia · Tongliang Liu · Bo Han · Nannan Wang · Mingming Gong · Haifeng Liu · Gang Niu · Dacheng Tao · Masashi Sugiyama
- 2020 Poster: Rethinking Importance Weighting for Deep Learning under Distribution Shift »
  Tongtong Fang · Nan Lu · Gang Niu · Masashi Sugiyama
- 2020 Spotlight: Rethinking Importance Weighting for Deep Learning under Distribution Shift »
  Tongtong Fang · Nan Lu · Gang Niu · Masashi Sugiyama
- 2020 Poster: Provably Consistent Partial-Label Learning »
  Lei Feng · Jiaqi Lv · Bo Han · Miao Xu · Gang Niu · Xin Geng · Bo An · Masashi Sugiyama
- 2020 Poster: Domain Generalization via Entropy Regularization »
  Shanshan Zhao · Mingming Gong · Tongliang Liu · Huan Fu · Dacheng Tao
- 2019: Poster Presentations »
  Rahul Mehta · Andrew Lampinen · Binghong Chen · Sergio Pascual-Diaz · Jordi Grau-Moya · Aldo Faisal · Jonathan Tompson · Yiren Lu · Khimya Khetarpal · Martin Klissarov · Pierre-Luc Bacon · Doina Precup · Thanard Kurutach · Aviv Tamar · Pieter Abbeel · Jinke He · Maximilian Igl · Shimon Whiteson · Wendelin Boehmer · Raphaël Marinier · Olivier Pietquin · Karol Hausman · Sergey Levine · Chelsea Finn · Tianhe Yu · Lisa Lee · Benjamin Eysenbach · Emilio Parisotto · Eric Xing · Ruslan Salakhutdinov · Hongyu Ren · Anima Anandkumar · Deepak Pathak · Christopher Lu · Trevor Darrell · Alexei Efros · Phillip Isola · Feng Liu · Bo Han · Gang Niu · Masashi Sugiyama · Saurabh Kumar · Janith Petangoda · Johan Ferret · James McClelland · Kara Liu · Animesh Garg · Robert Lange
- 2019 Poster: Uncoupled Regression from Pairwise Comparison Data »
  Liyuan Xu · Junya Honda · Gang Niu · Masashi Sugiyama
- 2019 Poster: Are Anchor Points Really Indispensable in Label-Noise Learning? »
  Xiaobo Xia · Tongliang Liu · Nannan Wang · Bo Han · Chen Gong · Gang Niu · Masashi Sugiyama
- 2019 Poster: Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence »
  Fengxiang He · Tongliang Liu · Dacheng Tao
- 2018 Poster: Binary Classification from Positive-Confidence Data »
  Takashi Ishida · Gang Niu · Masashi Sugiyama
- 2018 Spotlight: Binary Classification from Positive-Confidence Data »
  Takashi Ishida · Gang Niu · Masashi Sugiyama
- 2018 Poster: Masking: A New Perspective of Noisy Supervision »
  Bo Han · Jiangchao Yao · Gang Niu · Mingyuan Zhou · Ivor Tsang · Ya Zhang · Masashi Sugiyama
- 2018 Poster: Co-teaching: Robust training of deep neural networks with extremely noisy labels »
  Bo Han · Quanming Yao · Xingrui Yu · Gang Niu · Miao Xu · Weihua Hu · Ivor Tsang · Masashi Sugiyama
- 2017 Poster: Positive-Unlabeled Learning with Non-Negative Risk Estimator »
  Ryuichi Kiryo · Gang Niu · Marthinus C du Plessis · Masashi Sugiyama
- 2017 Poster: Learning from Complementary Labels »
  Takashi Ishida · Gang Niu · Weihua Hu · Masashi Sugiyama
- 2017 Oral: Positive-Unlabeled Learning with Non-Negative Risk Estimator »
  Ryuichi Kiryo · Gang Niu · Marthinus C du Plessis · Masashi Sugiyama
- 2016 Poster: Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning »
  Gang Niu · Marthinus Christoffel du Plessis · Tomoya Sakai · Yao Ma · Masashi Sugiyama
- 2014 Poster: Analysis of Learning from Positive and Unlabeled Data »
  Marthinus C du Plessis · Gang Niu · Masashi Sugiyama
- 2011 Poster: Analysis and Improvement of Policy Gradient Estimation »
  Tingting Zhao · Hirotaka Hachiya · Gang Niu · Masashi Sugiyama