
 
Poster
Self-Adaptive Training: beyond Empirical Risk Minimization
Lang Huang · Chao Zhang · Hongyang Zhang

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1825

We propose self-adaptive training---a new training algorithm that dynamically calibrates the training process using model predictions, without incurring extra computational cost---to improve the generalization of deep learning on potentially corrupted training data. This problem is important for robust learning from data corrupted by, e.g., random noise and adversarial examples. The standard empirical risk minimization (ERM) on such data, however, may easily overfit the noise and thus suffer from sub-optimal performance. In this paper, we observe that model predictions can substantially benefit the training process: self-adaptive training significantly mitigates the overfitting issue and improves generalization over ERM under both random and adversarial noise. Moreover, in sharp contrast to the recently discovered double-descent phenomenon in ERM, self-adaptive training exhibits a single-descent error-capacity curve, indicating that the double-descent phenomenon might be a result of overfitting to noise. Experiments on the CIFAR and ImageNet datasets verify the effectiveness of our approach in two applications: classification with label noise and selective classification. The code is available at \url{https://github.com/LayneH/self-adaptive-training}.
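To make the high-level idea concrete, below is a minimal PyTorch-style sketch of one way to calibrate training with the model's own predictions: each example keeps a soft target that starts as its (possibly noisy) one-hot label and, after a warm-up period, is updated as an exponential moving average of the model's predictions. The class name, momentum value, and warm-up length are illustrative assumptions, not the authors' exact implementation; see the linked repository for the official code.

import torch
import torch.nn.functional as F


class SelfAdaptiveTargets:
    """Sketch: per-example soft targets blended from model predictions."""

    def __init__(self, labels, num_classes, momentum=0.9, warmup_epochs=60):
        # labels: LongTensor of (possibly noisy) class labels, one per example.
        # Every target starts as the one-hot encoding of the given label.
        # Assumes targets and model outputs live on the same device.
        self.targets = F.one_hot(labels, num_classes).float()
        self.momentum = momentum
        self.warmup_epochs = warmup_epochs

    def loss(self, logits, indices, epoch):
        # indices: dataset indices of the examples in this mini-batch.
        if epoch < self.warmup_epochs:
            # Plain cross-entropy against the given labels during warm-up.
            return F.cross_entropy(logits, self.targets[indices].argmax(dim=1))
        probs = F.softmax(logits, dim=1)
        # Update stored targets toward the model's current predictions.
        with torch.no_grad():
            self.targets[indices] = (
                self.momentum * self.targets[indices]
                + (1.0 - self.momentum) * probs
            )
        target = self.targets[indices]
        # Weight each example by the confidence of its current target, so
        # likely-mislabeled examples contribute less to the loss.
        weight = target.max(dim=1).values
        per_example = -(target * F.log_softmax(logits, dim=1)).sum(dim=1)
        return (weight * per_example).sum() / weight.sum()

In use, the training loop would pass each mini-batch's dataset indices so that the per-example targets can be looked up and updated; because the soft targets gradually move toward the model's predictions, noisy labels are down-weighted without any extra forward or backward passes.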

Author Information

Lang Huang (Peking University)

I am currently a second-year Ph.D. student at the Department of Information & Communication Engineering, The University of Tokyo. Prior to that, I received a Master’s degree from the Department of Machine Intelligence, School of Electronics Engineering and Computer Science, Peking University in 2021. My research interests include self-supervised representation learning, robust learning from noisy data, and vision transformers.

Chao Zhang (Peking University)
Hongyang Zhang (TTIC)
