We identify label errors in the test sets of 10 of the most commonly-used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of 3.3% errors across the 10 datasets, where for example 2916 label errors comprise 6% of the ImageNet validation set. Putative label errors are identified using confident learning algorithms and then human-validated via crowdsourcing (54% of the algorithmically-flagged candidates are indeed erroneously labeled). Traditionally, machine learning practitioners choose which model to deploy based on test accuracy — our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets. Surprisingly, we find that lower capacity models may be practically more useful than higher capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by just 5%.
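To make the detection step concrete, here is a minimal, self-contained sketch of the confident-learning idea used to flag candidate label errors. It is not the authors' exact implementation (the full method is available in the open-source cleanlab library); the function name, the simple per-class thresholding rule, and the toy data below are illustrative assumptions.

```python
import numpy as np

def find_candidate_label_errors(labels, pred_probs):
    """Flag likely label errors with a simplified confident-learning rule.

    labels:     (n,) integer array of given (possibly noisy) class labels
    pred_probs: (n, K) array of out-of-sample predicted probabilities,
                e.g. obtained via cross-validation
    Returns the indices of examples whose given label disagrees with the
    class the model is confidently predicting.
    """
    n, num_classes = pred_probs.shape

    # Per-class confidence threshold: the model's average self-confidence
    # on examples carrying that class label.
    thresholds = np.array([
        pred_probs[labels == k, k].mean() for k in range(num_classes)
    ])

    candidate_indices = []
    for i in range(n):
        # Classes the model is "confident" about for this example.
        confident = np.where(pred_probs[i] >= thresholds)[0]
        if confident.size == 0:
            continue
        # Most likely class among the confident ones.
        best = confident[np.argmax(pred_probs[i, confident])]
        if best != labels[i]:
            candidate_indices.append(i)
    return np.array(candidate_indices)

if __name__ == "__main__":
    # Toy example: 3 classes, 100 examples, a few deliberately corrupted labels.
    rng = np.random.default_rng(0)
    pred_probs = rng.dirichlet(np.ones(3), size=100)
    labels = pred_probs.argmax(axis=1)
    labels[:5] = (labels[:5] + 1) % 3  # corrupt the first five labels
    print(find_candidate_label_errors(labels, pred_probs))
```

In the study, candidates flagged in this manner were then shown to crowd workers for human validation; as noted above, roughly 54% of the algorithmically-flagged candidates were confirmed as genuine label errors.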
Author Information
Curtis Northcutt (Cleanlab, ChipBrain, MIT)
Curtis Northcutt is a fifth-year Ph.D. candidate in Computer Science and an MITx Digital Learning Research Fellow in the Office of Digital Learning at MIT, working under the supervision of Isaac Chuang. His work spans learning with mislabeled training data, semi-supervised and unsupervised learning, cheating detection, and online education. Curtis has won numerous awards, including the MIT Morris Joseph Levin Masters Thesis Award, an NSF Graduate Research Fellowship, the Barry M. Goldwater National Scholarship, and the Vanderbilt Founder’s Medal. He created and is responsible for the cheating detection system used by MITx online course teams, particularly for the MicroMasters courses. He has led or contributed to numerous research and industrial efforts, and has worked or interned at Amazon Research (Alexa), Facebook AI Research (FAIR), Microsoft Research (MSR) India, MIT Lincoln Laboratory, Microsoft, NASA, General Electric, and a National Science Foundation REU, with collaborations spanning MIT, Harvard, Vanderbilt, Notre Dame, and the University of Kentucky.
Anish Athalye (MIT CSAIL)
Jonas Mueller (Amazon Web Services)
More from the Same Authors
- 2021 : Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks
  Curtis Northcutt · Anish Athalye · Jonas Mueller
- 2021 : Benchmarking Multimodal AutoML for Tabular Data with Text Fields
  Xingjian Shi · Jonas Mueller · Nick Erickson · Mu Li · Alexander Smola
- 2021 : Robust Reinforcement Learning for Shifting Dynamics During Deployment
  Samuel Stanton · Rasool Fakoor · Jonas Mueller · Andrew Gordon Wilson · Alexander Smola
- 2022 : Utilizing supervised models to infer consensus labels and their quality from data with multiple annotators
  Hui Wen Goh · Ulyana Tkachenko · Jonas Mueller
- 2022 Poster: Adaptive Interest for Emphatic Reinforcement Learning
  Martin Klissarov · Rasool Fakoor · Jonas Mueller · Kavosh Asadi · Taesup Kim · Alexander Smola
- 2021 : Curtis Northcutt
  Curtis Northcutt
- 2021 Poster: Continuous Doubly Constrained Batch Reinforcement Learning
  Rasool Fakoor · Jonas Mueller · Kavosh Asadi · Pratik Chaudhari · Alexander Smola
- 2021 Poster: Deep Extended Hazard Models for Survival Analysis
  Qixian Zhong · Jonas Mueller · Jane-Ling Wang
- 2021 Poster: Overinterpretation reveals image classification model pathologies
  Brandon Carter · Siddhartha Jain · Jonas Mueller · David Gifford
- 2020 Poster: Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation
  Rasool Fakoor · Jonas Mueller · Nick Erickson · Pratik Chaudhari · Alexander Smola
- 2018 : Accepted papers
  Sven Gowal · Bogdan Kulynych · Marius Mosbach · Nicholas Frosst · Phil Roth · Utku Ozbulak · Simral Chaudhary · Toshiki Shibahara · Salome Viljoen · Nikita Samarin · Briland Hitaj · Rohan Taori · Emanuel Moss · Melody Guan · Lukas Schott · Angus Galloway · Anna Golubeva · Xiaomeng Jin · Felix Kreuk · Akshayvarun Subramanya · Vipin Pillai · Hamed Pirsiavash · Giuseppe Ateniese · Ankita Kalra · Logan Engstrom · Anish Athalye
- 2017 : Synthesizing Robust Adversarial Examples
  Andrew Ilyas · Anish Athalye · Logan Engstrom · Kevin Kwok
- 2016 : Contributed Talk 1: Learning Optimal Interventions
  Jonas Mueller
- 2015 Poster: Principal Differences Analysis: Interpretable Characterization of Differences between Distributions
  Jonas Mueller · Tommi Jaakkola