Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks
Curtis Northcutt · Anish Athalye · Jonas Mueller

Tue Dec 07 12:35 AM -- 12:45 AM (PST)

We identify label errors in the test sets of 10 of the most commonly-used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of 3.3% errors across the 10 datasets, where for example 2916 label errors comprise 6% of the ImageNet validation set. Putative label errors are identified using confident learning algorithms and then human-validated via crowdsourcing (54% of the algorithmically-flagged candidates are indeed erroneously labeled). Traditionally, machine learning practitioners choose which model to deploy based on test accuracy — our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets. Surprisingly, we find that lower capacity models may be practically more useful than higher capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by just 5%.
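To make the flagging step concrete: the putative label errors are found with confident learning, which compares each example's given label against the classes a model predicts with high confidence. The snippet below is a minimal NumPy sketch of that thresholding idea, not the authors' implementation (their released tooling is the cleanlab library); the function name and toy data are purely illustrative.

```python
import numpy as np

def flag_candidate_label_errors(labels, pred_probs):
    """Simplified confident-learning-style flagging of likely label errors.

    labels:     (n,) array of given (possibly noisy) integer class labels
    pred_probs: (n, k) array of out-of-sample predicted class probabilities
    Returns indices of examples whose given label disagrees with a
    confidently predicted class.
    """
    n, k = pred_probs.shape
    # Per-class confidence threshold: mean predicted probability of class j
    # among the examples that are labeled j.
    thresholds = np.array([
        pred_probs[labels == j, j].mean() if np.any(labels == j) else np.inf
        for j in range(k)
    ])
    # An example is "confidently" a member of class j if its predicted
    # probability for j meets or exceeds that class's threshold.
    confident = pred_probs >= thresholds            # (n, k) boolean mask
    # Among the confident classes, pick the most probable one per example.
    masked = np.where(confident, pred_probs, -np.inf)
    best = masked.argmax(axis=1)
    has_confident = confident.any(axis=1)
    # Flag examples whose confident class disagrees with the given label.
    return np.flatnonzero(has_confident & (best != labels))

# Toy usage; in practice pred_probs come from cross-validated models.
labels = np.array([0, 0, 1, 1, 1])
pred_probs = np.array([
    [0.9, 0.1],
    [0.2, 0.8],   # labeled 0 but confidently predicted 1 -> flagged
    [0.1, 0.9],
    [0.8, 0.2],   # labeled 1 but confidently predicted 0 -> flagged
    [0.4, 0.6],
])
print(flag_candidate_label_errors(labels, pred_probs))  # [1 3]
```

In the paper's pipeline, examples flagged this way are then sent to crowdsourced human review, and only the human-confirmed errors are used to build the corrected test sets on which models are re-ranked.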

Author Information

Curtis Northcutt (Cleanlab, ChipBrain, MIT)

Curtis Northcutt is a fifth-year Ph.D. candidate in Computer Science and an MITx Digital Learning Research Fellow in the Office of Digital Learning at MIT, working under the supervision of Isaac Chuang. His work spans learning with mislabeled training data, semi-supervised and unsupervised learning, cheating detection, and online education. Curtis has won numerous awards, including the MIT Morris Joseph Levin Masters Thesis Award, an NSF Graduate Research Fellowship, the Barry M. Goldwater National Scholarship, and the Vanderbilt Founder’s Medal. Curtis created and is responsible for the cheating detection system used by MITx online course teams, particularly the MicroMasters courses. He has led or contributed to numerous research and industrial efforts, working or interning at Amazon Research (Alexa), Facebook AI Research (FAIR), Microsoft Research (MSR) India, MIT Lincoln Laboratory, Microsoft, NASA, General Electric, and a National Science Foundation REU, with collaborations spanning MIT, Harvard, Vanderbilt, Notre Dame, and the University of Kentucky.

Anish Athalye (MIT CSAIL)
Jonas Mueller (Amazon Web Services)
