Evaluating Machine Accuracy on ImageNet
Vaishaal Shankar · Becca Roelofs · Horia Mania · Benjamin Recht · Ludwig Schmidt
Paper URL: http://proceedings.mlr.press/v119/shankar20c/shankar20c.pdf

We evaluate a wide range of ImageNet models against five trained human labelers. In our year-long experiment, the trained humans first annotated 40,000 images from the ImageNet and ImageNetV2 test sets with multi-class labels to enable a semantically coherent evaluation. We then measured the classification accuracy of the five trained humans on the full task with 1,000 classes. Only the latest models from 2020 are on par with our best human labeler, and human accuracy on the 590 object classes is still 4% and 10% higher than the best model on ImageNet and ImageNetV2, respectively. Moreover, humans achieve the same accuracy on ImageNet and ImageNetV2, while all models see a consistent accuracy drop. Overall, our results show that there is still substantial room for improvement on ImageNet and that direct accuracy comparisons between humans and machines may overstate machine performance.
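The multi-class (multi-label) evaluation described above scores a prediction as correct if it matches any of an image's acceptable labels, rather than a single fixed label. A minimal sketch of that scoring rule is below; the function name and data layout are illustrative assumptions, not taken from the paper's released code.

```python
# Hedged sketch: top-1 accuracy under multi-label ground truth, where
# each image may have several semantically valid classes.
# All names here are illustrative, not from the paper's codebase.

def multilabel_top1_accuracy(predictions, valid_labels):
    """predictions: list of predicted class ids, one per image.
    valid_labels: list of sets of acceptable class ids per image.
    A prediction counts as correct if it lies in the image's set."""
    correct = sum(
        1 for pred, labels in zip(predictions, valid_labels)
        if pred in labels
    )
    return correct / len(predictions)

# Toy example: 3 images; the second image has two acceptable labels.
preds = [7, 12, 3]
labels = [{7}, {12, 15}, {4}]
print(multilabel_top1_accuracy(preds, labels))  # 2 of 3 correct
```

Under this rule, a classifier is not penalized when an image genuinely contains multiple ImageNet classes, which is what makes human-versus-machine comparisons on the 1,000-class task semantically coherent.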

Author Information

Vaishaal Shankar (UC Berkeley)
Becca Roelofs (Google Research)
Horia Mania (UC Berkeley)
Benjamin Recht (UC Berkeley)
Ludwig Schmidt (University of Washington)