We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition.
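The reassessment described above hinges on scoring predictions against independently collected annotations rather than the original single labels, where each image may carry a set of plausible labels. The sketch below is a minimal illustration of that comparison; the function names, the set-valued label representation, and the exclusion of images with no valid labels are assumptions for illustration, not the paper's exact protocol.

```python
def original_accuracy(predictions, labels):
    """Top-1 accuracy against the original single ImageNet labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def multilabel_accuracy(predictions, label_sets):
    """Score a prediction as correct if it lies in the annotated set of
    plausible labels; images with an empty label set are excluded
    (an assumption of this sketch)."""
    scored = [(p, s) for p, s in zip(predictions, label_sets) if s]
    correct = sum(p in s for p, s in scored)
    return correct / len(scored)

# Toy example: four images, integer class ids.
preds = [0, 1, 2, 3]
orig = [0, 1, 1, 3]
sets = [{0}, {1, 2}, {1}, set()]
print(original_accuracy(preds, orig))    # 0.75
print(multilabel_accuracy(preds, sets))  # 2/3
```

Under this kind of evaluation, a classifier can gain accuracy relative to the original labels when its "errors" are in fact alternative valid labels for multi-object images.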
Author Information
Alexander Kolesnikov (Google Research, Brain team)
More from the Same Authors
- 2023 Poster: Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design
  Ibrahim Alabdulmohsin · Lucas Beyer · Alexander Kolesnikov · Xiaohua Zhai
- 2022 Poster: UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes
  Alexander Kolesnikov · André Susano Pinto · Lucas Beyer · Xiaohua Zhai · Jeremiah Harmsen · Neil Houlsby
- 2021: Live panel: Did we solve ImageNet?
  Shibani Santurkar · Alexander Kolesnikov · Becca Roelofs