Workshop
ImageNet: Past, Present, and Future
Zeynep Akata · Lucas Beyer · Sanghyuk Chun · A. Sophia Koepke · Diane Larlus · Seong Joon Oh · Rafael Rezende · Sangdoo Yun · Xiaohua Zhai

Mon Dec 13 04:00 AM -- 05:15 PM (PST)
Event URL: https://sites.google.com/view/imagenet-workshop/

Since its release in 2010, ImageNet has played an instrumental role in the development of deep learning architectures for computer vision, enabling neural networks to greatly outperform hand-crafted visual representations. ImageNet also quickly became the go-to benchmark for model architectures and training techniques that eventually reached far beyond image classification. Today’s models are getting close to “solving” the benchmark. Models trained on ImageNet have served as strong initializations for numerous downstream tasks. The ImageNet dataset has even been used for tasks going well beyond its initial purpose of training classification models: it has been leveraged and reinvented for few-shot learning, self-supervised learning, and semi-supervised learning. Interesting re-creations of the ImageNet benchmark enable the evaluation of novel challenges such as robustness, bias, and concept generalization, and more accurate labels have been provided. About ten years later, ImageNet symbolizes a decade of staggering advances in computer vision, deep learning, and artificial intelligence.

We believe now is a good time to discuss what’s next: Did we solve ImageNet? What are the main lessons learned from this benchmark? What should the next generation of ImageNet-like benchmarks encompass? Is language supervision a promising alternative? How can we reflect on the diverse requirements for good datasets and models, such as fairness, privacy, security, generalization, scale, and efficiency?

Author Information

Zeynep Akata (University of Tübingen)
Lucas Beyer (Google Brain Zürich)
Sanghyuk Chun (NAVER AI Lab)

I'm a research scientist and tech leader at NAVER AI Lab, working on machine learning and its applications. In particular, my research interests focus on bridging the gap between two broad topics: reliable machine learning (e.g., robustness [C3, C9, C10, W1, W3], de-biasing and domain generalization [C6, A6], uncertainty estimation [C11, A3], explainability [C5, C11, A2, A4, W2], and fair evaluation [C5, C11]) and learning with limited annotations (e.g., multi-modal learning [C11], weakly-supervised learning [C2, C3, C4, C5, C7, C8, C12, W2, W4, W5, W6, A2, A4], and self-supervised learning). I have also contributed to large-scale machine learning algorithms [C3, C9, C10, C13] at NAVER AI Lab. Prior to working at NAVER, I was a research engineer on the advanced recommendation team (ART) at Kakao from 2016 to 2018. I received a master's degree in Electrical Engineering from Korea Advanced Institute of Science and Technology (KAIST) in 2016. During my master's, I researched a scalable algorithm for robust subspace clustering, based on robust PCA and k-means clustering. Before my master's study, I worked as a software engineering intern at IUM-SOCIUS in 2012. I also did research internships at the Networked and Distributed Computing System Lab at KAIST and at NAVER Labs in summer 2013 and fall 2015, respectively.

A. Sophia Koepke (University of Tübingen)
Diane Larlus (NAVER LABS Europe)
Seong Joon Oh (NAVER AI Lab)
Rafael Rezende (NAVER LABS Europe)
Sangdoo Yun (Clova AI Research, NAVER Corp.)
Xiaohua Zhai (Google Brain)
