The program includes a wide variety of exciting competitions in different domains: some focus on applications, while others try to unify fields, tackle technical challenges, or directly address important real-world problems. The aim of this broad program is that anyone who wants to work on or learn from a competition can find something to their liking.
In this session, we have the following competitions:
* The Image Similarity Challenge
* Enhanced Zero-Resource Speech Challenge 2021: Language Modelling from Speech and Images
* The BEETL Competition: Benchmarks for EEG Transfer Learning
* Multimodal Single-Cell Data Integration
* The AI Driving Olympics
Fri 2:00 a.m. - 2:05 a.m. | Introduction to Competition Day 4 (Intro)
Marco Ciccone
Fri 2:05 a.m. - 2:25 a.m. | Image Similarity Challenge + Q&A (Talk)
Matching images by similarity consists of identifying the source of an altered image in a large collection of unrelated images. This technology is applied to a range of content moderation domains: misinformation, copyright infringement, scams, etc. In these domains, it has a concrete, real-world impact in protecting the integrity of people engaging in social media. This challenge aims to compile a dataset focused on image similarity in order to benchmark efforts from academic researchers and industrial actors. Participants are provided with a reference collection of one million images and a set of query images; the query images are transformed versions of reference images, where the transformations include various kinds of image editing, collages, and re-encoding. Participants are tasked with finding the source image in the dataset. Baseline methods include all techniques from the instance matching literature (keypoint matching, global descriptor extraction). The anticipated scientific impact is to re-establish image similarity detection as an important and challenging task in computer vision and to refresh the state of the art. Participants could adopt, for example, recent approaches from self-supervised learning.
Matthijs Douze · Zoe Papakipos · Cristian Canton · Lowik Chanussot · Giorgos Tolias · Filip Radenovic · Ondrej Chum
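As a rough illustration of the global-descriptor baseline mentioned above, the sketch below indexes reference embeddings and retrieves nearest neighbors with FAISS. The random vectors stand in for a real feature extractor (e.g. a CNN or self-supervised embedding), and the function names are ours, not the challenge's.

```python
import numpy as np
import faiss  # similarity-search library, assumed available

def build_index(reference_descriptors: np.ndarray) -> faiss.Index:
    """Index L2-normalized global descriptors for inner-product (cosine) search."""
    dim = reference_descriptors.shape[1]
    index = faiss.IndexFlatIP(dim)  # exact search; swap for IVF/HNSW at million scale
    faiss.normalize_L2(reference_descriptors)
    index.add(reference_descriptors)
    return index

def match_queries(index: faiss.Index, query_descriptors: np.ndarray, k: int = 10):
    """Return top-k candidate reference images (scores, ids) per query."""
    faiss.normalize_L2(query_descriptors)
    return index.search(query_descriptors, k)

# Toy usage: queries are lightly perturbed copies of the first five references,
# mimicking "transformed versions of reference images".
rng = np.random.default_rng(0)
refs = rng.standard_normal((1000, 256)).astype("float32")
queries = refs[:5] + 0.05 * rng.standard_normal((5, 256)).astype("float32")
index = build_index(refs)
scores, ids = match_queries(index, queries, k=3)
print(ids[:, 0])  # each query should retrieve its source: [0 1 2 3 4]
```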
Fri 2:24 a.m. - 5:24 a.m. | Breakout: Image Similarity Challenge (Breakout session)
Fri 2:25 a.m. - 2:45 a.m. | Enhanced Zero-Resource Speech Challenge 2021: Language Modelling from Speech and Images + Q&A (Talk)
The Zero Resource Speech Challenge is a series that has been running since 2015 and that aims to advance research in the unsupervised training of speech and dialogue tools, with applications in speech technology for under-resourced languages. This year, we are running an "enhanced" version of the newest challenge task: language modelling from speech. This task asks participants to learn a sequential model that can assign probabilities to sequences, as a typical language model does, but that must be trained, and operate, without any text. Assessing and improving our ability to build such a model is critical to expanding applications such as speech recognition and machine translation to languages without textual resources. The "enhanced" version makes two modifications: it expands the call for submissions to a "high GPU budget" category, encouraging very large models in addition to the smaller, "lower-budget" ones experimented with up to now; and it includes a new, experimental "multi-modal" track, which allows participants to assess the performance of models that include images in training, in addition to audio. Baseline models have already been prepared and evaluated for the high-budget and multi-modal settings.
Ewan Dunbar · Alejandrina Cristia · Okko Räsänen · Bertrand Higy · Marvin Lavechin · Grzegorz Chrupała · Afra Alishahi · Chen Yu · Maureen De Seyssel · Tu Anh Nguyen · Mathieu Bernard · Nicolas Hamilakis · Emmanuel Dupoux
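To make the "language modelling without text" setup concrete, here is a minimal sketch: the audio is assumed to have already been discretized into pseudo-units (e.g. by clustering self-supervised speech features), and a toy smoothed bigram model assigns log-probabilities to unit sequences. The challenge's actual baselines are large neural models; everything below is illustrative.

```python
import numpy as np
from collections import Counter

# Toy corpus of pseudo-unit sequences, standing in for units obtained by
# clustering self-supervised speech representations; the unit inventory
# and the clustering step are assumptions, not part of the challenge spec.
corpus = [[3, 1, 4, 1, 5], [3, 1, 4, 2], [1, 4, 1, 5, 3]]
vocab = 6  # number of discrete units

# Bigram counts with add-one smoothing: a tiny stand-in for the neural
# language models the challenge actually targets.
bigrams = Counter((a, b) for seq in corpus for a, b in zip(seq, seq[1:]))
contexts = Counter(u for seq in corpus for u in seq[:-1])

def log_prob(seq):
    """Log-probability of a unit sequence under the smoothed bigram model."""
    return sum(
        np.log((bigrams[(a, b)] + 1) / (contexts[a] + vocab))
        for a, b in zip(seq, seq[1:])
    )

# As in the challenge's acceptability-style evaluations, the model should
# score sequences resembling the training data above unrelated ones.
print(log_prob([3, 1, 4, 1]), log_prob([5, 5, 2, 2]))
```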
Fri 2:44 a.m. - 5:44 a.m. | Breakout: Enhanced Zero-Resource Speech Challenge 2021: Language Modelling from Speech and Images (Breakout session)
Schedule (UTC/PST Timezones)
Fri 2:45 a.m. - 3:05 a.m. | The NeurIPS 2021 BEETL Competition: Benchmarks for EEG Transfer Learning + Q&A (Talk)
The Benchmarks for EEG Transfer Learning (BEETL) competition aims to stimulate the development of transfer and meta-learning algorithms applied to EEG data, a prime example of what makes biosignal data hard to use. BEETL acts as a much-needed benchmark for domain adaptation algorithms in EEG decoding and provides a real-world goal to stimulate transfer learning and meta-learning developments in both academia and industry. Given the multitude of different EEG-based algorithms that exist, we offer two specific challenges: Task 1 is a cross-subject sleep stage decoding challenge, reflecting the need for transfer learning in clinical diagnostics, and Task 2 is a cross-dataset motor imagery decoding challenge, reflecting the need for transfer learning in human interfacing.
Xiaoxi Wei · Vinay Jayaram · Sylvain Chevallier · Giulia Luise · Camille Jeunet · Moritz Grosse-Wentrup · Alexandre Gramfort · Aldo A Faisal
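One concrete flavor of transfer learning for cross-subject EEG is statistical alignment of each subject's data before classification. The sketch below implements Euclidean alignment (whitening trials by the inverse square root of the subject's mean covariance), a common baseline in the EEG transfer literature; it is shown for illustration and is not the competition's official baseline.

```python
import numpy as np

def euclidean_align(trials: np.ndarray) -> np.ndarray:
    """Align one subject's EEG trials, shaped (n_trials, n_channels, n_samples),
    by whitening with the inverse square root of the subject's mean covariance."""
    covs = np.stack([t @ t.T / t.shape[1] for t in trials])
    mean_cov = covs.mean(axis=0)
    # Inverse matrix square root via eigendecomposition.
    vals, vecs = np.linalg.eigh(mean_cov)
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return np.einsum("ij,njk->nik", inv_sqrt, trials)

# After alignment, each subject's trials have identity mean covariance, so a
# classifier trained on source subjects transfers more readily to an unseen
# target subject.
rng = np.random.default_rng(0)
subject_a = rng.standard_normal((20, 8, 256))  # 20 trials, 8 channels
aligned = euclidean_align(subject_a)
print(np.mean([t @ t.T / t.shape[1] for t in aligned], axis=0).round(2))
```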
Fri 3:04 a.m. - 6:04 a.m. | Breakout: The NeurIPS 2021 BEETL Competition: Benchmarks for EEG Transfer Learning (Breakout session)
Fri 3:05 a.m. - 3:25 a.m. | Multimodal Single-Cell Data Integration + Q&A (Talk)
Scaling from a dozen cells a decade ago to millions of cells today, single-cell measurement technologies are driving a revolution in the life sciences. Recent advances make it possible to measure multiple high-dimensional modalities (e.g. DNA accessibility, RNA, and proteins) simultaneously in the same cell. These data provide, for the first time, a direct and comprehensive view into the layers of gene regulation that drive biological diversity and disease. In this competition, we present three critical tasks on multimodal single-cell data using public datasets and a first-of-its-kind multi-omics benchmarking dataset. Teams will predict one modality from another and learn representations of multiple modalities measured in the same cells. Progress will elucidate how a common genetic blueprint gives rise to distinct cell types and processes, as a foundation for improving human health.
Daniel Burkhardt · Smita Krishnaswamy · Malte Luecken · Debora Marks · Angela Pisco · Bastian Rieck · Jian Tang · Alexander Tong · Fabian Theis · Guy Wolf
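A minimal sketch of the "predict one modality from another" task, assuming paired per-cell measurements: a ridge regression maps a stand-in chromatin-accessibility matrix to stand-in gene expression and is scored by held-out RMSE. Real entries work with sparse count matrices and substantial preprocessing; all data and dimensions below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic paired data: each row is one cell measured in both modalities.
rng = np.random.default_rng(0)
n_cells, n_atac, n_rna = 500, 100, 20
atac = rng.poisson(1.0, size=(n_cells, n_atac)).astype(float)  # accessibility
true_map = rng.standard_normal((n_atac, n_rna)) * 0.1
rna = atac @ true_map + rng.standard_normal((n_cells, n_rna)) * 0.1  # expression

X_tr, X_te, y_tr, y_te = train_test_split(atac, rna, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)  # one linear map per output gene
pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))  # competition-style error metric
print(f"held-out RMSE: {rmse:.3f}")
```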
Fri 3:24 a.m. - 6:24 a.m. | Breakout: Multimodal Single-Cell Data Integration (Breakout session)
Fri 3:25 a.m. - 3:45 a.m. | AI Driving Olympics + Q&A (Talk)
The AI Driving Olympics (AI-DO) is a series of embodied intelligence competitions in the field of autonomous vehicles. The overall objective of AI-DO is to provide accessible mechanisms for benchmarking progress in autonomy applied to the task of autonomous driving. This edition of the AI-DO features three different leagues: (a) urban driving, based on the Duckietown platform; (b) advanced perception, based on the Motional nuScenes dataset; and (c) racing, based on the AWS DeepRacer platform. Each league has several "challenges" with independent leaderboards. The urban driving and racing leagues include embodied tasks, where agents are deployed on physical robots in addition to simulation.
Andrea Censi · Liam Paull · Jacopo Tani · Emilio Frazzoli · Holger Caesar · Matthew Walter · Andrea Daniele · Sahika Genc · Sharada Mohanty
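To give a sense of what an embodied driving agent looks like, the sketch below implements a bare sense-act loop mapping a camera frame to wheel commands. The agent interface and the brightness-based steering heuristic are hypothetical; each AI-DO league defines its own submission API (e.g. Duckietown's challenge templates), which this does not reproduce.

```python
import numpy as np

class ReflexAgent:
    """Hypothetical agent: steer toward the brighter half of the image."""

    def act(self, image: np.ndarray) -> tuple[float, float]:
        """Map a grayscale camera frame to (left, right) wheel velocities."""
        mid = image.shape[1] // 2
        left_half = image[:, :mid].mean()
        right_half = image[:, mid:].mean()
        steer = float(np.clip(right_half - left_half, -0.5, 0.5))
        base = 0.4  # constant forward speed
        return base - steer, base + steer

agent = ReflexAgent()
frame = np.random.rand(60, 80)  # stand-in for a camera observation
print(agent.act(frame))  # wheel commands sent to the simulator or robot
```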
Fri 3:44 a.m. - 6:44 a.m. | Breakout: AI Driving Olympics (Breakout session)
Schedule (GMT Timezone)