The program includes a wide variety of exciting competitions in different domains, with some focusing more on applications and others trying to unify fields, focusing on technical challenges or directly tackling important problems in the world. The aim is for the broad program to make it so that anyone who wants to work on or learn from a competition can find something to their liking.
In this session, we have the following competitions:
* Learning By Doing: Controlling a Dynamical System using Control Theory, Reinforcement Learning, or Causality
* Reconnaissance Blind Chess
* Real Robot Challenge II
* The Billion-Scale Approximate Nearest Neighbor Search Challenge
* MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains
Wed 2:00 a.m. - 2:05 a.m.
Introduction to Competition Day 2 (Intro)
Barbara Caputo
Wed 2:05 a.m. - 2:25 a.m.
Learning By Doing: Controlling a Dynamical System using Control Theory, Reinforcement Learning, or Causality + Q&A (Talk)
Control theory, reinforcement learning, and causality are all ways of mathematically describing how the world changes when we interact with it. Each field offers a different perspective, with its own strengths and weaknesses. In this competition, we aim to bring together researchers from all three fields to encourage cross-disciplinary discussion. The competition is constructed to fit readily into the mathematical frameworks of all three fields, and participants of any background are encouraged to take part. We designed two tracks in which participants must find controls/policies to interact optimally with a target dynamical system: an open-loop/bandit track and a closed-loop/online-RL track.
Sebastian Weichwald · Niklas Pfister · Dominik Baumann · Isabelle Guyon · Oliver Kroemer · Tabitha Lee · Søren Wengel Mogensen · Jonas Peters · Sebastian Trimpe
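The difference between the two tracks can be sketched on a toy system. This is a hypothetical scalar linear system, not the competition's actual dynamics: the open-loop/bandit track commits to a control sequence up front, while the closed-loop/online-RL track computes each control from the state it just observed.

```python
import random

def step(x, u, a=0.9, b=1.0, noise=0.1):
    """Toy scalar linear dynamics: x' = a*x + b*u + Gaussian noise."""
    return a * x + b * u + random.gauss(0.0, noise)

def open_loop_cost(controls, x0=1.0):
    """Open-loop/bandit track: the whole control sequence is fixed in advance."""
    x, cost = x0, 0.0
    for u in controls:
        x = step(x, u)
        cost += x * x  # quadratic penalty for missing the target state 0
    return cost

def closed_loop_cost(gain, horizon=20, x0=1.0):
    """Closed-loop/online track: each control reacts to the observed state."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -gain * x  # simple proportional feedback
        x = step(x, u)
        cost += x * x
    return cost

random.seed(0)
print(open_loop_cost([0.0] * 20))   # do-nothing baseline
print(closed_loop_cost(gain=0.9))   # feedback cancels most of the drift
```

With the same noise sequence, the feedback policy effectively cancels the system's drift each step, so its cost reduces to the accumulated noise, which is why closed-loop control wins here.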
Wed 2:24 a.m. - 5:24 a.m.
Breakout: Learning By Doing: Controlling a Dynamical System using Control Theory, Reinforcement Learning, or Causality (Breakout session)
Schedule (GMT Timezone)
Wed 2:25 a.m. - 2:45 a.m.
Reconnaissance Blind Chess + Q&A (Talk)
Reconnaissance Blind Chess is like chess, except that a player cannot, in general, see the opponent's pieces. Instead, each turn, each player chooses a 3x3 square of the board to observe privately. Algorithms used to create agents for previous games such as chess, Go, and poker break down in Reconnaissance Blind Chess for several reasons, including the imperfect information, the absence of obvious abstractions, and the lack of common knowledge. In addition to this NeurIPS competition, the game was recently included in the new Hidden Information Games Competition (HIGC), organized with the AAAI Reinforcement Learning in Games workshop (2022). Build the best bot for this challenge of making strong decisions in multi-agent scenarios in the face of uncertainty.
Ryan Gardner · Gino Perrotta · Corey Lowman · Casey Richardson · Andrew Newman · Jared Markowitz · Nathan Drenkow · Bart Paulhamus · Ashley J Llorens · Todd Neller · Raman Arora · Bo Li · Mykel J Kochenderfer
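The sense-then-move loop at the heart of the game rewards belief tracking: a bot maintains hypotheses about the hidden position and uses each 3x3 observation to prune them. The sketch below is illustrative only, not the official competition API, and simplifies the hidden state to a single unknown opponent square.

```python
# Toy belief tracking for a hidden-information board game: the agent keeps a
# set of hypotheses about one hidden opponent square and uses 3x3 senses to
# prune them (illustrative simplification, not the real game's state space).

BOARD = 8  # 8x8 board, squares indexed (row, col)

def sense_window(center):
    """Squares revealed by sensing a 3x3 window centered at `center`."""
    r, c = center
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if 0 <= r + dr < BOARD and 0 <= c + dc < BOARD}

def prune_beliefs(beliefs, center, true_square):
    """Keep only hypotheses consistent with what the sense revealed."""
    window = sense_window(center)
    if true_square in window:
        return {true_square}                # piece was seen: belief collapses
    return {b for b in beliefs if b not in window}  # piece absent from window

def best_sense(beliefs):
    """Greedy sensing policy: pick the window covering the most hypotheses."""
    return max(((r, c) for r in range(BOARD) for c in range(BOARD)),
               key=lambda sq: len(sense_window(sq) & beliefs))

true_square = (4, 4)                        # hidden from the sensing agent
beliefs = {(r, c) for r in range(BOARD) for c in range(BOARD)}
senses = 0
while len(beliefs) > 1:
    beliefs = prune_beliefs(beliefs, best_sense(beliefs), true_square)
    senses += 1
print(f"localized in {senses} senses: {beliefs}")
```

Each sense strictly shrinks the hypothesis set (every window centered on a live hypothesis covers at least that hypothesis), so the loop always terminates with the true square.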
Wed 2:44 a.m. - 5:44 a.m.
Breakout: Reconnaissance Blind Chess (Breakout session)
Wed 2:45 a.m. - 3:05 a.m.
Real Robot Challenge II + Q&A (Talk)
Despite recent successes of reinforcement learning (RL) in simulated environments, deploying or training algorithms in the real world remains a challenge due to the significant cost of experimentation and limited datasets. Since insights gained in simulation do not necessarily translate to real robots, we aim to close the gap between simulation and the real world by offering participants the opportunity to submit their algorithms to a robotics benchmark in the cloud. This allows teams to gather hundreds of hours of real robot data with minimal effort, and submitting to our cloud benchmark is as easy as using a simulator. Simulators, easy-to-use interfaces, and large real-world datasets for pretraining are available. Show that your algorithm is practical by solving the tasks at different levels in the real world, and win prizes!
Stefan Bauer · Joel Akpo · Manuel Wuethrich · Nan Rosemary Ke · Anirudh Goyal · Thomas Steinbrenner · Felix Widmaier · Annika Buchholz · Bernhard Schölkopf · Dieter Büchler · Ludovic Righetti · Franziska Meier
|
Wed 3:04 a.m. - 6:04 a.m.
|
Breakout: Real Robot Challenge II
(
Breakout session
)
|
🔗 |
Wed 3:05 a.m. - 3:25 a.m.
Billion-Scale Approximate Nearest Neighbor Search Challenge + Q&A (Talk)
Approximate Nearest Neighbor Search (ANNS) amounts to finding points near a given query point in a high-dimensional vector space. ANNS algorithms optimize a tradeoff between search speed, memory usage, and accuracy with respect to an exact sequential search. Thanks to efforts like ann-benchmarks.com, the state of the art for ANNS on million-scale datasets is quite clear. This competition aims to push the scale to out-of-memory, billion-scale datasets and to other hardware configurations that are realistic in many current applications. The competition uses six representative billion-scale datasets -- many newly released for this competition -- with their associated accuracy metrics. There are three tracks depending on the hardware setting: (T1) limited memory; (T2) limited main memory plus SSD; (T3) any hardware configuration, including accelerators and custom silicon. We will use two recent indexing algorithms, DiskANN and FAISS, as baselines for tracks T1 and T2. The anticipated impact is an understanding of the ideas that apply at billion-point scale, bridging communities that work on ANNS problems, and a platform for newer researchers to contribute to and develop this relatively new research area. We will provide Azure cloud compute credit to participants with promising ideas who lack the infrastructure needed to develop their submissions.
Harsha Vardhan Simhadri · George Williams · Martin Aumüller · Artem Babenko · Dmitry Baranchuk · Qi Chen · Matthijs Douze · Ravishankar Krishnawamy · Gopal Srinivasa · Suhas Jayaram Subramanya · Jingdong Wang
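The accuracy side of the tradeoff is typically measured as recall@k: the fraction of the true k nearest neighbors that an approximate index actually returns. The sketch below uses a deliberately crude stand-in approximation (scanning only a random fraction of the points); a real entry would replace it with an index such as DiskANN or FAISS, and would operate on billions of points, not a thousand.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def exact_search(points, query, k):
    """Exact sequential search: the ground truth the metric compares against."""
    return sorted(range(len(points)), key=lambda i: dist2(points[i], query))[:k]

def approximate_search(points, query, k, probe=0.5):
    """Crude stand-in for an ANNS index: scan only a random fraction of points."""
    candidates = random.sample(range(len(points)), int(len(points) * probe))
    return sorted(candidates, key=lambda i: dist2(points[i], query))[:k]

def recall_at_k(truth, result):
    """Fraction of the true k nearest neighbors present in the result."""
    return len(set(truth) & set(result)) / len(truth)

random.seed(0)
points = [[random.random() for _ in range(8)] for _ in range(1000)]
query = [random.random() for _ in range(8)]
truth = exact_search(points, query, k=10)
approx = approximate_search(points, query, k=10)
print(f"recall@10 = {recall_at_k(truth, approx):.2f}")
```

Raising `probe` trades speed for recall, which is exactly the curve that benchmarks like ann-benchmarks.com plot for real indexes.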
Wed 3:24 a.m. - 6:24 a.m.
Breakout: Billion-Scale Approximate Nearest Neighbor Search Challenge (Breakout session)
Schedule (GMT Timezone)
Wed 3:25 a.m. - 3:45 a.m.
MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains + Q&A (Talk)
Meta-learning is an important machine learning paradigm that leverages experience from previous tasks to make better predictions on the task at hand. This competition focuses on supervised learning, and more particularly on 'few-shot learning' classification settings, which aim at learning a good model from very few examples, typically 1 to 5 per class. A starting kit will be provided, consisting of a public dataset and various baseline implementations, including MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017). This should make it easy to get started and build upon the various resources in the field. The competition features novel datasets from various domains, including healthcare, ecology, biology, and chemistry, and will run in three phases: a public phase, a feedback phase, and a final phase. The last two phases will be run with code submissions, fully blind-tested on the CodaLab challenge platform. A single (final) submission will be evaluated during the final phase, using five fresh datasets currently unknown to the meta-learning community.
Adrian El Baz · Isabelle Guyon · Zhengying Liu · Jan N. Van Rijn · Haozhe Sun · Sébastien Treguer · Wei-Wei Tu · Ihsan Ullah · Joaquin Vanschoren · Phan Ahn Vu
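One of the provided baselines, Prototypical Networks, reduces each few-shot episode to nearest-centroid classification in an embedding space. The sketch below shows that core idea on a synthetic 5-way 1-shot episode; a real system would first map inputs through a learned embedding network, whereas here raw feature vectors stand in for embeddings, and the cluster data is made up for illustration.

```python
import random

def centroid(vectors):
    """Mean of a list of equal-length vectors (the class prototype)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, prototypes):
    """Assign the query to the class whose prototype is closest."""
    return min(prototypes, key=lambda c: dist2(query, prototypes[c]))

random.seed(0)
# Synthetic episode: each of 5 classes is a noisy cluster around a center.
centers = {c: [random.uniform(-1, 1) for _ in range(4)] for c in range(5)}

def sample(c):
    """Draw one noisy example of class c."""
    return [m + random.gauss(0, 0.1) for m in centers[c]]

support = {c: [sample(c)] for c in centers}            # 1 shot per class
prototypes = {c: centroid(vs) for c, vs in support.items()}
queries = [(c, sample(c)) for c in centers for _ in range(5)]
accuracy = sum(classify(q, prototypes) == c for c, q in queries) / len(queries)
print(f"episode accuracy: {accuracy:.2f}")
```

With more shots per class the prototype is an average over several support examples, which is why performance typically climbs from the 1-shot to the 5-shot setting.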
Wed 3:44 a.m. - 6:44 a.m.
Breakout: MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains (Breakout session)
Schedule (GMT Timezone)
DL 2.0: How Meta-Learning May Power the Next Generation of Deep Learning
Deep Learning (DL) has been incredibly successful due to its ability to automatically acquire useful representations from raw data through a joint optimization of all layers. However, current DL practice still requires substantial manual effort to define the right neural architecture and training hyperparameters to optimally learn these representations for the data at hand. The next logical step is to jointly optimize these components as well, based on a meta-level of learning and optimization. In this talk, I will discuss several advances towards this goal, focusing on (1) joint optimization of several meta-choices in the DL pipeline, (2) the efficiency of this meta-optimization, and (3) optimization of uncertainty estimates and robustness to data shift.
MetaDelta++: Improve Generalization of Few-shot System Through Multi-Scale Pretrained Models and Improved Training Strategies
Meta-learning aims at learning quickly on novel tasks with limited data by transferring generic experience learned from previous tasks. Naturally, few-shot learning has been one of the most popular applications of meta-learning. Recently, the ensembled few-shot system MetaDelta was proposed, winning first place in the AAAI 2021 MetaDL challenge. However, the generalization ability of MetaDelta is still limited by its homogeneous model setting and weak pretraining and fine-tuning strategies, hindering MetaDelta from being applied to more diverse scenarios and problems. We further boost the performance and generalization ability of MetaDelta by leveraging pretrained models at multiple scales and improved training strategies, including semi-weakly supervised pretraining, data augmentation, a separate learning rate for each layer, lazier BN statistics updates, and better decoder design. Our system, MetaDelta++, substantially improves performance and generalization, taking first place in phase 1 of the NeurIPS 2021 MetaDL challenge by a large margin over MetaDelta and other teams.
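One of the training strategies listed above, a separate learning rate per layer, can be sketched in a few lines. The layer names, parameter values, and rates below are made up for illustration; in a real system this corresponds to optimizer parameter groups, with small steps for the pretrained backbone and larger steps for the freshly initialized decoder.

```python
# Hypothetical two-layer model with a per-layer learning rate.
layers = {
    "backbone": {"params": [0.5, -0.2], "lr": 1e-4},  # pretrained: tiny steps
    "decoder":  {"params": [0.1,  0.3], "lr": 1e-2},  # new head: larger steps
}

def sgd_step(layers, grads):
    """One SGD update, scaling each layer's step by its own learning rate."""
    for name, layer in layers.items():
        layer["params"] = [p - layer["lr"] * g
                           for p, g in zip(layer["params"], grads[name])]

grads = {"backbone": [1.0, 1.0], "decoder": [1.0, 1.0]}
sgd_step(layers, grads)  # backbone barely moves; decoder takes a 100x step
```

The design intent is to preserve the pretrained features while letting the task-specific head adapt quickly, which matters most in the low-data regimes these few-shot competitions evaluate.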
In this slot, we will reflect on some of the latest developments in meta-learning. We will present several frameworks that capture the relations between various research directions in meta-learning and AutoML. More specifically, we will reflect on the role of meta-learning in the broader context of machine learning, and on the role of learning curves in AutoML.
Author Information
Douwe Kiela (Facebook AI Research)
Marco Ciccone (Politecnico di Torino)

Marco Ciccone is an ELLIS Postdoctoral Researcher in the VANDAL group at Politecnico di Torino and UCL. His current research interests lie at the intersection of meta-, continual, and federated learning, with a particular focus on modularity and model re-use to scale the training of agents with heterogeneous data and to mitigate the effects of catastrophic forgetting and interference across tasks, domains, and devices. He has been NeurIPS Competition Track co-chair in 2021, 2022, and 2023.
Barbara Caputo (Politecnico di Torino)
More from the Same Authors
- 2021: Public Information Representation for Adversarial Team Games
  Luca Carminati · Federico Cacciamani · Marco Ciccone · Nicola Gatti
- 2022: Perturbation Augmentation for Fairer NLP
  Rebecca Qian · Candace Ross · Jude Fernandes · Eric Michael Smith · Douwe Kiela · Adina Williams
- 2023 Poster: OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
  Hugo Laurençon · Lucile Saulnier · Leo Tronchon · Stas Bekman · Amanpreet Singh · Anton Lozhkov · Thomas Wang · Siddharth Karamcheti · Alexander Rush · Douwe Kiela · Matthieu Cord · Victor Sanh
- 2023 Poster: DataPerf: Benchmarks for Data-Centric AI Development
  Mark Mazumder · Colby Banbury · Xiaozhe Yao · Bojan Karlaš · William Gaviria Rojas · Sudnya Diamos · Greg Diamos · Lynn He · Alicia Parrish · Hannah Rose Kirk · Jessica Quaye · Charvi Rastogi · Douwe Kiela · David Jurado · David Kanter · Rafael Mosquera · Will Cukierski · Juan Ciro · Lora Aroyo · Bilge Acun · Lingjiao Chen · Mehul Raje · Max Bartolo · Evan Sabri Eyuboglu · Amirata Ghorbani · Emmett Goodman · Addison Howard · Oana Inel · Tariq Kane · Christine R. Kirkpatrick · D. Sculley · Tzu-Sheng Kuo · Jonas Mueller · Tristan Thrush · Joaquin Vanschoren · Margaret Warren · Adina Williams · Serena Yeung · Newsha Ardalani · Praveen Paritosh · Ce Zhang · James Zou · Carole-Jean Wu · Cody Coleman · Andrew Ng · Peter Mattson · Vijay Janapa Reddi
- 2022 Workshop: Human Evaluation of Generative Models
  Divyansh Kaushik · Jennifer Hsia · Jessica Huynh · Yonadav Shavit · Samuel Bowman · Ting-Hao Huang · Douwe Kiela · Zachary Lipton · Eric Michael Smith
- 2022 Competition: NeurIPS 2022 Competition Track: Overview & Results
  Marco Ciccone · Gustavo Stolovitzky · Jake Albrecht
- 2021: Spotlight Talk: Public Information Representation for Adversarial Team Games
  Luca Carminati · Federico Cacciamani · Marco Ciccone · Nicola Gatti
- 2021: Facebook - Data Centric Infrastructure
  Douwe Kiela
- 2021 Demonstration: Demonstrations 4
  Douwe Kiela · Barbara Caputo · Marco Ciccone
- 2021: Intro
  Marco Ciccone
- 2021: Introduction to Competition Day 4
  Marco Ciccone
- 2021 Competition: Competition Track Day 4: Overviews + Breakout Sessions
  Douwe Kiela · Marco Ciccone · Barbara Caputo
- 2021 Poster: True Few-Shot Learning with Language Models
  Ethan Perez · Douwe Kiela · Kyunghyun Cho
- 2021: Invited talk - Douwe Kiela
  Douwe Kiela
- 2021: Introduction to Competition Day 3
  Marco Ciccone
- 2021 Competition: Competition Track Day 3: Overviews + Breakout Sessions
  Douwe Kiela · Marco Ciccone · Barbara Caputo
- 2021 Demonstration: Demonstrations 3
  Douwe Kiela · Barbara Caputo · Marco Ciccone
- 2021: Intro
  Marco Ciccone
- 2021 Poster: Dynaboard: An Evaluation-As-A-Service Platform for Holistic Next-Generation Benchmarking
  Zhiyi Ma · Kawin Ethayarajh · Tristan Thrush · Somya Jain · Ledell Wu · Robin Jia · Christopher Potts · Adina Williams · Douwe Kiela
- 2021 Demonstration: Demonstrations 2
  Douwe Kiela · Barbara Caputo · Marco Ciccone
- 2021: Intro
  Douwe Kiela
- 2021: Introduction to Competition Day 2
  Barbara Caputo
- 2021 Competition: Competition Track Day 1: Overviews + Breakout Sessions
  Douwe Kiela · Marco Ciccone · Barbara Caputo
- 2021: Introduction to Competition Day 1
  Douwe Kiela
- 2021 Poster: Human-Adversarial Visual Question Answering
  Sasha Sheng · Amanpreet Singh · Vedanuj Goswami · Jose Magana · Tristan Thrush · Wojciech Galuba · Devi Parikh · Douwe Kiela
- 2021 Demonstration: Demonstrations 1
  Douwe Kiela · Barbara Caputo · Marco Ciccone
- 2021: Introduction
  Douwe Kiela
- 2020: Q & A and Panel Session with Dan Weld, Kristen Grauman, Scott Yih, Emma Brunskill, and Alex Ratner
  Kristen Grauman · Wen-tau Yih · Alexander Ratner · Emma Brunskill · Douwe Kiela · Daniel S. Weld
- 2020 Workshop: HAMLETS: Human And Model in the Loop Evaluation and Training Strategies
  Divyansh Kaushik · Bhargavi Paranjape · Forough Arabshahi · Yanai Elazar · Yixin Nie · Max Bartolo · Polina Kirichenko · Pontus Lars Erik Saito Stenetorp · Mohit Bansal · Zachary Lipton · Douwe Kiela
- 2020: Opening Remarks
  Divyansh Kaushik · Bhargavi Paranjape · Douwe Kiela
- 2020: The Hateful Memes Challenge: Live award ceremony and winner presentations
  Douwe Kiela
- 2020: The Hateful Memes Challenge: Competition Overview
  Douwe Kiela
- 2020 Poster: The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
  Douwe Kiela · Hamed Firooz · Aravind Mohan · Vedanuj Goswami · Amanpreet Singh · Pratik Ringshia · Davide Testuggine
- 2020 Poster: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Patrick Lewis · Ethan Perez · Aleksandra Piktus · Fabio Petroni · Vladimir Karpukhin · Naman Goyal · Heinrich Küttler · Mike Lewis · Wen-tau Yih · Tim Rocktäschel · Sebastian Riedel · Douwe Kiela
- 2020 Poster: Learning Optimal Representations with the Decodable Information Bottleneck
  Yann Dubois · Douwe Kiela · David Schwab · Ramakrishna Vedantam
- 2020 Spotlight: Learning Optimal Representations with the Decodable Information Bottleneck
  Yann Dubois · Douwe Kiela · David Schwab · Ramakrishna Vedantam
- 2019: Audrey Durand, Douwe Kiela, Kamalika Chaudhuri moderated by Yann Dauphin
  Audrey Durand · Kamalika Chaudhuri · Yann Dauphin · Orhan Firat · Dilan Gorur · Douwe Kiela
- 2019: Douwe Kiela - Benchmarking Progress in AI: A New Benchmark for Natural Language Understanding
  Douwe Kiela
- 2019 Workshop: Emergent Communication: Towards Natural Language
  Abhinav Gupta · Michael Noukhovitch · Cinjon Resnick · Natasha Jaques · Angelos Filos · Marie Ossenkopf · Angeliki Lazaridou · Jakob Foerster · Ryan Lowe · Douwe Kiela · Kyunghyun Cho
- 2019 Poster: Hyperbolic Graph Neural Networks
  Qi Liu · Maximilian Nickel · Douwe Kiela
- 2018 Workshop: Emergent Communication Workshop
  Jakob Foerster · Angeliki Lazaridou · Ryan Lowe · Igor Mordatch · Douwe Kiela · Kyunghyun Cho
- 2018: Panel Discussion
  Antonio Torralba · Douwe Kiela · Barbara Landau · Angeliki Lazaridou · Joyce Chai · Christopher Manning · Stevan Harnad · Roozbeh Mottaghi
- 2018: Douwe Kiela - Learning Multimodal Embeddings
  Douwe Kiela
- 2018 Poster: NAIS-Net: Stable Deep Networks from Non-Autonomous Differential Equations
  Marco Ciccone · Marco Gallieri · Jonathan Masci · Christian Osendorfer · Faustino Gomez
- 2017 Workshop: Emergent Communication Workshop
  Jakob Foerster · Igor Mordatch · Angeliki Lazaridou · Kyunghyun Cho · Douwe Kiela · Pieter Abbeel
- 2017 Poster: Poincaré Embeddings for Learning Hierarchical Representations
  Maximilian Nickel · Douwe Kiela
- 2017 Spotlight: Poincaré Embeddings for Learning Hierarchical Representations
  Maximilian Nickel · Douwe Kiela
- 2009 Workshop: Learning from Multiple Sources with Applications to Robotics
  Barbara Caputo · Nicolò Cesa-Bianchi · David R Hardoon · Gayle Leen · Francesco Orabona · Jaakko Peltonen · Simon Rogers
- 2009 Poster: Who's Doing What: Joint Modeling of Names and Verbs for Simultaneous Face and Pose Annotation
  Jie Luo · Barbara Caputo · Vittorio Ferrari