
Jane Wang · Joaquin Vanschoren · Erin Grant · Jonathan Richard Schwarz · Francesco Visin · Jeff Clune · Roberto Calandra

Fri Dec 11 03:00 AM -- 12:00 PM (PST)
Event URL: https://meta-learn.github.io/2020/

How to join the virtual workshop: The 2020 Workshop on Meta-Learning will be a series of streamed pre-recorded talks with live question-and-answer (Q&A) periods, plus poster sessions on Gather.Town. You can participate by:
* Accessing the livestream on our NeurIPS.cc virtual workshop page (likely this page!)
* Asking questions of the speakers and panelists on Sli.do, linked from the MetaLearn 2020 website
* Joining the Zoom to message questions to the moderator during the panel discussion, also from the NeurIPS.cc virtual workshop page
* Joining the poster sessions on Gather.Town (the list of papers and their virtual placement for each session is on the MetaLearn 2020 website):
  * Session 1
  * Session 2
  * Session 3
* Chatting with us and other participants on the MetaLearn 2020 Rocket.Chat!
* Entering panel discussion questions on the same Sli.do!

Focus of the workshop: Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to learn new tasks more efficiently, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers and policies over hand-crafted features, to learning representations over which classifiers and policies operate, and finally to learning algorithms that themselves acquire representations, classifiers, and policies. Meta-learning methods are also of substantial practical interest. For instance, they have been shown to yield new state-of-the-art automated machine learning algorithms and architectures, and have substantially improved few-shot learning systems. Moreover, the ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in cognitive science and reward learning in neuroscience.
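To make the idea of "optimizing the learning process itself" concrete, below is a minimal sketch of one well-known meta-learning algorithm, Reptile (Nichol et al.), on a hypothetical toy task family of one-parameter linear regression problems. This is an illustrative assumption, not a method endorsed or used by the workshop: the meta-parameter is an initialization that is repeatedly adapted to sampled tasks and then nudged toward the adapted weights, so that future tasks can be learned in a few gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Hypothetical task family: fit y = a * x, with a fresh slope a per task.
    a = rng.uniform(-2, 2)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x

def loss_grad(w, x, y):
    # Gradient of the mean-squared error of the linear model y_hat = w * x.
    return 2 * np.mean((w * x - y) * x)

w_meta = 0.0                       # meta-parameter: the initialization to be learned
inner_lr, meta_lr = 0.1, 0.1
for step in range(1000):
    x, y = make_task()             # sample a task
    w = w_meta
    for _ in range(5):             # inner loop: adapt to this task by SGD
        w -= inner_lr * loss_grad(w, x, y)
    # Reptile meta-update: move the initialization toward the adapted weights.
    w_meta += meta_lr * (w - w_meta)
```

After meta-training, `w_meta` settles near the mean of the task distribution, which is the initialization from which a few inner-loop steps reach any sampled task fastest; full MAML would instead differentiate through the inner loop.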

Author Information

Jane Wang (DeepMind)

Jane Wang is a research scientist at DeepMind on the neuroscience team, working on meta-reinforcement learning and neuroscience-inspired artificial agents. Her background is in physics, complex systems, and computational and cognitive neuroscience.

Joaquin Vanschoren (Eindhoven University of Technology, OpenML)

Joaquin Vanschoren is an Assistant Professor in Machine Learning at the Eindhoven University of Technology. He holds a PhD from the Katholieke Universiteit Leuven, Belgium. His research focuses on meta-learning and on understanding and automating machine learning. He founded and leads OpenML.org, a popular open science platform that facilitates the sharing and reuse of reproducible empirical machine learning data. He has received several demo and application awards and has been an invited speaker at ECDA, StatComp, IDA, AutoML@ICML, CiML@NIPS, AutoML@PRICAI, MLOSS@NIPS, and many other occasions, as well as a tutorial speaker at NIPS and ECMLPKDD. He was general chair at LION 2016, program chair of Discovery Science 2018, demo chair at ECMLPKDD 2013, and co-organized the AutoML and meta-learning workshop series at NIPS 2018, ICML 2016-2018, ECMLPKDD 2012-2015, and ECAI 2012-2014. He is also an editor of and contributor to the book 'Automatic Machine Learning: Methods, Systems, Challenges'.

Erin Grant (UC Berkeley)
Jonathan Richard Schwarz (DeepMind & Gatsby Unit, UCL)
Francesco Visin (DeepMind)
Jeff Clune (OpenAI)
Roberto Calandra (Facebook AI Research)
