Workshop
Sat Dec 9th 08:00 AM -- 06:30 PM @ Hyatt Beacon Ballroom D+E+F+H
Workshop on Meta-Learning
Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine

Recent years have seen rapid progress in meta-learning methods, which learn and optimize learning algorithms from data, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has followed over the last decade: from learning classifiers, to learning representations, to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.
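
To make the “learning to learn” idea above concrete, here is a minimal, self-contained sketch of one flavor of meta-learning: a first-order, MAML-style meta-learned initialization (in the spirit of the model-agnostic meta-learning discussed in the afternoon program), applied to a toy family of 1-D linear regression tasks. The task family, learning rates, and helper names (`sample_task`, `grad`) are illustrative assumptions, not part of the workshop program.

```python
# Sketch: first-order MAML-style meta-learning of an initialization.
# Assumptions: tasks are 1-D linear regressions y = a * x whose slope a
# varies per task; the learner is a single weight w trained by one
# gradient step on a squared-error loss.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a task (a slope a) and return a sampler for batches from it."""
    a = rng.uniform(0.5, 2.5)
    def batch(n=20):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x
    return batch

def grad(w, x, y):
    """d/dw of the loss 0.5 * mean((w*x - y)^2)."""
    return np.mean((w * x - y) * x)

w_meta, inner_lr, outer_lr = 0.0, 0.5, 0.05

for _ in range(2000):
    batch = sample_task()
    x_s, y_s = batch()   # support set: used to adapt to the task
    x_q, y_q = batch()   # query set: evaluates the adapted learner
    # Inner loop: one gradient step of ordinary learning from w_meta.
    w_task = w_meta - inner_lr * grad(w_meta, x_s, y_s)
    # Outer loop (first-order approximation): update the initialization
    # using the query-set gradient taken at the adapted parameters.
    w_meta -= outer_lr * grad(w_task, x_q, y_q)

# After meta-training, a single inner gradient step from w_meta should fit
# a new task from this family better than the same step from scratch.
```

The first-order approximation is used here purely to keep the sketch short; the full version differentiates through the inner gradient step.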

Meta-learning methods are also of substantial practical interest: they have been shown to yield, for example, new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.

Some of the fundamental questions that this workshop aims to address are:
- How does the learning “task” of a meta-learner fundamentally differ from that of a traditional “non-meta” learner?
- Is there a practical limit to the number of meta-learning layers (e.g., would a meta-meta-meta-learning algorithm be of practical use)?
- How can we design more sample-efficient meta-learning methods?
- How can we exploit our domain knowledge to effectively guide the meta-learning process?
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- Which ML approaches are best suited for meta-learning, in which circumstances, and why?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?

The goal of this workshop is to bring together researchers from the different communities and research areas that fall under the umbrella of meta-learning. We expect that the presence of these different communities will lead to a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.

In terms of prospective participants, our main targets are machine learning researchers interested in understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to: meta-learning, optimization, deep learning, reinforcement learning, evolutionary computation, Bayesian optimization, and AutoML. Our invited speakers also include researchers who study human learning, to provide a broader perspective for attendees.

08:30 AM Introduction and opening remarks (Introduction)
Roberto Calandra
08:40 AM Learning to optimize with reinforcement learning (Talk)
Jitendra Malik
09:10 AM Informing the Use of Hyperparameter Optimization Through Metalearning (Talk)
Christophe Giraud-Carrier
09:40 AM Poster Spotlight (Spotlight)
10:00 AM Poster session (and Coffee Break) (Poster Session)
Jacob Andreas, Kun Li, Conner Vercellino, Thomas Miconi, Wenpeng Zhang, Luca Franceschi, Zheng Xiong, Karim Ahmed, Laurent Itti, Tim Klinger, Mostafa Rohaninejad
11:00 AM Invited talk: Jane Wang (Talk)
11:30 AM Model-Agnostic Meta-Learning: Universality, Inductive Bias, and Weak Supervision (Talk)
Chelsea Finn
01:30 PM Learn to learn high-dimensional models from few examples (Talk)
Josh Tenenbaum
02:00 PM Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start (Contributed Talk)
02:15 PM Learning to Model the Tail (Contributed Talk)
02:30 PM Poster session (and Coffee Break) (Poster Session)
03:30 PM Meta Unsupervised Learning (Talk)
Oriol Vinyals
04:00 PM Panel Discussion