Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ Hyatt Beacon Ballroom D+E+F+H
Workshop on Meta-Learning
Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine

Recent years have seen rapid progress in meta-learning methods, which optimize the performance of learning methods based on data, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade: from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one's own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.

Meta-learning methods are also of substantial practical interest: they have been shown to yield, for example, new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.
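The one-shot learning systems mentioned above often work by meta-learning an initialization from which a single gradient step adapts well to a new task — the idea behind gradient-based methods such as MAML. Below is a minimal first-order sketch of that idea on a hypothetical toy family of 1-D regression tasks (slopes drawn uniformly from [1, 3]); the task setup and all hyperparameters are illustrative, not taken from any workshop talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(w, a, x):
    """Squared-error loss and its gradient for a 1-D linear model
    y = w * x on a task whose true slope is a."""
    residual = w * x - a * x
    loss = np.mean(residual ** 2)
    grad = np.mean(2.0 * residual * x)
    return loss, grad

meta_w = 0.0                  # meta-learned initialization
inner_lr, outer_lr = 0.1, 0.05

for step in range(500):
    a = rng.uniform(1.0, 3.0)           # sample a task (its true slope)
    x_support = rng.uniform(-1.0, 1.0, 20)
    # Inner loop: one gradient step from the shared initialization.
    _, g = task_loss_grad(meta_w, a, x_support)
    adapted_w = meta_w - inner_lr * g
    # Outer loop (first-order approximation): move the initialization
    # so that the *post-adaptation* loss on fresh query data is low.
    x_query = rng.uniform(-1.0, 1.0, 20)
    _, g_query = task_loss_grad(adapted_w, a, x_query)
    meta_w -= outer_lr * g_query
```

After training, `meta_w` settles near the mean task slope, so one inner gradient step suffices to fit any newly sampled task — the adaptation speed, not any single task's solution, is what the outer loop optimizes.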

Some of the fundamental questions that this workshop aims to address are:
- What are the fundamental differences between the learning “task” of a meta-learner and that of a traditional “non-meta” learner?
- Is there a practical limit to the number of meta-learning layers (e.g., would a meta-meta-meta-learning algorithm be of practical use)?
- How can we design more sample-efficient meta-learning methods?
- How can we exploit our domain knowledge to effectively guide the meta-learning process?
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- Which ML approaches are best suited for meta-learning, in which circumstances, and why?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?

The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.

In terms of prospective participants, our main targets are machine learning researchers interested in the processes related to understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to: meta-learning, optimization, deep learning, reinforcement learning, evolutionary computation, Bayesian optimization and AutoML. Our invited speakers also include researchers who study human learning, to provide a broad perspective to the attendees.

Introduction and opening remarks (Introduction)
Learning to optimize with reinforcement learning (Talk)
Informing the Use of Hyperparameter Optimization Through Metalearning (Talk)
Poster Spotlight (Spotlight)
Poster session (and Coffee Break) (Poster Session)
Invited talk: Jane Wang (Talk)
Model-Agnostic Meta-Learning: Universality, Inductive Bias, and Weak Supervision (Talk)
Learn to learn high-dimensional models from few examples (Talk)
Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start (Contributed Talk)
Learning to Model the Tail (Contributed Talk)
Poster session (and Coffee Break) (Poster Session)
Meta Unsupervised Learning (Talk)
Panel Discussion