Workshop
Meta-Learning
Roberto Calandra · Ignasi Clavera Gilaberte · Frank Hutter · Joaquin Vanschoren · Jane Wang

Fri Dec 13 08:00 AM -- 06:00 PM (PST) @ West Ballroom B
Event URL: http://metalearning.ml/

Recent years have seen rapid progress in meta-learning methods, which learn (and optimize) the performance of learning methods based on data, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience. The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.
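
To make the idea concrete, below is a minimal sketch of one widely used meta-learning scheme, first-order MAML-style gradient-based meta-learning, on a toy regression family. The task distribution, scalar model, and step sizes are illustrative assumptions, not something specified by the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical toy task family: regress y = a * x for a random slope a.
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1.0, 1.0, size=40)
    return x[:20], a * x[:20], x[20:], a * x[20:]  # support / query split

def grad(w, x, y):
    # d/dw of the mean squared error of the scalar model y_hat = w * x.
    return 2.0 * np.mean((w * x - y) * x)

w_meta, inner_lr, meta_lr = 0.0, 0.1, 0.01
for _ in range(5000):
    xs, ys, xq, yq = sample_task()
    # Inner loop: one task-specific gradient step from the shared init.
    w_task = w_meta - inner_lr * grad(w_meta, xs, ys)
    # Outer loop (first-order approximation): nudge the init so that
    # one adaptation step performs well on the held-out query set.
    w_meta -= meta_lr * grad(w_task, xq, yq)

print("meta-learned initialization:", w_meta)
```

After meta-training, a single inner-loop step from w_meta should already fit a new task from the same family reasonably well; that "learning to learn" loop is the common thread across the talks below.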

Fri 9:00 a.m. - 9:10 a.m.
Opening Remarks
Fri 9:10 a.m. - 9:40 a.m.
Meta-learning as hierarchical modeling (Talk)
Erin Grant
Fri 9:40 a.m. - 10:10 a.m.

A dominant trend in machine learning is that hand-designed pipelines are replaced by higher-performing learned pipelines once sufficient compute and data are available. I argue that this trend will apply to machine learning itself, and thus that the fastest path to truly powerful AI is to create AI-generating algorithms (AI-GAs) that on their own learn to solve the hardest AI problems. This paradigm is an all-in bet on meta-learning. To produce AI-GAs, we need work on Three Pillars: meta-learning architectures, meta-learning learning algorithms, and automatically generating environments. In this talk, I will present recent work from our team in each of the three pillars: Pillar 1: Generative Teaching Networks (GTNs); Pillar 2: differentiable plasticity, differentiable neuromodulated plasticity (“backpropamine”), and a Neuromodulated Meta-Learning algorithm (ANML); Pillar 3: the Paired Open-Ended Trailblazer (POET). My goal is to motivate future research into each of the three pillars and their combination.

Jeff Clune
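
As a pointer to the Pillar 2 work mentioned above, here is a minimal sketch of the differentiable-plasticity idea (Miconi et al., 2018): each connection combines a fixed weight with a Hebbian trace scaled by a learned coefficient, and both parameter sets are meta-trained by backpropagating through episodes. Layer sizes, initialization scales, and the fixed decay rate below are illustrative assumptions.

```python
import torch

class PlasticLayer(torch.nn.Module):
    # Sketch of differentiable plasticity: fixed weights w plus a plastic
    # component alpha * hebb, where hebb is a running Hebbian trace.
    def __init__(self, n_in, n_out, eta=0.1):
        super().__init__()
        self.w = torch.nn.Parameter(0.01 * torch.randn(n_in, n_out))
        self.alpha = torch.nn.Parameter(0.01 * torch.randn(n_in, n_out))
        self.eta = eta  # trace decay rate (fixed here; could be learned)

    def forward(self, x, hebb):
        # x: (batch, n_in); hebb: (batch, n_in, n_out)
        y = torch.tanh(x @ self.w
                       + torch.einsum('bi,bio->bo', x, self.alpha * hebb))
        # Hebbian update: co-activation of pre- and post-synaptic units
        # strengthens the plastic part of each connection.
        hebb = (1 - self.eta) * hebb \
             + self.eta * torch.einsum('bi,bo->bio', x, y)
        return y, hebb

layer = PlasticLayer(8, 8)
hebb = torch.zeros(4, 8, 8)   # blank trace at the start of an episode
for _ in range(5):            # unroll the trace within an episode
    y, hebb = layer(torch.randn(4, 8), hebb)
# Meta-training would backpropagate an episode loss into w and alpha.
```

The neuromodulated variants named in the abstract extend this scheme by letting a learned signal gate the trace update (backpropamine) or gate activations during learning (ANML).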
Fri 10:10 a.m. - 10:30 a.m.
Poster Spotlights 1 (Spotlight)
Fri 10:30 a.m. - 11:30 a.m.
Coffee/Poster session 1 (Poster Session)
Shiro Takagi, Khurram Javed, Johanna Sommer, Amr Sharaf, Pierluca D'Oro, Ying Wei, Sivan Doveh, Colin White, Santiago Gonzalez, Cuong Nguyen, Mao Li, Tianhe (Kevin) Yu, Tiago Ramalho, Masahiro Nomura, Ahsan Alvi, Jean-Francois Ton, Ronny Huang, Jessica Lee, Sebastian Flennerhag, Michael Zhang, Abe Friesen, Paul Blomstedt, Alina Dubatovka, Sergey Bartunov, Subin Yi, Iaroslav Shcherbatyi, Christian Simon, Zeyuan Shang, David MacLeod, Lu Liu, Liam Fowl, Diego Mesquita, Deirdre Quillen
Fri 11:30 a.m. - 12:00 p.m.
Interaction of Model-based RL and Meta-RL (Talk)
Pieter Abbeel
Fri 12:00 p.m. - 12:30 p.m.
Discussion 1 (Discussion Panel)
Fri 2:00 p.m. - 2:30 p.m.

Reinforcement learning is hard in a fundamental sense: even in finite and deterministic environments, it can take a large number of samples to find a near-optimal policy. In this talk, I discuss the role that abstraction can play in achieving reliable yet efficient learning and planning. I first introduce classes of state abstraction that induce a trade-off between optimality and the size of an agent’s resulting abstract model, yielding a practical algorithm for learning useful and compact representations from a demonstrator. Moreover, I show how these learned, simple representations can underlie efficient learning in complex environments. Second, I analyze the problem of searching for options that make planning more efficient. I present new computational complexity results illustrating that it is NP-hard to find the optimal options that minimize planning time, but show that this set can be approximated in polynomial time. Collectively, these results provide a partial path toward abstractions that minimize the difficulty of high-quality learning and decision making.

Dave Abel
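
One concrete instance of the abstraction classes discussed above is an approximate Q*-irrelevance abstraction: ground states are merged whenever their optimal Q-values agree within a tolerance eps for every action, so larger eps trades optimality for a smaller abstract model. The greedy clustering and toy Q-table below are illustrative assumptions, not the speaker's exact algorithm.

```python
import numpy as np

def q_epsilon_abstraction(Q, eps):
    # Merge ground states whose optimal Q-values agree within eps for
    # every action (approximate Q*-irrelevance; greedy illustrative sketch).
    n_states = Q.shape[0]
    phi = np.full(n_states, -1)  # ground state -> abstract state id
    representatives = []         # one representative ground state per cluster
    for s in range(n_states):
        for cluster_id, rep in enumerate(representatives):
            if np.max(np.abs(Q[s] - Q[rep])) <= eps:
                phi[s] = cluster_id
                break
        else:                    # no existing cluster fits: open a new one
            phi[s] = len(representatives)
            representatives.append(s)
    return phi

# Toy Q*-table (4 states, 2 actions); eps trades optimality for model size.
Q = np.array([[1.0, 0.2], [1.05, 0.22], [3.0, 2.9], [0.0, 0.5]])
print(q_epsilon_abstraction(Q, eps=0.1))   # -> [0 0 1 2]
```

The trade-off from the talk shows up directly: raising eps collapses more ground states into each abstract state, shrinking the model while loosening the optimality guarantee.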
Fri 2:30 p.m. - 3:00 p.m.
Scalable Meta-Learning (Talk)
Raia Hadsell
Fri 3:00 p.m. - 3:20 p.m.
Poster Spotlights 2 (Spotlight)
Fri 3:20 p.m. - 4:30 p.m.
Coffee/Poster session 2 (Poster Session)
Xingyou Song, Puneet Mangla, David Salinas, Zhenxun Zhuang, Leo Feng, Shell Xu Hu, Raul Puri, Wesley J Maddox, Aniruddh Raghu, Prudencio Tossou, Mingzhang Yin, Ishita Dasgupta, Kangwook Lee, Ferran Alet, Zhen Xu, Jörg KH Franke, James Harrison, Jonathan Warrell, Guneet S Dhillon, Arber Zela, Xin Qiu, Julien Niklas Siems, Russell Mendonca, Louis Schlessinger, Jeffrey Li, Georgiana Manolache, Debo Dutta, Lucas Glass, Abhishek Singh, Gregor Koehler
Fri 4:30 p.m. - 4:45 p.m.
Contributed Talk 1: Meta-Learning with Warped Gradient Descent (Sebastian Flennerhag) (Talk)
Fri 4:45 p.m. - 5:00 p.m.
Contributed Talk 2: MetaPix: Few-shot video retargeting (Jessica Lee) (Talk)
Fri 5:00 p.m. - 5:30 p.m.

People learn in fast and flexible ways that elude the best artificial neural networks. Once a person learns how to “dax,” they can effortlessly understand how to “dax twice” or “dax vigorously” thanks to their compositional skills. In this talk, we examine how people and machines generalize compositionally in language-like instruction learning tasks. Artificial neural networks have long been criticized for lacking systematic compositionality (Fodor & Pylyshyn, 1988; Marcus, 1998), but new architectures have been tackling increasingly ambitious language tasks. In light of these developments, we reevaluate these classic criticisms and find that artificial neural nets still fail spectacularly when systematic compositionality is required. We then show how people succeed in similar few-shot learning tasks and find that they utilize three inductive biases that can be incorporated into models. Finally, we show how more structured neural nets can acquire compositional skills and human-like inductive biases through meta-learning.

Brenden Lake
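
As a toy illustration of the compositional behavior described above (not the speaker's model), the interpreter below composes instructions in the spirit of SCAN-style instruction-learning benchmarks: once a primitive such as "dax" is mapped to an action, modifiers like "twice" apply to it systematically. The vocabulary and grammar are assumed for illustration.

```python
# Toy compositional grammar (assumed for illustration): a primitive maps
# to an action sequence, and a modifier repeats whatever it follows.
primitives = {"dax": ["DAX"], "jump": ["JUMP"]}  # learned from few examples
modifiers = {"twice": 2, "thrice": 3}            # compose with any primitive

def interpret(instruction):
    words = instruction.split()
    actions = list(primitives[words[0]])
    for word in words[1:]:
        actions = actions * modifiers[word]
    return actions

print(interpret("dax"))          # ['DAX']
print(interpret("dax twice"))    # ['DAX', 'DAX']
print(interpret("jump thrice"))  # ['JUMP', 'JUMP', 'JUMP']
```

A person needs only one example of "dax" to use it with every modifier; the talk asks when neural networks meta-trained on such episodes show the same systematic generalization.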
Fri 5:30 p.m. - 5:50 p.m.
Discussion 2 (Discussion Panel)

Author Information

Roberto Calandra (Facebook AI Research)
Ignasi Clavera Gilaberte (UC Berkeley)
Frank Hutter (University of Freiburg & Bosch)

Frank Hutter is a Full Professor for Machine Learning at the Computer Science Department of the University of Freiburg (Germany), where he was previously an assistant professor from 2013 to 2017. Before that, he spent eight years at the University of British Columbia (UBC) for his PhD and postdoc. Frank's main research interests lie in machine learning, artificial intelligence, and automated algorithm design. For his 2009 PhD thesis on algorithm configuration, he received the CAIAC doctoral dissertation award for the best thesis in AI in Canada that year, and with his coauthors, he received several best paper awards and prizes in international competitions on machine learning, SAT solving, and AI planning. Since 2016, he has held an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning.

Joaquin Vanschoren (Eindhoven University of Technology, OpenML)

Joaquin Vanschoren is an Assistant Professor in Machine Learning at the Eindhoven University of Technology. He holds a PhD from the Katholieke Universiteit Leuven, Belgium. His research focuses on meta-learning and on understanding and automating machine learning. He founded and leads OpenML.org, a popular open science platform that facilitates the sharing and reuse of reproducible empirical machine learning data. He has received several demo and application awards and has been an invited speaker at ECDA, StatComp, IDA, AutoML@ICML, CiML@NIPS, AutoML@PRICAI, MLOSS@NIPS, and many other occasions, as well as a tutorial speaker at NIPS and ECMLPKDD. He was general chair at LION 2016, program chair of Discovery Science 2018, demo chair at ECMLPKDD 2013, and co-organized the AutoML and meta-learning workshop series at NIPS 2018, ICML 2016-2018, ECMLPKDD 2012-2015, and ECAI 2012-2014. He is also an editor of and contributor to the book 'Automated Machine Learning: Methods, Systems, Challenges'.

Jane Wang (DeepMind)
