Recent years have seen rapid progress in metalearning methods, which learn (and optimize) the performance of learning methods based on data, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Metalearning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience. The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of metalearning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in metalearning, as well as possible solutions.
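To make the idea concrete: most gradient-based meta-learning methods cast "learning to learn" as a bilevel optimization, where an inner loop adapts parameters to a sampled task and an outer loop updates the shared initialization so that adaptation succeeds across tasks. The sketch below is a minimal, self-contained illustration on a toy one-parameter regression problem, in the spirit of MAML-style methods; the task family, step sizes, and the analytic meta-gradient are illustrative assumptions, not any specific method from the program.

```python
# A minimal sketch of meta-learning as bilevel optimization, in the spirit
# of gradient-based methods such as MAML. The toy task family (scalar
# linear regression), step sizes, and the analytic meta-gradient are
# illustrative assumptions, not a method from the workshop program.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a scalar linear regression y = w_true * x."""
    w_true = rng.uniform(-2.0, 2.0)
    x = rng.normal(size=10)
    return x, w_true * x

def loss_grad(w, x, y):
    """Gradient of the MSE loss 0.5 * mean((w*x - y)**2) w.r.t. w."""
    return np.mean((w * x - y) * x)

w0 = 1.5                  # meta-parameter: the shared initialization
alpha, beta = 0.1, 0.01   # inner (task) and outer (meta) step sizes

for _ in range(2000):
    x, y = sample_task()
    # Inner loop: adapt the initialization to this task with one step.
    w_adapted = w0 - alpha * loss_grad(w0, x, y)
    # Outer loop: differentiate the post-adaptation loss through the
    # inner step. For this quadratic loss, d w_adapted / d w0 equals
    # (1 - alpha * mean(x**2)), so the chain rule gives:
    meta_grad = (1 - alpha * np.mean(x**2)) * loss_grad(w_adapted, x, y)
    w0 -= beta * meta_grad

print(f"meta-learned initialization: {w0:.3f}")  # settles near 0 here
```

Because the toy tasks are symmetric around zero, the meta-learned initialization settles near the point from which a single gradient step adapts best on average.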
Fri 9:00 a.m. - 9:10 a.m. | Opening Remarks
Fri 9:10 a.m. - 9:40 a.m. | Meta-learning as hierarchical modeling (Talk) | Erin Grant
Fri 9:40 a.m. - 10:10 a.m. | How Meta-Learning Could Help Us Accomplish Our Grandest AI Ambitions, and Early, Exotic Steps in that Direction (Talk) | Jeff Clune

A dominant trend in machine learning is that hand-designed pipelines are replaced by higher-performing learned pipelines once sufficient compute and data are available. I argue that this trend will apply to machine learning itself, and thus that the fastest path to truly powerful AI is to create AI-generating algorithms (AI-GAs) that on their own learn to solve the hardest AI problems. This paradigm is an all-in bet on meta-learning. To produce AI-GAs, we need work on Three Pillars: meta-learning architectures, meta-learning learning algorithms, and automatically generating environments. In this talk I will present recent work from our team in each of the three pillars. Pillar 1: Generative Teaching Networks (GTNs); Pillar 2: differentiable plasticity, differentiable neuromodulated plasticity (“backpropamine”), and a Neuromodulated Meta-Learning algorithm (ANML); Pillar 3: the Paired Open-Ended Trailblazer (POET). My goal is to motivate future research into each of the three pillars and their combination.
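The abstract names differentiable plasticity as a concrete Pillar 2 mechanism. Below is a minimal sketch of its forward dynamics, assuming the formulation of Miconi et al. (2018): each connection combines a slow, meta-learned weight with a fast Hebbian trace whose influence is scaled by a learned per-connection coefficient. Sizes and initializations are illustrative; in the actual method the slow parameters W and alpha are trained by backpropagating through whole episodes, which this sketch omits.

```python
# A minimal sketch of a layer with differentiable plasticity, assuming the
# formulation of Miconi et al. (2018): effective weights are a slow matrix W
# plus a fast Hebbian trace scaled by a learned coefficient alpha. Only the
# forward dynamics are shown; sizes and initializations are illustrative.
import numpy as np

class PlasticLayer:
    def __init__(self, n_in, n_out, eta=0.1, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.W = 0.1 * rng.standard_normal((n_out, n_in))      # slow weights (meta-learned)
        self.alpha = 0.1 * rng.standard_normal((n_out, n_in))  # plasticity coefficients (meta-learned)
        self.hebb = np.zeros((n_out, n_in))                    # fast Hebbian trace, reset per episode
        self.eta = eta                                         # trace update rate

    def step(self, x):
        # Effective weight = slow component + learned gain on the fast trace.
        y = np.tanh((self.W + self.alpha * self.hebb) @ x)
        # Hebbian update: move the trace toward the outer product of
        # post- and pre-synaptic activity.
        self.hebb = (1 - self.eta) * self.hebb + self.eta * np.outer(y, x)
        return y

rng = np.random.default_rng(1)
layer = PlasticLayer(n_in=4, n_out=3, rng=rng)
for _ in range(5):                         # one short "episode"
    y = layer.step(rng.standard_normal(4))
print(y.shape)  # (3,)
```

The design point is the separation of timescales: W and alpha change only across episodes (via meta-training), while the Hebbian trace adapts within an episode, giving the network a learned form of fast weights.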
Fri 10:10 a.m. - 10:30 a.m. | Poster Spotlights 1 (Spotlight)
Fri 10:30 a.m. - 11:30 a.m. | Coffee/Poster session 1 (Poster Session)
Presenters: Shiro Takagi · Khurram Javed · Johanna Sommer · Amr Sharaf · Pierluca D'Oro · Ying Wei · Sivan Doveh · Colin White · Santiago Gonzalez · Cuong Nguyen · Mao Li · Tianhe Yu · Tiago Ramalho · Masahiro Nomura · Ahsan Alvi · Jean-Francois Ton · W. Ronny Huang · Jessica Lee · Sebastian Flennerhag · Michael Zhang · Abram Friesen · Paul Blomstedt · Alina Dubatovka · Sergey Bartunov · Subin Yi · Iaroslav Shcherbatyi · Christian Simon · Zeyuan Shang · David MacLeod · Lu Liu · Liam Fowl · Diego Mesquita · Deirdre Quillen
Fri 11:30 a.m. - 12:00 p.m. | Interaction of Model-based RL and Meta-RL (Talk) | Pieter Abbeel
Fri 12:00 p.m. - 12:30 p.m. | Discussion 1 (Discussion Panel)
Fri 2:00 p.m. - 2:30 p.m. | Abstraction & Meta-Reinforcement Learning (Talk) | David Abel

Reinforcement learning is hard in a fundamental sense: even in finite and deterministic environments, it can take a large number of samples to find a near-optimal policy. In this talk, I discuss the role that abstraction can play in achieving reliable yet efficient learning and planning. I first introduce classes of state abstraction that induce a trade-off between optimality and the size of an agent’s resulting abstract model, yielding a practical algorithm for learning useful and compact representations from a demonstrator. Moreover, I show how these learned, simple representations can underlie efficient learning in complex environments. Second, I analyze the problem of searching for options that make planning more efficient. I present new computational complexity results illustrating that it is NP-hard to find the optimal options that minimize planning time, but show that this set can be approximated in polynomial time. Collectively, these results provide a partial path toward abstractions that minimize the difficulty of high-quality learning and decision making.
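As a rough illustration of the optimality-versus-size trade-off the talk describes, consider an approximate state abstraction that groups states whose optimal Q-values agree within epsilon on every action. The sketch below is a hedged toy version; the greedy, order-dependent clustering and the Q-table are illustrative assumptions, not the talk's algorithm.

```python
# A hedged toy version of an approximate ("epsilon-similar") state
# abstraction: group states whose optimal Q-values agree within eps on
# every action. The greedy clustering and the Q-table are illustrative,
# not the talk's algorithm; they only show the size/optimality trade-off.
import numpy as np

def q_epsilon_abstraction(Q, eps):
    """Q: (n_states, n_actions) array of optimal Q-values.
    Returns a list mapping each ground state to an abstract state id."""
    cluster_of, reps = [], []  # reps holds one representative per cluster
    for s in range(Q.shape[0]):
        for c, r in enumerate(reps):
            if np.max(np.abs(Q[s] - Q[r])) <= eps:
                cluster_of.append(c)
                break
        else:  # no existing cluster is close enough: open a new one
            cluster_of.append(len(reps))
            reps.append(s)
    return cluster_of

Q = np.array([[1.00, 0.20],
              [1.05, 0.25],
              [0.10, 0.90],
              [0.12, 0.88]])
print(q_epsilon_abstraction(Q, eps=0.1))   # [0, 0, 1, 1]: 4 states -> 2
print(q_epsilon_abstraction(Q, eps=0.0))   # [0, 1, 2, 3]: no compression
```

Larger epsilon yields a smaller abstract model at a bounded cost in the value of the resulting policy, which is exactly the kind of trade-off the abstract refers to.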
Fri 2:30 p.m. - 3:00 p.m. | Scalable Meta-Learning (Talk) | Raia Hadsell
Fri 3:00 p.m. - 3:20 p.m. | Poster Spotlights 2 (Spotlight)
Fri 3:20 p.m. - 4:30 p.m. | Coffee/Poster session 2 (Poster Session)
Presenters: Xingyou Song · Puneet Mangla · David Salinas · Zhenxun Zhuang · Leo Feng · Shell Xu Hu · Raul Puri · Wesley Maddox · Aniruddh Raghu · Prudencio Tossou · Mingzhang Yin · Ishita Dasgupta · Kangwook Lee · Ferran Alet · Zhen Xu · Jörg Franke · James Harrison · Jonathan Warrell · Guneet Dhillon · Arber Zela · Xin Qiu · Julien Niklas Siems · Russell Mendonca · Louis Schlessinger · Jeffrey Li · Georgiana Manolache · Debojyoti Dutta · Lucas Glass · Abhishek Singh · Gregor Koehler
Fri 4:30 p.m. - 4:45 p.m. | Contributed Talk 1: Meta-Learning with Warped Gradient Descent (Sebastian Flennerhag) (Talk)
Fri 4:45 p.m. - 5:00 p.m. | Contributed Talk 2: MetaPix: Few-shot video retargeting (Jessica Lee) (Talk)
Fri 5:00 p.m. - 5:30 p.m. | Compositional generalization in minds and machines (Talk) | Brenden Lake

People learn in fast and flexible ways that elude the best artificial neural networks. Once a person learns how to “dax,” they can effortlessly understand how to “dax twice” or “dax vigorously” thanks to their compositional skills. In this talk, we examine how people and machines generalize compositionally in language-like instruction learning tasks. Artificial neural networks have long been criticized for lacking systematic compositionality (Fodor & Pylyshyn, 1988; Marcus, 1998), but new architectures have been tackling increasingly ambitious language tasks. In light of these developments, we reevaluate these classic criticisms and find that artificial neural nets still fail spectacularly when systematic compositionality is required. We then show how people succeed in similar few-shot learning tasks and find that they utilize three inductive biases that can be incorporated into models. Finally, we show how more structured neural nets can acquire compositional skills and human-like inductive biases through meta-learning.
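A toy version of the few-shot instruction-learning setup may help: given study examples that demonstrate primitives and a modifier, compositional generalization means correctly interpreting a never-seen combination such as "dax twice". The vocabulary and the hard-coded modifier rule below are illustrative assumptions, in the style of SCAN-like tasks; the talk's point is that humans induce such rules from the study examples alone, and suitably structured or meta-trained networks can too.

```python
# A toy version of the few-shot compositional task from the talk, in the
# style of SCAN-like instruction learning. The vocabulary and the
# hard-coded modifier rule are illustrative; the point is that the correct
# output for "dax twice" follows compositionally from the study examples.
study = {
    "dax": ["DAX"],
    "wif": ["WIF"],
    "wif twice": ["WIF", "WIF"],
    "wif thrice": ["WIF", "WIF", "WIF"],
}

def interpret(command, examples):
    """Compose a primitive's meaning with a repetition modifier. Here the
    'twice'/'thrice' rule is hard-coded; humans induce it from the study
    examples, and the talk argues meta-learned models can as well."""
    words = command.split()
    meaning = examples[words[0]]           # look up the primitive
    reps = {"twice": 2, "thrice": 3}.get(words[1], 1) if len(words) > 1 else 1
    return meaning * reps

print(interpret("dax twice", study))   # ['DAX', 'DAX'] -- never seen in study
```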
Fri 5:30 p.m. - 5:50 p.m. | Discussion 2 (Discussion Panel)
Author Information
Roberto Calandra (Facebook AI Research)
Ignasi Clavera Gilaberte (UC Berkeley)
Frank Hutter (University of Freiburg & Bosch)
Frank Hutter is a Full Professor for Machine Learning at the Computer Science Department of the University of Freiburg (Germany), where he was previously an assistant professor from 2013 to 2017. Before that, he spent eight years at the University of British Columbia (UBC) for his PhD and postdoc. Frank's main research interests lie in machine learning, artificial intelligence, and automated algorithm design. For his 2009 PhD thesis on algorithm configuration, he received the CAIAC doctoral dissertation award for the best thesis in AI in Canada that year, and with his coauthors he has received several best paper awards and prizes in international competitions on machine learning, SAT solving, and AI planning. Since 2016, he has held an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning.
Joaquin Vanschoren (Eindhoven University of Technology, OpenML)

Joaquin Vanschoren is an Assistant Professor in Machine Learning at the Eindhoven University of Technology. He holds a PhD from the Katholieke Universiteit Leuven, Belgium. His research focuses on meta-learning and on understanding and automating machine learning. He founded and leads OpenML.org, a popular open science platform that facilitates the sharing and reuse of reproducible empirical machine learning data. He has received several demo and application awards and has been an invited speaker at ECDA, StatComp, IDA, AutoML@ICML, CiML@NIPS, AutoML@PRICAI, MLOSS@NIPS, and many other occasions, as well as a tutorial speaker at NIPS and ECMLPKDD. He was general chair at LION 2016, program chair of Discovery Science 2018, and demo chair at ECMLPKDD 2013, and he co-organizes the AutoML and meta-learning workshop series at NIPS 2018, ICML 2016-2018, ECMLPKDD 2012-2015, and ECAI 2012-2014. He is also an editor of and contributor to the book 'Automatic Machine Learning: Methods, Systems, Challenges'.
Jane Wang (DeepMind)
Jane Wang is a research scientist at DeepMind on the neuroscience team, working on meta-reinforcement learning and neuroscience-inspired artificial agents. Her background is in physics, complex systems, and computational and cognitive neuroscience.
More from the Same Authors
- 2021: OpenML Benchmarking Suites » Bernd Bischl · Giuseppe Casalicchio · Matthias Feurer · Pieter Gijsbers · Frank Hutter · Michel Lang · Rafael Gomes Mantovani · Jan van Rijn · Joaquin Vanschoren
- 2021: HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO » Katharina Eggensperger · Philipp Müller · Neeratyoy Mallik · Matthias Feurer · Rene Sass · Aaron Klein · Noor Awad · Marius Lindauer · Frank Hutter
- 2021: Alchemy: A benchmark and analysis toolkit for meta-reinforcement learning agents » Jane Wang · Michael King · Nicolas Porcel · Zeb Kurth-Nelson · Tina Zhu · Charles Deck · Peter Choy · Mary Cassin · Malcolm Reynolds · Francis Song · Gavin Buttimore · David Reichert · Neil Rabinowitz · Loic Matthey · Demis Hassabis · Alexander Lerchner · Matt Botvinick
- 2021: Variational Task Encoders for Model-Agnostic Meta-Learning » Joaquin Vanschoren
- 2021: Open-Ended Learning Strategies for Learning Complex Locomotion Skills » Joaquin Vanschoren
- 2021: Transformers Can Do Bayesian-Inference By Meta-Learning on Prior-Data » Samuel Müller · Noah Hollmann · Sebastian Pineda Arango · Josif Grabocka · Frank Hutter
- 2021: Continual with Sujeeth Bharadwaj, Gabriel Silva, Eric Traut, Jane Wang » Sujeeth Bharadwaj · Jane Wang · Weiwei Yang
- 2022: c-TPE: Generalizing Tree-structured Parzen Estimator with Inequality Constraints for Continuous and Categorical Hyperparameter Optimization » Shuhei Watanabe · Frank Hutter
- 2022: TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second » Noah Hollmann · Samuel Müller · Katharina Eggensperger · Frank Hutter
- 2022: On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition » Samuel Dooley · Rhea Sukthanker · John Dickerson · Colin White · Frank Hutter · Micah Goldblum
- 2022: Fifteen-minute Competition Overview Video » Dustin Carrión-Ojeda · Ihsan Ullah · Sergio Escalera · Isabelle Guyon · Felix Mohr · Manh Hung Nguyen · Joaquin Vanschoren
- 2022: LOTUS: Learning to learn with Optimal Transport in Unsupervised Scenarios » prabhant singh · Joaquin Vanschoren
- 2022: Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks » Steven Adriaensen · Herilalaina Rakotoarison · Samuel Müller · Frank Hutter
- 2022: Transfer NAS with Meta-learned Bayesian Surrogates » Gresa Shala · Thomas Elsken · Frank Hutter · Josif Grabocka
- 2022: Gray-Box Gaussian Processes for Automated Reinforcement Learning » Gresa Shala · André Biedenkapp · Frank Hutter · Josif Grabocka
- 2022: AutoRL-Bench 1.0 » Gresa Shala · Sebastian Pineda Arango · André Biedenkapp · Frank Hutter · Josif Grabocka
- 2022: Bayesian Optimization with a Neural Network Meta-learned on Synthetic Data Only » Samuel Müller · Sebastian Pineda Arango · Matthias Feurer · Josif Grabocka · Frank Hutter
- 2022: GraViT-E: Gradient-based Vision Transformer Search with Entangled Weights » Rhea Sukthanker · Arjun Krishnakumar · sharat patil · Frank Hutter
- 2022: PriorBand: HyperBand + Human Expert Knowledge » Neeratyoy Mallik · Carl Hvarfner · Danny Stoll · Maciej Janowski · Edward Bergman · Marius Lindauer · Luigi Nardi · Frank Hutter
- 2022: Towards Discovering Neural Architectures from Scratch » Simon Schrodi · Danny Stoll · Robin Ru · Rhea Sukthanker · Thomas Brox · Frank Hutter
- 2022: Multi-objective Tree-structured Parzen Estimator Meets Meta-learning » Shuhei Watanabe · Noor Awad · Masaki Onishi · Frank Hutter
- 2022: Towards better benchmarks for AutoML, meta-learning and continual learning in computer vision » Joaquin Vanschoren
- 2022 Competition: Cross-Domain MetaDL: Any-Way Any-Shot Learning Competition with Novel Datasets from Practical Domains » Dustin Carrión-Ojeda · Ihsan Ullah · Sergio Escalera · Isabelle Guyon · Felix Mohr · Manh Hung Nguyen · Joaquin Vanschoren
- 2022 Workshop: NeurIPS 2022 Workshop on Meta-Learning » Huaxiu Yao · Eleni Triantafillou · Fabio Ferreira · Joaquin Vanschoren · Qi Lei
- 2022 Poster: Meta-Album: Multi-domain Meta-Dataset for Few-Shot Image Classification » Ihsan Ullah · Dustin Carrión-Ojeda · Sergio Escalera · Isabelle Guyon · Mike Huisman · Felix Mohr · Jan N. van Rijn · Haozhe Sun · Joaquin Vanschoren · Phan Anh Vu
- 2022 Poster: Joint Entropy Search For Maximally-Informed Bayesian Optimization » Carl Hvarfner · Frank Hutter · Luigi Nardi
- 2022 Poster: Probabilistic Transformer: Modelling Ambiguities and Distributions for RNA Folding and Molecule Design » Jörg Franke · Frederic Runge · Frank Hutter
- 2022 Poster: Data Distributional Properties Drive Emergent In-Context Learning in Transformers » Stephanie Chan · Adam Santoro · Andrew Lampinen · Jane Wang · Aaditya Singh · Pierre Richemond · James McClelland · Felix Hill
- 2022 Poster: NAS-Bench-Suite-Zero: Accelerating Research on Zero Cost Proxies » Arjun Krishnakumar · Colin White · Arber Zela · Renbo Tu · Mahmoud Safari · Frank Hutter
- 2022 Poster: Semantic Exploration from Language Abstractions and Pretrained Representations » Allison Tam · Neil Rabinowitz · Andrew Lampinen · Nicholas Roy · Stephanie Chan · DJ Strouse · Jane Wang · Andrea Banino · Felix Hill
- 2022 Poster: JAHS-Bench-201: A Foundation For Research On Joint Architecture And Hyperparameter Search » Archit Bansal · Danny Stoll · Maciej Janowski · Arber Zela · Frank Hutter
- 2021: Live Q&A Session 2 with Susan Athey, Yoshua Bengio, Sujeeth Bharadwaj, Jane Wang, Joshua Vogelstein, Weiwei Yang » Susan Athey · Yoshua Bengio · Sujeeth Bharadwaj · Jane Wang · Weiwei Yang · Joshua T Vogelstein
- 2021: CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning » Carolin Benjamins · Theresa Eimer · Frederik Schubert · André Biedenkapp · Bodo Rosenhahn · Frank Hutter · Marius Lindauer
- 2021 Workshop: Data Centric AI » Andrew Ng · Lora Aroyo · Greg Diamos · Cody Coleman · Vijay Janapa Reddi · Joaquin Vanschoren · Carole-Jean Wu · Sharon Zhou · Lynn He
- 2021 Workshop: 5th Workshop on Meta-Learning » Erin Grant · Fábio Ferreira · Frank Hutter · Jonathan Richard Schwarz · Joaquin Vanschoren · Huaxiu Yao
- 2021 Poster: How Powerful are Performance Predictors in Neural Architecture Search? » Colin White · Arber Zela · Robin Ru · Yang Liu · Frank Hutter
- 2021 Datasets and Benchmarks: Dataset and Benchmark Poster Session 4 » Joaquin Vanschoren · Serena Yeung
- 2021 Datasets and Benchmarks: Dataset and Benchmark Track 3 » Joaquin Vanschoren · Serena Yeung
- 2021 Datasets and Benchmarks: Dataset and Benchmark Symposium » Joaquin Vanschoren · Serena Yeung
- 2021 Datasets and Benchmarks: Dataset and Benchmark Poster Session 3 » Joaquin Vanschoren · Serena Yeung
- 2021 Poster: Well-tuned Simple Nets Excel on Tabular Datasets » Arlind Kadra · Marius Lindauer · Frank Hutter · Josif Grabocka
- 2021 Poster: NAS-Bench-x11 and the Power of Learning Curves » Shen Yan · Colin White · Yash Savani · Frank Hutter
- 2021 Poster: Active 3D Shape Reconstruction from Vision and Touch » Edward Smith · David Meger · Luis Pineda · Roberto Calandra · Jitendra Malik · Adriana Romero Soriano · Michal Drozdzal
- 2021 Datasets and Benchmarks: Dataset and Benchmark Track 2 » Joaquin Vanschoren · Serena Yeung
- 2021 Panel: The Role of Benchmarks in the Scientific Progress of Machine Learning » Lora Aroyo · Samuel Bowman · Isabelle Guyon · Joaquin Vanschoren
- 2021: MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains + Q&A » Adrian El Baz · Isabelle Guyon · Zhengying Liu · Jan N. Van Rijn · Haozhe Sun · Sébastien Treguer · Wei-Wei Tu · Ihsan Ullah · Joaquin Vanschoren · Phan Ahn Vu
- 2021 Datasets and Benchmarks: Dataset and Benchmark Poster Session 2 » Joaquin Vanschoren · Serena Yeung
- 2021 Poster: Neural Ensemble Search for Uncertainty Estimation and Dataset Shift » Sheheryar Zaidi · Arber Zela · Thomas Elsken · Chris C Holmes · Frank Hutter · Yee Teh
- 2021 Datasets and Benchmarks: Dataset and Benchmark Poster Session 1 » Joaquin Vanschoren · Serena Yeung
- 2021 Datasets and Benchmarks: Dataset and Benchmark Track 1 » Joaquin Vanschoren · Serena Yeung
- 2020: Discussion Panel » Pete Florence · Dorsa Sadigh · Carolina Parada · Jeannette Bohg · Roberto Calandra · Peter Stone · Fabio Ramos
- 2020: Introduction for invited speaker, Louis Kirsch » Joaquin Vanschoren
- 2020 Workshop: 3rd Robot Learning Workshop » Masha Itkina · Alex Bewley · Roberto Calandra · Igor Gilitschenski · Julien PEREZ · Ransalu Senanayake · Markus Wulfmeier · Vincent Vanhoucke
- 2020: Q/A for invited talk #1 » Frank Hutter
- 2020: Meta-learning neural architectures, initial weights, hyperparameters, and algorithm components » Frank Hutter
- 2020: Introduction for invited speaker, Frank Hutter » Jane Wang
- 2020 Workshop: Meta-Learning » Jane Wang · Joaquin Vanschoren · Erin Grant · Jonathan Richard Schwarz · Francesco Visin · Jeff Clune · Roberto Calandra
- 2020 Poster: Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization » Ben Letham · Roberto Calandra · Akshara Rai · Eytan Bakshy
- 2020 Poster: Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning » Younggyo Seo · Kimin Lee · Ignasi Clavera Gilaberte · Thanard Kurutach · Jinwoo Shin · Pieter Abbeel
- 2020 Poster: 3D Shape Reconstruction from Vision and Touch » Edward Smith · Roberto Calandra · Adriana Romero · Georgia Gkioxari · David Meger · Jitendra Malik · Michal Drozdzal
- 2020 Tutorial: (Track1) Where Neuroscience meets AI (And What’s in Store for the Future) » Jane Wang · Kevin Miller · Adam Marblestone
- 2019: Poster Session » Matthia Sabatelli · Adam Stooke · Amir Abdi · Paulo Rauber · Leonard Adolphs · Ian Osband · Hardik Meisheri · Karol Kurach · Johannes Ackermann · Matt Benatan · GUO ZHANG · Chen Tessler · Dinghan Shen · Mikayel Samvelyan · Riashat Islam · Murtaza Dalal · Luke Harries · Andrey Kurenkov · Konrad Żołna · Sudeep Dasari · Kristian Hartikainen · Ofir Nachum · Kimin Lee · Markus Holzleitner · Vu Nguyen · Francis Song · Christopher Grimm · Felipe Leno da Silva · Yuping Luo · Yifan Wu · Alex Lee · Thomas Paine · Wei-Yang Qu · Daniel Graves · Yannis Flet-Berliac · Yunhao Tang · Suraj Nair · Matthew Hausknecht · Akhil Bagaria · Simon Schmitt · Bowen Baker · Paavo Parmas · Benjamin Eysenbach · Lisa Lee · Siyu Lin · Daniel Seita · Abhishek Gupta · Riley Simmons-Edler · Yijie Guo · Kevin Corder · Vikash Kumar · Scott Fujimoto · Adam Lerer · Ignasi Clavera Gilaberte · Nicholas Rhinehart · Ashvin Nair · Ge Yang · Lingxiao Wang · Sungryull Sohn · J. Fernando Hernandez-Garcia · Xian Yeow Lee · Rupesh Srivastava · Khimya Khetarpal · Chenjun Xiao · Luckeciano Carvalho Melo · Rishabh Agarwal · Tianhe Yu · Glen Berseth · Devendra Singh Chaplot · Jie Tang · Anirudh Srinivasan · Tharun Kumar Reddy Medini · Aaron Havens · Misha Laskin · Asier Mujika · Rohan Saphal · Joseph Marino · Alex Ray · Joshua Achiam · Ajay Mandlekar · Zhuang Liu · Danijar Hafner · Zhiwen Tang · Ted Xiao · Michael Walton · Jeff Druce · Ferran Alet · Zhang-Wei Hong · Stephanie Chan · Anusha Nagabandi · Hao Liu · Hao Sun · Ge Liu · Dinesh Jayaraman · John Co-Reyes · Sophia Sanborn
- 2019 Workshop: Robot Learning: Control and Interaction in the Real World » Roberto Calandra · Markus Wulfmeier · Kate Rakelly · Sanket Kamthe · Danica Kragic · Stefan Schaal
- 2019: Panel Discussion led by Grace Lindsay » Grace Lindsay · Blake Richards · Doina Precup · Jacqueline Gottlieb · Jeff Clune · Jane Wang · Richard Sutton · Angela Yu · Ida Momennejad
- 2019: Frank Hutter (University of Freiburg) "A Proposal for a New Competition Design Emphasizing Scientific Insights" » Frank Hutter
- 2019: Invited Talk #1: From brains to agents and back » Jane Wang
- 2019 Poster: Meta-Surrogate Benchmarking for Hyperparameter Optimization » Aaron Klein · Zhenwen Dai · Frank Hutter · Neil Lawrence · Javier González
- 2018: ProMP: Proximal Meta-Policy Search » Ignasi Clavera Gilaberte
- 2018 Workshop: NIPS 2018 Workshop on Meta-Learning » Joaquin Vanschoren · Frank Hutter · Sachin Ravi · Jane Wang · Erin Grant
- 2018 Poster: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models » Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Spotlight: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models » Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Poster: Maximizing acquisition functions for Bayesian optimization » James Wilson · Frank Hutter · Marc Deisenroth
- 2018 Tutorial: Automatic Machine Learning » Frank Hutter · Joaquin Vanschoren
- 2017: Introduction and opening remarks » Roberto Calandra
- 2017 Workshop: Workshop on Meta-Learning » Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine
- 2016: Invited talk, Frank Hutter » Frank Hutter
- 2016 Workshop: Bayesian Optimization: Black-box Optimization and Beyond » Roberto Calandra · Bobak Shahriari · Javier Gonzalez · Frank Hutter · Ryan Adams
- 2016: Frank Hutter (University Freiburg) » Frank Hutter
- 2016: OpenML in research and education » Joaquin Vanschoren
- 2016 Poster: Bayesian Optimization with Robust Bayesian Neural Networks » Jost Tobias Springenberg · Aaron Klein · Stefan Falkner · Frank Hutter
- 2016 Oral: Bayesian Optimization with Robust Bayesian Neural Networks » Jost Tobias Springenberg · Aaron Klein · Stefan Falkner · Frank Hutter
- 2015: Scalable and Flexible Bayesian Optimization for Algorithm Configuration » Frank Hutter
- 2015 Workshop: Bayesian Optimization: Scalability and Flexibility » Bobak Shahriari · Ryan Adams · Nando de Freitas · Amar Shah · Roberto Calandra
- 2015 Poster: Efficient and Robust Automated Machine Learning » Matthias Feurer · Aaron Klein · Katharina Eggensperger · Jost Springenberg · Manuel Blum · Frank Hutter