Modern learning systems, such as recent deep learning, reinforcement learning, and probabilistic inference architectures, have become increasingly complex, often beyond human ability to comprehend. This complexity matters: the more complex these systems are, the more powerful they often are. A new research problem has therefore emerged: how can this complexity, i.e., the design, components, and hyperparameters, be configured automatically so that these systems perform as well as possible? This is the problem of metalearning. Several approaches have emerged, including those based on Bayesian optimization, gradient descent, reinforcement learning, and evolutionary computation. The symposium presents an overview of these approaches, given by the researchers who developed them. A panel discussion compares the strengths of the different approaches and their potential for future developments and applications. The audience will thus gain a practical understanding of how to use metalearning to improve the learning systems they are working with, as well as an overview of opportunities for future research on metalearning.
The symposium schedule is available at the symposium website, metalearning-symposium.ml. Speakers will include:
- Pieter Abbeel, Embodied Intelligence and UC Berkeley
- Chrisantha Fernando, DeepMind
- Roman Garnett, Washington Univ. in St. Louis
- Frank Hutter, Freiburg Univ.
- Max Jaderberg, DeepMind
- Quoc Le, Google Brain
- Risto Miikkulainen, Sentient and UT Austin
- Juergen Schmidhuber, Nnaisense and IDSIA
- Satinder Singh, Cogitai and Univ. of Michigan
- Ken Stanley, Uber and UCF
- Ilya Sutskever, OpenAI
- Oriol Vinyals, DeepMind
- Jane Wang, DeepMind