Meta-Learning General-Purpose Learning Algorithms with Transformers
Louis Kirsch · Luke Metz · James Harrison · Jascha Sohl-Dickstein
Event URL: https://openreview.net/forum?id=0r3dY78lV-4

Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose learning algorithms from scratch, using only black-box models with minimal inductive bias. A general-purpose learning algorithm is one which takes in training data and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose learning algorithms, and can generalize to learn on datasets different from those used during meta-training. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks used during meta-training, and meta-optimization hyperparameters. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models, which are thought to be bottlenecked by parameter count.
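
The following is a minimal sketch, not the authors' implementation, of the black-box setup the abstract describes: each task's labeled examples and an unlabeled query are concatenated into one token sequence, and a Transformer is meta-trained end to end to emit the query's prediction, with no hand-specified inference model, loss, or optimizer inside the learned algorithm. The PyTorch backbone, class and variable names, toy linear-classification tasks, and hyperparameters below are all illustrative assumptions.

# Minimal sketch (illustrative, not the authors' code) of meta-training a
# Transformer to act as an in-context, general-purpose learner.
import torch
import torch.nn as nn

class InContextLearner(nn.Module):
    def __init__(self, x_dim, n_classes, d_model=64, n_layers=4, n_heads=4):
        super().__init__()
        # Each token concatenates an input x with a (possibly masked) one-hot label.
        self.embed = nn.Linear(x_dim + n_classes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.readout = nn.Linear(d_model, n_classes)

    def forward(self, xs, ys_onehot, x_query):
        # xs: (batch, k, x_dim), ys_onehot: (batch, k, n_classes), x_query: (batch, x_dim)
        query_tok = torch.cat([x_query, torch.zeros_like(ys_onehot[:, 0])], dim=-1)
        support = torch.cat([xs, ys_onehot], dim=-1)            # labeled training examples
        seq = torch.cat([support, query_tok[:, None]], dim=1)   # append query with masked label
        h = self.transformer(self.embed(seq))
        return self.readout(h[:, -1])                           # prediction at the query token

# Meta-training loop sketch: sample tasks, compute the query loss, update the Transformer.
if __name__ == "__main__":
    model = InContextLearner(x_dim=8, n_classes=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(100):
        xs = torch.randn(32, 16, 8)                             # toy random linear tasks;
        w = torch.randn(32, 8, 1)                               # real runs meta-train on many datasets
        ys = (torch.einsum("bkd,bdo->bko", xs, w).squeeze(-1) > 0).long()
        ys_onehot = nn.functional.one_hot(ys, 2).float()
        x_q = torch.randn(32, 8)
        y_q = (torch.einsum("bd,bdo->bo", x_q, w).squeeze(-1) > 0).long()
        loss = nn.functional.cross_entropy(model(xs, ys_onehot, x_q), y_q)
        opt.zero_grad(); loss.backward(); opt.step()

In this sketch the only learning rule applied at test time is a forward pass of the Transformer over the new task's examples; all "learning-to-learn" is absorbed into the meta-trained weights.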

Author Information

Louis Kirsch (The Swiss AI Lab IDSIA & Google Brain)
Luke Metz (Google Brain)
James Harrison (Google)
Jascha Sohl-Dickstein (Google)
