

Poster in Workshop: Memory in Artificial and Real Intelligence (MemARI)

Meta-Learning General-Purpose Learning Algorithms with Transformers

Louis Kirsch · Luke Metz · James Harrison · Jascha Sohl-Dickstein

Keywords: [ transformers ] [ meta-optimization ] [ large-models ] [ meta-learning ] [ general-purpose ] [ learning-to-learn ] [ black-box ]


Abstract:

Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose learning algorithms from scratch, using only black-box models with minimal inductive bias. A general-purpose learning algorithm is one which takes in training data and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose learning algorithms, and can generalize to learn on datasets different from those used during meta-training. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks used during meta-training, and meta-optimization hyperparameters. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models, which are thought to be bottlenecked by parameter count.
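The setup the abstract describes treats learning itself as sequence modeling: a black-box Transformer receives a task's labeled examples as context and must predict each label from the examples that precede it, and its weights are meta-trained over many sampled tasks. Below is a minimal illustrative sketch of such an in-context learner in PyTorch. It is not the authors' code; the architecture sizes, the causal-mask construction, and the toy task-sampling in the usage lines are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class InContextLearner(nn.Module):
    """Sketch of a black-box learner: a Transformer that classifies by
    conditioning on the labeled examples earlier in its input sequence."""

    def __init__(self, input_dim, num_classes, d_model=256, num_heads=8, num_layers=4):
        super().__init__()
        # Each sequence element: current input concatenated with the previous example's label.
        self.embed = nn.Linear(input_dim + num_classes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.readout = nn.Linear(d_model, num_classes)

    def forward(self, xs, ys_onehot):
        # xs: (batch, seq_len, input_dim); ys_onehot: (batch, seq_len, num_classes)
        # Shift labels right so position t only sees the labels of examples 0..t-1.
        prev_labels = torch.cat([torch.zeros_like(ys_onehot[:, :1]), ys_onehot[:, :-1]], dim=1)
        h = self.embed(torch.cat([xs, prev_labels], dim=-1))
        # Additive causal mask: -inf above the diagonal blocks attention to future examples.
        seq_len = xs.size(1)
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=xs.device), diagonal=1
        )
        h = self.encoder(h, mask=causal_mask)
        return self.readout(h)  # per-position class logits

# Meta-training sketch: each batch is a freshly sampled task (here random toy data);
# the only thing optimized is the Transformer's weights, not a per-task model.
model = InContextLearner(input_dim=32, num_classes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
xs = torch.randn(8, 20, 32)            # 8 tasks, 20 examples each
ys = torch.randint(0, 5, (8, 20))      # labels for each example
logits = model(xs, nn.functional.one_hot(ys, 5).float())
loss = nn.functional.cross_entropy(logits.reshape(-1, 5), ys.reshape(-1))
loss.backward()
opt.step()
```

Because nothing task-specific is learned at test time other than the Transformer's activations, the accessible state (memory) carried across the sequence is what limits how much the learned algorithm can adapt, which is the bottleneck the abstract highlights.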
