Towards Learning Universal Hyperparameter Optimizers with Transformers
Yutian Chen · Xingyou Song · Chansoo Lee · Zi Wang · Richard Zhang · David Dohan · Kazuya Kawakami · Greg Kochanski · Arnaud Doucet · Marc'Aurelio Ranzato · Sagi Perel · Nando de Freitas

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #129

Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google's Vizier database, one of the world's largest HPO datasets. Our extensive experiments demonstrate that the OptFormer can simultaneously imitate at least 7 different HPO algorithms, whose performance can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OptFormer also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better-calibrated predictions. This work paves the way for future extensions that train a Transformer-based model as a general HPO optimizer.
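The key idea of a text-based HPO interface is that tuning trials (hyperparameter settings plus objective values) can be serialized as plain text, so a single sequence model can consume studies with arbitrary, differing hyperparameter sets. The sketch below illustrates this idea only; the function names and text format are hypothetical and do not reflect the OptFormer's actual tokenization.

```python
# Hypothetical sketch of a text-based HPO interface: trials are rendered
# as text so a sequence model can read a study regardless of which
# hyperparameters it contains. The format here is illustrative, not the
# OptFormer's actual serialization scheme.

def serialize_trial(params: dict, objective: float) -> str:
    """Render one tuning trial (parameters and result) as a text segment."""
    parts = [f"{name}={value}" for name, value in sorted(params.items())]
    return ", ".join(parts) + f" -> {objective:.4f}"

def serialize_study(trials: list) -> str:
    """Concatenate a study's trials into a single prompt-like string."""
    return " | ".join(serialize_trial(p, y) for p, y in trials)

# Two studies with different hyperparameter sets share one text interface.
history = [
    ({"learning_rate": 0.01, "batch_size": 32}, 0.8512),
    ({"learning_rate": 0.001, "batch_size": 64}, 0.9021),
]
prompt = serialize_study(history)
# A sequence model trained on many such serialized studies could then be
# prompted to generate the next trial's text, acting as a learned policy.
```

Because the interface is text, studies with entirely different search spaces can be mixed in one training corpus, which is what lets such a model learn from heterogeneous "data from the wild".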

Author Information

Yutian Chen (DeepMind)
Xingyou Song (Google Brain)
Chansoo Lee (Google)
Zi Wang (Google Brain)
Richard Zhang (Google Brain)
David Dohan (Google Brain)
Kazuya Kawakami
Greg Kochanski (Google, Inc.)
Arnaud Doucet (Oxford)
Marc'Aurelio Ranzato (DeepMind)
Sagi Perel (Carnegie Mellon University)
Nando de Freitas (DeepMind)