Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization. Typically, BO relies on conventional Gaussian process (GP) regression, whose algorithmic complexity is cubic in the number of evaluations. As a result, GP-based BO cannot leverage large numbers of past function evaluations, for example, to warm-start related BO runs. We propose a multi-task adaptive Bayesian linear regression model for transfer learning in BO, whose complexity is linear in the number of function evaluations: one Bayesian linear regression model is associated with each black-box function optimization problem (or task), while transfer learning is achieved by coupling the models through a shared deep neural net. Experiments show that the neural net learns a representation suitable for warm-starting the black-box optimization problems and that BO runs can be accelerated when the target black-box function (e.g., validation loss) is learned together with other related signals (e.g., training loss). The proposed method was found to be at least one order of magnitude faster than methods recently published in the literature.
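To make the linear-complexity claim concrete, below is a minimal sketch (not the authors' code) of the core idea: each task keeps a Bayesian linear regression (BLR) head on top of features produced by a shared neural network, so the per-task fitting cost grows linearly with the number of evaluations. The feature map `shared_net`, its output dimension, and the prior/noise precisions `alpha` and `beta` are illustrative assumptions.

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, beta=1.0):
    """Posterior over BLR weights given features Phi (N x D) and targets y (N,).

    alpha: precision of the Gaussian prior on the weights (assumed value).
    beta:  precision of the Gaussian observation noise (assumed value).
    Cost is O(N * D^2 + D^3), i.e., linear in the number of evaluations N.
    """
    D = Phi.shape[1]
    S_inv = alpha * np.eye(D) + beta * Phi.T @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)                         # posterior covariance
    m = beta * S @ Phi.T @ y                         # posterior mean
    return m, S

def blr_predict(phi_new, m, S, beta=1.0):
    """Predictive mean and variance at a new feature vector phi_new (D,)."""
    mean = phi_new @ m
    var = 1.0 / beta + phi_new @ S @ phi_new
    return mean, var

# Hypothetical usage: map each task's hyperparameter configs through the shared
# network, then fit one BLR head per task on the resulting features.
# Phi_t = shared_net(X_t)                                  # shared feature map (assumed)
# m_t, S_t = blr_posterior(Phi_t, y_t)
# mu, var = blr_predict(shared_net(x_candidate), m_t, S_t) # feeds the BO acquisition
```

In this sketch, transfer learning comes from the shared feature map: the network is trained on evaluations from all tasks, while each task's BLR head only touches its own data through the fixed-dimensional sufficient statistics Phi.T @ Phi and Phi.T @ y.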
Author Information
Valerio Perrone (University of Warwick)
Rodolphe Jenatton (Amazon Research)
Matthias W Seeger (Amazon Development Center)
Cedric Archambeau (Amazon)
More from the Same Authors
- 2021: Gradient-matching coresets for continual learning
  Lukas Balles · Giovanni Zappella · Cedric Archambeau
- 2022 Poster: On the Adversarial Robustness of Mixture of Experts
  Joan Puigcerver · Rodolphe Jenatton · Carlos Riquelme · Pranjal Awasthi · Srinadh Bhojanapalli
- 2022 Poster: Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
  Basil Mustafa · Carlos Riquelme · Joan Puigcerver · Rodolphe Jenatton · Neil Houlsby
- 2020: Bayesian optimization by density ratio estimation
  Louis Tiao · Aaron Klein · Cedric Archambeau · Edwin Bonilla · Matthias W Seeger · Fabio Ramos
- 2019 Poster: Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning
  Valerio Perrone · Huibin Shen · Matthias Seeger · Cedric Archambeau · Rodolphe Jenatton
- 2018 Poster: Deep State Space Models for Time Series Forecasting
  Syama Sundar Rangapuram · Matthias W Seeger · Jan Gasthaus · Lorenzo Stella · Bernie Wang · Tim Januschowski
- 2018 Poster: A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks
  Jeffrey Chan · Valerio Perrone · Jeffrey Spence · Paul Jenkins · Sara Mathieson · Yun Song
- 2018 Spotlight: A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks
  Jeffrey Chan · Valerio Perrone · Jeffrey Spence · Paul Jenkins · Sara Mathieson · Yun Song
- 2016 Poster: Bayesian Intermittent Demand Forecasting for Large Inventories
  Matthias W Seeger · David Salinas · Valentin Flunkert
- 2016 Oral: Bayesian Intermittent Demand Forecasting for Large Inventories
  Matthias W Seeger · David Salinas · Valentin Flunkert
- 2013 Poster: Convex Relaxations for Permutation Problems
  Fajwel Fogel · Rodolphe Jenatton · Francis Bach · Alexandre d'Aspremont