Bayesian optimization is a sequential decision making framework for optimizing expensive-to-evaluate black-box functions. Computing a full lookahead policy amounts to solving a highly intractable stochastic dynamic program. Myopic approaches, such as expected improvement, are often adopted in practice, but they ignore the long-term impact of the immediate decision. Existing nonmyopic approaches are mostly heuristic and/or computationally expensive. In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree. Instead of solving these problems in a nested way, we equivalently optimize all decision variables in the full tree jointly, in a "one-shot" fashion. Combining this with an efficient method for implementing multi-step Gaussian process "fantasization," we demonstrate that multi-step expected improvement is computationally tractable and exhibits performance superior to existing methods on a wide range of benchmarks.
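The core ingredient described above — Gaussian process "fantasization" — can be illustrated with a toy sketch. This is not the paper's implementation (which jointly optimizes all tree decisions in one shot); it is a minimal NumPy example, under assumed RBF-kernel and noiseless-observation settings, showing how hypothetical outcomes drawn at a candidate point yield conditioned models for the next lookahead step:

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    """RBF kernel between two 1-D arrays of inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, X_star, noise=1e-6):
    """Exact GP posterior mean and covariance at test points X_star."""
    K = rbf(X, X) + noise * np.eye(len(X))
    K_s = rbf(X, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = rbf(X_star, X_star) - v.T @ v
    return mu, cov

# observed data (toy objective: sin)
X = np.array([0.0, 1.0])
y = np.sin(X)

# candidate point for the immediate decision
x_next = np.array([0.5])
mu, cov = gp_posterior(X, y, x_next)

# "fantasize": draw hypothetical outcomes at x_next from the posterior
rng = np.random.default_rng(0)
fantasies = mu + np.sqrt(np.diag(cov)) * rng.standard_normal(3)

# each fantasy defines a conditioned model under which the next-step
# decision variable (here x_deep, a hypothetical name) is evaluated
x_deep = np.array([0.75])
for f in fantasies:
    X2 = np.append(X, x_next)
    y2 = np.append(y, f)
    mu2, cov2 = gp_posterior(X2, y2, x_deep)
```

A multi-step scenario tree repeats this branching recursively; the one-shot formulation in the paper avoids the nested loop by treating every `x_next`/`x_deep`-style variable across the whole tree as a single joint optimization problem.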
Author Information
Shali Jiang (Facebook)
Daniel Jiang (Facebook)
Maximilian Balandat (Facebook)
Brian Karrer (Facebook)
Jacob Gardner (University of Pennsylvania)
Roman Garnett (Washington University in St. Louis)
More from the Same Authors
- 2021: Optimizing High-Dimensional Physics Simulations via Composite Bayesian Optimization »
  Wesley Maddox · Qing Feng · Maximilian Balandat
- 2021 Poster: Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs »
  Raul Astudillo · Daniel Jiang · Maximilian Balandat · Eytan Bakshy · Peter Frazier
- 2021 Poster: Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement »
  Samuel Daulton · Maximilian Balandat · Eytan Bakshy
- 2021 Poster: Bayesian Optimization with High-Dimensional Outputs »
  Wesley Maddox · Maximilian Balandat · Andrew Wilson · Eytan Bakshy
- 2021 Poster: Scaling Gaussian Processes with Derivative Information Using Variational Inference »
  Misha Padidar · Xinran Zhu · Leo Huang · Jacob Gardner · David Bindel
- 2020 Poster: Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization »
  Samuel Daulton · Maximilian Balandat · Eytan Bakshy
- 2020 Poster: Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization »
  Geoff Pleiss · Martin Jankowiak · David Eriksson · Anil Damle · Jacob Gardner
- 2020 Poster: BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization »
  Maximilian Balandat · Brian Karrer · Daniel Jiang · Samuel Daulton · Ben Letham · Andrew Wilson · Eytan Bakshy
- 2019 Poster: Cost Effective Active Search »
  Shali Jiang · Roman Garnett · Benjamin Moseley
- 2019 Poster: D-VAE: A Variational Autoencoder for Directed Acyclic Graphs »
  Muhan Zhang · Shali Jiang · Zhicheng Cui · Roman Garnett · Yixin Chen
- 2018 Poster: Efficient nonmyopic batch active search »
  Shali Jiang · Gustavo Malkomes · Matthew Abbott · Benjamin Moseley · Roman Garnett
- 2018 Spotlight: Efficient nonmyopic batch active search »
  Shali Jiang · Gustavo Malkomes · Matthew Abbott · Benjamin Moseley · Roman Garnett
- 2018 Poster: Automating Bayesian optimization with Bayesian optimization »
  Gustavo Malkomes · Roman Garnett
- 2016 Poster: Minimizing Regret on Reflexive Banach Spaces and Nash Equilibria in Continuous Zero-Sum Games »
  Maximilian Balandat · Walid Krichene · Claire Tomlin · Alexandre Bayen
- 2016 Poster: Bayesian optimization for automated model selection »
  Gustavo Malkomes · Charles Schaff · Roman Garnett
- 2015: Bayesian Quadrature: Lessons Learned and Looking Forwards »
  Roman Garnett
- 2015 Poster: Bayesian Active Model Selection with an Application to Automated Audiometry »
  Jacob Gardner · Gustavo Malkomes · Roman Garnett · Kilian Weinberger · Dennis Barbour · John Cunningham
- 2014 Poster: Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature »
  Tom Gunter · Michael A Osborne · Roman Garnett · Philipp Hennig · Stephen J Roberts
- 2013 Poster: Σ-Optimality for Active Learning on Gaussian Random Fields »
  Yifei Ma · Roman Garnett · Jeff Schneider