Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement (EHVI) criterion and integrating over this uncertainty in the Pareto frontier. We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique. Through this lens, we derive a natural parallel variant, qNEHVI, that reduces computational complexity of parallel EHVI from exponential to polynomial with respect to the batch size. qNEHVI is one-step Bayes-optimal for hypervolume maximization in both noisy and noiseless environments, and we show that it can be optimized effectively with gradient-based methods via sample average approximation. Empirically, we demonstrate not only that qNEHVI is substantially more robust to observation noise than existing MOBO approaches, but also that it achieves state-of-the-art optimization performance and competitive wall-times in large-batch environments.
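The quantity at the heart of EHVI (and its Bayesian treatment in NEHVI) is the expected increase in hypervolume dominated by the Pareto frontier when a new observation is added. The following is an illustrative brute-force sketch for two objectives, not the paper's method: it assumes an independent Gaussian posterior per objective for a single candidate, and all function names are invented for this toy example.

```python
import random

def pareto_front(points):
    """Return the non-dominated subset of 2-D points (maximizing both objectives)."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]

def hypervolume_2d(points, ref):
    """Hypervolume dominated by the Pareto front of `points`, relative to
    reference point `ref` (maximization in both objectives)."""
    pts = [p for p in pareto_front(points) if p[0] > ref[0] and p[1] > ref[1]]
    pts.sort(key=lambda p: p[0], reverse=True)  # f2 is then increasing
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (f1 - next_f1) * (f2 - ref[1])  # area of the vertical strip owned by this point
    return hv

def mc_ehvi(front, ref, mean, std, n_samples=2000, seed=0):
    """Monte Carlo estimate of expected hypervolume improvement for one
    candidate whose two objectives have independent Gaussian posteriors."""
    rng = random.Random(seed)
    base = hypervolume_2d(front, ref)
    total = 0.0
    for _ in range(n_samples):
        y = tuple(rng.gauss(m, s) for m, s in zip(mean, std))
        total += hypervolume_2d(front + [y], ref) - base  # improvement is >= 0
    return total / n_samples
```

The paper's qNEHVI goes well beyond this sketch: it integrates over posterior uncertainty in the Pareto frontier itself and handles parallel candidate batches in polynomial time. A production implementation (`qNoisyExpectedHypervolumeImprovement`) is available in the authors' BoTorch library.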
Author Information
Samuel Daulton (Meta, University of Oxford)
Research Scientist at Meta, PhD Candidate at Oxford. My research focuses on Bayesian optimization.
Maximilian Balandat (Meta)
Eytan Bakshy (Meta)
More from the Same Authors
- 2021: Practical Policy Optimization with Personalized Experimentation
  Mia Garrard · Hanson Wang · Ben Letham · Zehui Wang · Yin Huang · Yichun Hu · Chad Zhou · Norm Zhou · Eytan Bakshy
- 2022: Sparse Bayesian Optimization
  Sulin Liu · Qing Feng · David Eriksson · Ben Letham · Eytan Bakshy
- 2022: One-Shot Optimal Design for Gaussian Process Analysis of Randomized Experiments
  Jelena Markovic · Qing Feng · Eytan Bakshy
- 2022: Panel
  Roman Garnett · José Miguel Hernández-Lobato · Eytan Bakshy · Syrine Belakaria · Stefanie Jegelka
- 2022 Poster: Log-Linear-Time Gaussian Processes Using Binary Tree Kernels
  Michael K. Cohen · Samuel Daulton · Michael A Osborne
- 2022 Poster: Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization
  Samuel Daulton · Xingchen Wan · David Eriksson · Maximilian Balandat · Michael A Osborne · Eytan Bakshy
- 2021 Poster: Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs
  Raul Astudillo · Daniel Jiang · Maximilian Balandat · Eytan Bakshy · Peter Frazier
- 2021 Poster: Bayesian Optimization with High-Dimensional Outputs
  Wesley Maddox · Maximilian Balandat · Andrew Wilson · Eytan Bakshy
- 2020 Contributed Talk 7: Distilled Thompson Sampling: Practical and Efficient Thompson Sampling via Imitation Learning
  Samuel Daulton · Hongseok Namkoong
- 2020 Poster: Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization
  Samuel Daulton · Maximilian Balandat · Eytan Bakshy
- 2020 Poster: BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization
  Maximilian Balandat · Brian Karrer · Daniel Jiang · Samuel Daulton · Ben Letham · Andrew Wilson · Eytan Bakshy
- 2020 Poster: Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
  Ben Letham · Roberto Calandra · Akshara Rai · Eytan Bakshy
- 2020 Poster: High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization
  Qing Feng · Ben Letham · Hongzi Mao · Eytan Bakshy
- 2020 Spotlight: High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization
  Qing Feng · Ben Letham · Hongzi Mao · Eytan Bakshy
- 2019 Invited Speaker: Eytan Bakshy
- 2017 Poster: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
  Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris
- 2017 Oral: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
  Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris
- 2016 Poster: Minimizing Regret on Reflexive Banach Spaces and Nash Equilibria in Continuous Zero-Sum Games
  Maximilian Balandat · Walid Krichene · Claire Tomlin · Alexandre Bayen