Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs
Raul Astudillo · Daniel Jiang · Maximilian Balandat · Eytan Bakshy · Peter Frazier

Wed Dec 08 04:30 PM -- 06:00 PM (PST)

Bayesian optimization (BO) is a sample-efficient approach to optimizing costly-to-evaluate black-box functions. Most BO methods ignore how evaluation costs may vary over the optimization domain. However, these costs can be highly heterogeneous and are often unknown in advance in many practical settings, such as hyperparameter tuning of machine learning algorithms or physics-based simulation optimization. Moreover, the few existing methods that acknowledge cost heterogeneity do not naturally accommodate a budget constraint on the total evaluation cost. This combination of unknown costs and a budget constraint introduces a new dimension to the exploration-exploitation trade-off, where learning about the cost incurs a cost itself. Existing methods do not reason about the various trade-offs of this problem in a principled way, often leading to poor performance. We formalize this claim by proving that expected improvement and expected improvement per unit of cost, arguably the two most widely used acquisition functions in practice, can be arbitrarily inferior with respect to the optimal non-myopic policy. To overcome the shortcomings of existing approaches, we propose the budgeted multi-step expected improvement, a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous and unknown evaluation costs. We show that our acquisition function outperforms existing methods on a variety of synthetic and real problems.
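The two baseline acquisition functions named in the abstract have simple closed forms under a Gaussian posterior. The sketch below shows classical expected improvement and its cost-aware variant, expected improvement per unit of cost; it is illustrative only (function names and the scalar-input signature are our own), and it does not implement the paper's proposed non-myopic, budget-aware acquisition function.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    """Classical EI for maximization: E[max(f(x) - best_f, 0)]
    under a Gaussian posterior f(x) ~ N(mu, sigma^2)."""
    z = (mu - best_f) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def ei_per_unit_cost(mu, sigma, best_f, expected_cost):
    """EI per unit of cost: divides EI by the (estimated) evaluation
    cost at x, e.g. the posterior mean of a GP fit to observed costs.
    This is the myopic cost-aware baseline the paper compares against."""
    return expected_improvement(mu, sigma, best_f) / expected_cost
```

Both rules are myopic: EI-per-cost, for instance, always discounts expensive points, even when spending a large fraction of the remaining budget on one informative evaluation would be optimal, which is the behavior the paper's lower-bound argument exploits.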

Author Information

Raul Astudillo (Cornell University)

I am a Ph.D. candidate in the School of Operations Research and Information Engineering at Cornell University, where I am fortunate to be advised by Professor Peter Frazier. Prior to coming to Cornell, I completed the undergraduate program in Mathematics offered jointly by the University of Guanajuato and the Center for Research in Mathematics. My current research focuses on the design and analysis of Bayesian optimization algorithms for problems with nested objective functions.

Daniel Jiang (Facebook)
Maximilian Balandat (Meta)
Eytan Bakshy (Meta)
Peter Frazier (Cornell / Uber)

Peter Frazier is an Associate Professor in the School of Operations Research and Information Engineering at Cornell University, and a Staff Data Scientist at Uber. He received a Ph.D. in Operations Research and Financial Engineering from Princeton University in 2009. His research is at the intersection of machine learning and operations research, focusing on Bayesian optimization, multi-armed bandits, active learning, and Bayesian nonparametric statistics. He is an associate editor for Operations Research, ACM TOMACS, and IISE Transactions, and is the recipient of an AFOSR Young Investigator Award and an NSF CAREER Award.
