
Bayesian Optimization with High-Dimensional Outputs
Wesley Maddox · Maximilian Balandat · Andrew Wilson · Eytan Bakshy

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Bayesian optimization is a sample-efficient black-box optimization procedure that is typically applied to a small number of independent objectives. However, in practice we often wish to optimize objectives defined over many correlated outcomes (or “tasks”). For example, scientists may want to optimize the coverage of a cell tower network across a dense grid of locations. Similarly, engineers may seek to balance the performance of a robot across dozens of different environments via constrained or robust optimization. Unfortunately, the Gaussian Process (GP) models typically used as probabilistic surrogates for multi-task Bayesian optimization scale poorly with the number of outcomes, greatly limiting applicability. We devise an efficient technique for exact multi-task GP sampling that combines exploiting Kronecker structure in the covariance matrices with Matheron’s identity, allowing us to perform Bayesian optimization using exact multi-task GP models with tens of thousands of correlated outputs. In doing so, we achieve substantial improvements in sample efficiency compared to existing approaches that model solely the outcome metrics. We demonstrate how this unlocks a new class of applications for Bayesian optimization across a range of tasks in science and engineering, including optimizing interference patterns of an optical interferometer with 65,000 outputs.
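The core idea can be illustrated in a short sketch. Below is a minimal NumPy example (not the authors' implementation; all kernel choices, sizes, and data are illustrative) of drawing an exact posterior sample from a multi-task GP via Matheron's identity, exploiting the Kronecker structure K_x ⊗ K_T of the covariance so that only the small per-factor matrices are ever decomposed.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between rows of A and B (illustrative choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Toy sizes: n training inputs, m test inputs, t correlated tasks.
n, m, t = 15, 4, 3
X = np.linspace(-3, 3, n)[:, None]
Xs = np.linspace(-2.7, 2.7, m)[:, None]

# Task covariance (low rank plus diagonal) and observation noise variance.
A = rng.normal(size=(t, 2))
K_T = A @ A.T + 0.5 * np.eye(t)
sigma2 = 1e-2

# 1) Joint prior sample over train + test inputs for all tasks.
#    The full covariance is K_x (x) K_T, so its Cholesky factor is
#    L_x (x) L_t, and (L_x (x) L_t) vec(Z) = vec(L_t Z L_x^T):
#    the (n+m)t x (n+m)t matrix is never formed.
X_all = np.vstack([X, Xs])
K_all = rbf(X_all, X_all) + 1e-6 * np.eye(n + m)
L_x, L_t = np.linalg.cholesky(K_all), np.linalg.cholesky(K_T)
F = L_t @ rng.normal(size=(t, n + m)) @ L_x.T     # (t, n+m) prior sample
f_train, f_test = F[:, :n], F[:, n:]

# Synthetic observations (in practice, outcomes of the experiment).
y = rng.normal(size=(t, n))

# 2) Matheron's identity: condition the prior sample on the residual
#    y - f_train - eps at the training points. With Kronecker structure,
#    (K_x (x) K_T + sigma2*I)^{-1} reduces to eigendecompositions of the
#    two small factors, applied via the same vec identity.
K_x = K_all[:n, :n]
eva_x, eve_x = np.linalg.eigh(K_x)
eva_t, eve_t = np.linalg.eigh(K_T)
resid = y - f_train - np.sqrt(sigma2) * rng.normal(size=(t, n))
R = eve_t.T @ resid @ eve_x                       # rotate into joint eigenbasis
R = R / (np.outer(eva_t, eva_x) + sigma2)         # divide by joint eigenvalues
alpha = eve_t @ R @ eve_x.T                       # rotate back, shape (t, n)

# 3) Posterior sample at the test points: prior sample plus the
#    cross-covariance (K_star (x) K_T) applied to alpha.
K_star = rbf(Xs, X)                               # (m, n) cross-covariance
f_post = f_test + K_T @ alpha @ K_star.T          # (t, m)
```

The payoff is in step 2: the dense solve would cost O(n³t³), while the factorized version costs O(n³ + t³) plus matrix products, which is what makes exact sampling with tens of thousands of correlated outputs feasible.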

Author Information

Wesley Maddox (New York University)
Maximilian Balandat (University of California, Berkeley)
Andrew Wilson (New York University)

I am a professor of machine learning at New York University.

Eytan Bakshy (Facebook)
