Bayesian optimization (BO) is a powerful approach for optimizing black-box, expensive-to-evaluate functions. To enable a flexible trade-off between cost and accuracy, many applications allow the function to be evaluated at multiple fidelities. To reduce the optimization cost while maximizing the benefit-cost ratio, in this paper we propose Batch Multi-fidelity Bayesian Optimization with Deep Auto-Regressive Networks (BMBO-DARN). We use a set of Bayesian neural networks to construct a fully auto-regressive model, which is expressive enough to capture strong yet complex relationships across all the fidelities, so as to improve surrogate learning and optimization performance. Furthermore, to enhance the quality and diversity of queries, we develop a simple yet efficient batch querying method that avoids any combinatorial search over the fidelities. We propose a batch acquisition function based on the Max-value Entropy Search (MES) principle, which penalizes highly correlated queries and encourages diversity. We use posterior samples and moment matching to enable efficient computation of the acquisition function, and conduct alternating optimization over each fidelity-input pair, which guarantees an improvement at every step. We demonstrate the advantage of our approach on four real-world hyperparameter optimization applications.
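The auto-regressive structure described above can be illustrated with a minimal sketch. This is not the authors' implementation: it replaces the Bayesian neural networks with tiny RBF-kernel Gaussian processes, and replaces the MES-based batch acquisition with a crude variance-per-cost rule. The key idea it does show is the auto-regressive chain: the surrogate at fidelity m takes the input x together with the lower-fidelity posterior mean as an extra feature.

```python
# Minimal sketch of an auto-regressive multi-fidelity surrogate.
# Assumptions (not from the paper): RBF-kernel GPs instead of Bayesian
# neural nets, and variance/cost as a stand-in for the MES acquisition.
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between row-stacked inputs A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * ls ** 2))

class GP:
    """Tiny GP regressor used as the per-fidelity surrogate."""
    def fit(self, X, y, noise=1e-4):
        self.X, self.y = X, y
        K = rbf(X, X) + noise * np.eye(len(X))
        self.Kinv = np.linalg.inv(K)
        return self
    def predict(self, Xs):
        Ks = rbf(Xs, self.X)
        mu = Ks @ self.Kinv @ self.y
        var = 1.0 - np.einsum('ij,jk,ik->i', Ks, self.Kinv, Ks)
        return mu, np.maximum(var, 1e-12)

def predict_chain(models, X):
    """Propagate a prediction through the fidelity chain: fidelity m > 0
    sees the augmented input [x, mu_{m-1}(x)]."""
    mu = var = None
    for m, gp in enumerate(models):
        Z = X if m == 0 else np.hstack([X, mu[:, None]])
        mu, var = gp.predict(Z)
    return mu, var

def fit_autoregressive(Xs, ys):
    """Fit one GP per fidelity, low to high, each conditioned on the
    posterior mean of the already-fitted lower fidelities."""
    models = []
    for m, (X, y) in enumerate(zip(Xs, ys)):
        Z = X if m == 0 else np.hstack([X, predict_chain(models, X)[0][:, None]])
        models.append(GP().fit(Z, y))
    return models

def select_query(models, Xcand, costs):
    """Pick a (fidelity, input) pair maximizing predictive variance per
    unit cost -- a simple proxy, NOT the paper's MES criterion."""
    best = None
    for m in range(len(models)):
        _, var = predict_chain(models[:m + 1], Xcand)
        score = var / costs[m]
        i = int(np.argmax(score))
        if best is None or score[i] > best[0]:
            best = (score[i], m, Xcand[i])
    return best[1], best[2]
```

A quick usage example on a synthetic 1D problem with two fidelities: fit the chain, predict at the high fidelity, and select the next query given per-fidelity evaluation costs, e.g. `select_query(models, Xcand, costs=[1.0, 5.0])`.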
Author Information
Shibo Li (University of Utah)
Robert Kirby (University of Utah)
Shandian Zhe (University of Utah)
More from the Same Authors
- 2022 Spotlight: Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm »
  Aidan Good · Jiaqi Lin · Xin Yu · Hannah Sieg · Mikey Fergurson · Shandian Zhe · Jerzy Wieczorek · Thiago Serra
- 2022 Poster: Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm »
  Aidan Good · Jiaqi Lin · Xin Yu · Hannah Sieg · Mikey Fergurson · Shandian Zhe · Jerzy Wieczorek · Thiago Serra
- 2022 Poster: Infinite-Fidelity Coregionalization for Physical Simulation »
  Shibo Li · Zheng Wang · Robert Kirby · Shandian Zhe
- 2022 Poster: Batch Multi-Fidelity Active Learning with Budget Constraints »
  Shibo Li · Jeff M Phillips · Xin Yu · Robert Kirby · Shandian Zhe
- 2021 Poster: Self-Adaptable Point Processes with Nonparametric Time Decays »
  Zhimeng Pan · Zheng Wang · Jeff M Phillips · Shandian Zhe
- 2021 Poster: Characterizing possible failure modes in physics-informed neural networks »
  Aditi Krishnapriyan · Amir Gholami · Shandian Zhe · Robert Kirby · Michael Mahoney
- 2020 Poster: Multi-Fidelity Bayesian Optimization via Deep Neural Networks »
  Shibo Li · Wei Xing · Robert Kirby · Shandian Zhe
- 2018 Poster: Stochastic Nonparametric Event-Tensor Decomposition »
  Shandian Zhe · Yishuai Du
- 2018 Spotlight: Stochastic Nonparametric Event-Tensor Decomposition »
  Shandian Zhe · Yishuai Du