Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces

Alexander Thebelt · Calvin Tsay · Robert Lee · Nathan Sudermann-Merx · David Walz · Behrang Shafei · Ruth Misener

Hall J #115

Keywords: [ Black-box Optimization ] [ Bayesian optimization ] [ Hybrid Spaces ] [ Known Constraints ] [ Mixed-Variable Spaces ] [ Tree Ensembles ] [ Global Optimization ]

[ Abstract ]
[ Paper ] [ Poster ] [ OpenReview ]
Tue 29 Nov 9 a.m. PST — 11 a.m. PST


Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search, as they achieve good predictive performance with little or no manual tuning, naturally handle discrete feature spaces, and are relatively insensitive to outliers in the training data. Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function. To address both points simultaneously, we propose using the kernel interpretation of tree ensembles as a Gaussian process prior to obtain model variance estimates, and we develop a compatible optimization formulation for the acquisition function. The latter further allows us to seamlessly integrate known constraints to improve sampling efficiency, both by incorporating domain knowledge in engineering settings and by modeling search-space symmetries, e.g., hierarchical relationships in neural architecture search. Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods on problems combining mixed-variable feature spaces and known input constraints.
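The kernel interpretation mentioned in the abstract can be illustrated with a minimal sketch (not the paper's implementation): define a "tree agreement" kernel, k(x, x') = fraction of trees in which x and x' fall into the same leaf, and use it as a Gaussian process prior covariance to obtain posterior variance estimates. The forest setup, function names, and noise level below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(30, 2))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(30)

# Fit a tree ensemble on the observed black-box evaluations.
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

def tree_kernel(A, B, forest):
    """Tree agreement kernel: fraction of trees where each pair of
    points from A and B lands in the same leaf (values in [0, 1])."""
    leaves_A = forest.apply(A)  # shape (n_A, n_trees): leaf index per tree
    leaves_B = forest.apply(B)
    return (leaves_A[:, None, :] == leaves_B[None, :, :]).mean(axis=2)

def gp_posterior(X_new, X_train, y_train, forest, noise=1e-2):
    """Standard GP posterior mean/std using the tree kernel as prior
    covariance; `noise` is an assumed observation-noise variance."""
    K = tree_kernel(X_train, X_train, forest) + noise * np.eye(len(X_train))
    K_star = tree_kernel(X_new, X_train, forest)
    K_ss = tree_kernel(X_new, X_new, forest)
    K_inv = np.linalg.inv(K)
    mean = K_star @ K_inv @ y_train
    cov = K_ss - K_star @ K_inv @ K_star.T
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

X_new = rng.uniform(-2, 2, size=(5, 2))
mean, std = gp_posterior(X_new, X_train, y_train, forest)
```

Because the kernel is piecewise constant in the inputs, the resulting acquisition function is too, which is what motivates the paper's dedicated optimization formulation rather than gradient-based acquisition maximization.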
