

Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian

Victor Picheny · Robert B Gramacy · Stefan Wild · Sebastien Le Digabel

Area 5+6+7+8 #33

Keywords: [ Gaussian Processes ] [ Active Learning ] [ (Other) Optimization ]


An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo approximation. Here we introduce an alternative slack-variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof. We show that our new slack "ALBO" compares favorably to the original. Its superiority over conventional alternatives is reinforced on several new mixed-constraint examples.
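The slack-variable device described in the abstract can be sketched on a toy problem. The sketch below is illustrative only and is not the paper's method: it replaces the BO sub-solver with scipy's L-BFGS-B, uses a made-up quadratic objective with one linear inequality constraint, and applies a textbook AL multiplier/penalty schedule. Each inequality c_j(x) <= 0 is turned into an equality c_j(x) + s_j = 0 with a nonnegative slack s_j, so the inner subproblem is a smooth joint minimization over (x, s).

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (hypothetical, not from the paper):
#   minimize f(x) = x1^2 + x2^2   subject to   c(x) = 1 - x1 - x2 <= 0
# The known solution is x* = (0.5, 0.5).
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([1.0 - x[0] - x[1]])

def al_solve(f, c, x0, rho=1.0, outer_iters=20):
    """Slack-variable augmented Lagrangian loop (textbook schedule).

    Inequalities c_j(x) <= 0 become equalities c_j(x) + s_j = 0 with
    slacks s_j >= 0.  The joint (x, s) subproblem is handed to a local
    solver here, standing in for the paper's BO sub-solver.
    """
    m = len(c(x0))
    lam = np.zeros(m)
    z = np.concatenate([x0, np.maximum(-c(x0), 0.0)])  # init slacks

    for _ in range(outer_iters):
        def al(z):
            x, s = z[:len(x0)], z[len(x0):]
            r = c(x) + s  # equality residual c(x) + s
            return f(x) + lam @ r + (0.5 / rho) * (r @ r)

        bounds = [(None, None)] * len(x0) + [(0.0, None)] * m  # s >= 0
        z = minimize(al, z, method="L-BFGS-B", bounds=bounds).x

        r = c(z[:len(x0)]) + z[len(x0):]
        lam = lam + r / rho          # first-order multiplier update
        if np.linalg.norm(r) > 1e-4:
            rho *= 0.5               # tighten penalty while infeasible
    return z[:len(x0)], lam

x_star, lam_star = al_solve(f, c, x0=np.array([2.0, 0.0]))
```

Because the slack turns every constraint into an equality, the inner AL is smooth in (x, s); this is the structural property the paper exploits so that EI over the composite can be evaluated in closed form rather than by Monte Carlo.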
