

Poster

Gaussian Process Bandit Optimisation with Multi-fidelity Evaluations

Kirthevasan Kandasamy · Gautam Dasarathy · Junier B Oliva · Jeff Schneider · Barnabas Poczos

Area 5+6+7+8 #122

Keywords: [ Bandit Algorithms ] [ Gaussian Processes ]


Abstract: In many scientific and engineering applications, we are tasked with optimising an expensive-to-evaluate black-box function $f$. Traditional methods for this problem assume access to only this single function. In many cases, however, cheap approximations to $f$ are available; for example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. These approximations can be used to cheaply eliminate regions of low function value, so that the expensive evaluations of $f$ are concentrated in a small but promising region and the optimum is identified quickly. We formalise this task as a multi-fidelity bandit problem in which the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. Our theoretical analysis shows that it exhibits precisely the behaviour described above and achieves better regret than strategies that ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.
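The sketch below illustrates the upper-confidence-bound idea with two fidelities, not the authors' exact MF-GP-UCB rules. It assumes a hypothetical 1-D target `f_hi`, a cheap biased approximation `f_lo`, a bias bound `zeta`, and a variance threshold `gamma` for escalating to the expensive fidelity; all of these names and values are illustrative choices, and the GP models come from scikit-learn rather than the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical test problem: f_hi is the expensive target,
# f_lo is a cheap approximation with bounded bias (both illustrative).
def f_hi(x):
    return np.sin(5 * x) * np.exp(-x)

def f_lo(x):
    return f_hi(x) + 0.2 * np.cos(3 * x)

grid = np.linspace(0.0, 2.0, 500).reshape(-1, 1)
beta = 2.0    # exploration weight in the UCB
zeta = 0.25   # assumed bound on |f_hi - f_lo| (fidelity bias)
gamma = 0.05  # posterior-std threshold for switching to the high fidelity

data = {"lo": ([], []), "hi": ([], [])}
models = {m: GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
          for m in data}

def posterior(m):
    """Posterior mean and std on the grid; fall back to the prior early on."""
    X, y = data[m]
    if len(X) < 2:
        return np.zeros(len(grid)), np.ones(len(grid))
    models[m].fit(np.array(X).reshape(-1, 1), np.array(y))
    return models[m].predict(grid, return_std=True)

for t in range(30):
    mu_lo, sd_lo = posterior("lo")
    mu_hi, sd_hi = posterior("hi")
    # The low fidelity informs us about f_hi only up to the bias zeta,
    # so combine the two upper bounds by taking the tighter one.
    ucb = np.minimum(mu_lo + beta * sd_lo + zeta, mu_hi + beta * sd_hi)
    i = int(np.argmax(ucb))
    x_next = float(grid[i, 0])
    # Query the cheap fidelity while it is still uncertain at x_next;
    # once it is well resolved there, spend an expensive evaluation.
    fid = "lo" if sd_lo[i] > gamma else "hi"
    y = (f_lo if fid == "lo" else f_hi)(x_next) + 0.01 * rng.standard_normal()
    data[fid][0].append(x_next)
    data[fid][1].append(y)

best = max(zip(data["hi"][0], data["hi"][1]), key=lambda p: p[1], default=None)
print("best high-fidelity query so far:", best)
```

Most of the budget here goes to `f_lo`, which rules out regions of low value cheaply, while the few `f_hi` queries cluster near the maximiser, which is the behaviour the abstract describes.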
