
Fantasizing with Dual GPs in Bayesian Optimization and Active Learning
Paul Chang · Prakhar Verma · ST John · Victor Picheny · Henry Moss · Arno Solin

Gaussian Processes (GPs) are popular surrogate models for sequential decision-making tasks such as Bayesian Optimization and Active Learning. Such frameworks often exploit well-known cheap methods for conditioning a GP posterior on new data. However, these standard methods cannot be applied to popular but more complex models, such as sparse GPs or models with non-conjugate likelihoods, due to the lack of such update formulas. Using an alternative sparse Dual GP parameterization, we show that these costly computations can be avoided, whilst enjoying one-step updates for non-Gaussian likelihoods. The resulting algorithms allow for cheap batch formulations that work with most acquisition functions.
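The "cheap conditioning" the abstract refers to is, in the exact-GP case, a rank-one Gaussian update: the posterior after adding one (possibly fantasized) observation can be obtained from the current posterior without refitting from scratch. Below is a minimal NumPy sketch of that identity (the kernel, data, and helper names are illustrative, not from the paper), verifying that the rank-one update matches a full refit:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=1.0):
    # Squared-exponential (RBF) kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def posterior(X, y, Xq, noise=0.1):
    # Exact GP regression: posterior mean and covariance of f at query points Xq.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = rbf(Xq, Xq) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

# Training data and one candidate "fantasized" observation (x_new, y_new).
X = rng.uniform(-3, 3, 7)
y = np.sin(X) + 0.1 * rng.standard_normal(7)
x_new, y_new = np.array([0.5]), np.array([0.4])
noise = 0.1
Xq = np.linspace(-3, 3, 5)

# (a) Naive route: refit on the augmented dataset -- O(n^3) per fantasy.
m_full, C_full = posterior(np.concatenate([X, x_new]),
                           np.concatenate([y, y_new]), Xq, noise)

# (b) Cheap route: rank-one conditioning of the *current* posterior on the
#     single new point, as used when fantasizing batch points in BO.
Xall = np.concatenate([Xq, x_new])
m_all, C_all = posterior(X, y, Xall, noise)
m_q, m_n = m_all[:5], m_all[5]
C_qq, c_qn, c_nn = C_all[:5, :5], C_all[:5, 5], C_all[5, 5]
gain = c_qn / (c_nn + noise)           # posterior-covariance "Kalman gain"
m_upd = m_q + gain * (y_new[0] - m_n)  # updated mean at Xq
C_upd = C_qq - np.outer(gain, c_qn)    # updated covariance at Xq

assert np.allclose(m_upd, m_full)
assert np.allclose(C_upd, C_full)
```

This identity only holds for conjugate (Gaussian-likelihood) exact GPs; the paper's contribution is precisely that the dual parameterization recovers comparably cheap one-step updates for sparse GPs and non-Gaussian likelihoods, where no such closed-form formula exists.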

Author Information

Paul Chang (Aalto University)

A machine learning researcher in Arno Solin's group at Aalto University, working on probabilistic modelling, specifically Gaussian Processes and methods to speed up inference.

Prakhar Verma (Aalto University)
ST John (Aalto University & Finnish Center for Artificial Intelligence)
Victor Picheny (Prowler)
Henry Moss (Secondmind)

I am a Senior Machine Learning Researcher at Secondmind (formerly PROWLER.io). I leverage information-theoretic arguments to provide efficient, reliable and scalable Bayesian optimisation for problems inspired by science and the automotive industry.

Arno Solin (Aalto University)
