

Tutorial

Advances in Gaussian Processes

Carl Edward Rasmussen

Regency E

Abstract:

Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. Although these models have a long history in statistics, their potential has only become widely appreciated in the machine learning community during the past decade. This tutorial will introduce GPs, their application to regression and classification, and outline recent computational developments. GPs are a natural framework for Bayesian inference about functions, providing full predictive distributions and a principled framework for inference, including model selection. The prior over functions is given in a hierarchical form, where the covariance function (or kernel) controls the properties of the functions in a way that allows interpretation of the model. Whereas inference in the simplest regression case can be done in closed form, inference in classification models is intractable. Several approximations have been proposed, e.g., the Expectation Propagation algorithm. A central limitation in the applicability of GPs to problems with large numbers of examples is that naïve implementations scale with the square and cube of the number of examples for memory and time, respectively, making direct treatment of more than a few thousand cases inconvenient. Recent work on sparse approximations addresses these issues.
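
As a concrete illustration of the closed-form regression case mentioned above, the sketch below implements GP regression with a squared-exponential covariance function in plain NumPy, following the standard predictive equations. The kernel hyperparameters, noise level, and toy data are illustrative assumptions, not values from the tutorial; the Cholesky factorisation of the n-by-n covariance matrix is the O(n^3)-time, O(n^2)-memory step that motivates the sparse approximations discussed in the abstract.

```python
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) covariance between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_regression(X, y, X_star, noise_var=0.1):
    """Closed-form GP predictive mean and (latent) variance at test inputs X_star."""
    K = sq_exp_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)                      # O(n^3) time, O(n^2) memory
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^{-1} y
    K_star = sq_exp_kernel(X, X_star)
    mean = K_star.T @ alpha                        # predictive mean
    v = np.linalg.solve(L, K_star)
    var = np.diag(sq_exp_kernel(X_star, X_star)) - np.sum(v**2, axis=0)
    return mean, var                               # noise-free predictive variance

# Toy usage (hypothetical data): noisy samples from a sine function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
X_star = np.linspace(-3, 3, 5)[:, None]
mu, var = gp_regression(X, y, X_star)
print(mu, var)
```

The two triangular solves could use scipy.linalg.solve_triangular for efficiency; np.linalg.solve is used here only to keep the dependency to NumPy alone, and the result is the same.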
