Moderators: Fariba Yousefi · ST John
Gaussian processes (GPs) are Bayesian nonparametric models for continuous functions which allow for uncertainty quantification, interpretability, and the incorporation of expert knowledge. The theory and practice of GPs have flourished in the last decade: researchers have studied the expressiveness and efficiency of GP-based models, and practitioners have applied them to a plethora of disciplines. This tutorial presents both the foundational theory and modern developments of data modelling using GPs, following step-by-step intuitions, illustrations and real-world examples. The tutorial starts with an emphasis on the building blocks of the GP model, then moves on to the choice of the kernel function, cost-effective training strategies and non-Gaussian extensions. The second part of the tutorial showcases more recent advances, such as latent variable models, deep GPs, current trends in kernel design, and connections between GPs and deep neural networks. We hope that this exposition, featuring classic and contemporary GP works, inspires attendees to incorporate GPs into their applications and motivates them to continue learning about and contributing to current developments in the field.
Mon 9:00 a.m. - 9:05 a.m. | Live Intro (Intro)
Welcome message, presentation of the speakers and a brief overview of the talk.
Felipe Tobar · César Lincoln Mattos
Mon 9:05 a.m. - 9:20 a.m. | Fundamentals of Bayesian Inference (Talk)
We revisit the main components of Bayesian inference underpinning the treatment of the Gaussian process; Bayes' rule, recalled below, is the central identity. Adopting an illustrative approach, we review statistical models, the approaches to learning the model parameters, and the effect of the parameters' cardinality.
Felipe Tobar
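For reference, this session rests on the standard Bayes' rule for a model with parameters theta and observed data y; the notation here is generic and ours, not necessarily that of the slides:

```latex
\underbrace{p(\theta \mid y)}_{\text{posterior}}
  = \frac{\overbrace{p(y \mid \theta)}^{\text{likelihood}}\;\overbrace{p(\theta)}^{\text{prior}}}
         {\underbrace{p(y)}_{\text{evidence}}},
\qquad
p(y) = \int p(y \mid \theta)\, p(\theta)\, \mathrm{d}\theta .
```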
Mon 9:20 a.m. - 9:35 a.m. | From One to Finitely-Many Gaussian RVs (Talk)
We motivate the use of Gaussian random variables (RVs) from conceptual and practical perspectives. We examine the main properties of the Gaussian model and show how Gaussian RVs can be combined to construct Gaussian vectors; the conditioning identity below is the key property.
Felipe Tobar
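The property that makes Gaussian vectors so convenient, and which this session builds towards, is that conditioning preserves Gaussianity. In our (generic) notation, for a jointly Gaussian pair:

```latex
\begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix}
\sim \mathcal{N}\!\left(
\begin{bmatrix} \boldsymbol{\mu}_1 \\ \boldsymbol{\mu}_2 \end{bmatrix},
\begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}
\right)
\;\Rightarrow\;
\mathbf{x}_1 \mid \mathbf{x}_2
\sim \mathcal{N}\!\left(
\boldsymbol{\mu}_1 + \Sigma_{12}\Sigma_{22}^{-1}(\mathbf{x}_2 - \boldsymbol{\mu}_2),\;
\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}
\right).
```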
Mon 9:35 a.m. - 9:50 a.m. | Q&A
Mon 9:50 a.m. - 10:10 a.m. | Infinitely-Many Gaussian RVs: The Gaussian Process (Talk)
We study the covariance function and its marginalisation properties to motivate the existence of an infinite collection of Gaussian RVs, that is, the Gaussian process. We then present the prior and posterior GP in an illustrative manner; the standard posterior equations are reproduced below.
Felipe Tobar
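For noisy observations y = f(X) + eps with eps ~ N(0, sigma^2 I) and a GP prior f ~ GP(0, k), the posterior at a test input x is again Gaussian. This is the textbook result the session illustrates (notation ours):

```latex
\mu_*(x) = k(x, X)\,\bigl[K + \sigma^2 I\bigr]^{-1} \mathbf{y},
\qquad
k_*(x, x') = k(x, x') - k(x, X)\,\bigl[K + \sigma^2 I\bigr]^{-1} k(X, x'),
```

where K = k(X, X) denotes the kernel (Gram) matrix over the training inputs.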
Mon 10:10 a.m. - 10:25 a.m. | Choosing the Kernel and Learning its Parameters (Talk)
Building on the use of a Gaussian likelihood, we present a learning objective (the log marginal likelihood, stated below) and address the problem of choosing a kernel and optimising its parameters. We review different kernel functions and their properties.
Felipe Tobar
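Under a Gaussian likelihood, the learning objective referred to here is, in the standard formulation (notation ours), the log marginal likelihood of the n observations y given inputs X and kernel hyperparameters theta:

```latex
\log p(\mathbf{y} \mid X, \theta)
= -\tfrac{1}{2}\, \mathbf{y}^{\top} \bigl(K_{\theta} + \sigma^2 I\bigr)^{-1} \mathbf{y}
  \;-\; \tfrac{1}{2} \log \bigl| K_{\theta} + \sigma^2 I \bigr|
  \;-\; \tfrac{n}{2} \log 2\pi,
```

maximised over theta (and the noise variance sigma^2), trading data fit against model complexity.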
Mon 10:25 a.m. - 10:35 a.m. | Implementation of a GP (Talk)
We show how to implement a vanilla GP from scratch, covering the particulars of coding the kernel, the likelihood function and the train() function; a minimal sketch in the same spirit follows below. We apply this minimal GP toolbox to a couple of datasets to illustrate the GP's ease of use and modelling abilities.
Felipe Tobar
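The following numpy-only sketch shows what such a from-scratch implementation can look like. It is our illustration, not the presenters' toolbox; the names rbf and gp_posterior are ours, and hyperparameter learning is omitted for brevity.

```python
import numpy as np

def rbf(xa, xb, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1, **kern):
    """Posterior mean and covariance of a zero-mean GP at x_test."""
    K = rbf(x_train, x_train, **kern) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_train, x_test, **kern)
    Kss = rbf(x_test, x_test, **kern)
    L = np.linalg.cholesky(K)                       # K = L L^T, for stable solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^{-1} y
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, cov

# Toy usage: noisy sine observations.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 20)
y = np.sin(x) + 0.1 * rng.standard_normal(20)
xs = np.linspace(-3, 3, 100)
mu, cov = gp_posterior(x, y, xs)
std = np.sqrt(np.diag(cov))                         # pointwise predictive uncertainty
```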
Mon 10:35 a.m. - 10:50 a.m. | Q&A
Mon 10:50 a.m. - 11:00 a.m. | Break
Mon 11:00 a.m. - 11:15 a.m. | Beyond Gaussian Likelihood (Talk)
We expand the applicability of GP models by introducing non-Gaussian likelihoods. We show why inference becomes intractable (see the classification example below) and how it can be approximated. Using illustrations, we focus on the binary classification learning scenario, warping functions, and heteroscedastic models.
César Lincoln Mattos
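A concrete instance of the intractability discussed here: in binary classification with labels y_i in {0, 1}, a common choice squashes the latent GP through a sigmoid, giving a Bernoulli likelihood (notation ours):

```latex
p(y_i \mid f_i) = \sigma(f_i)^{\,y_i}\,\bigl(1 - \sigma(f_i)\bigr)^{\,1 - y_i},
\qquad
p(\mathbf{y}) = \int \Bigl[\textstyle\prod_{i} p(y_i \mid f_i)\Bigr]\,
\mathcal{N}\bigl(\mathbf{f} \mid \mathbf{0}, K\bigr)\, \mathrm{d}\mathbf{f}.
```

Because the Bernoulli terms are not Gaussian in f, neither the evidence p(y) nor the posterior over f has a closed form, which motivates approximate inference.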
Mon 11:15 a.m. - 11:37 a.m. | Sparse Approximations (Talk)
Standard GP inference scales poorly with the number of observations (cubically in the exact case); in this section we present how sparse approximations enable the use of GPs for large datasets. We consider, in particular, the variational approach based on inducing variables, which results in tractable approximations for performing predictions and model selection via optimisation of the evidence lower bound (ELBO); a classic form of this bound is reproduced below.
César Lincoln Mattos
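For Gaussian-likelihood regression with m inducing inputs, the collapsed variational bound of Titsias (2009), of which this session's ELBO is the classic instance, reads (notation ours):

```latex
\log p(\mathbf{y})
\;\geq\;
\log \mathcal{N}\!\bigl(\mathbf{y} \mid \mathbf{0},\; Q_{nn} + \sigma^2 I\bigr)
\;-\; \frac{1}{2\sigma^2}\,\mathrm{tr}\bigl(K_{nn} - Q_{nn}\bigr),
\qquad
Q_{nn} = K_{nm} K_{mm}^{-1} K_{mn},
```

reducing the cost from the exact O(n^3) to O(n m^2).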
Mon 11:37 a.m. - 11:52 a.m. | Q&A
Mon 11:52 a.m. - 12:05 p.m. | Current Trends in Kernel Design (Talk)
We highlight some recent advances in kernel design that enhance the use of GP models in complex data settings. From automatic composition of simple kernels (sketched below) to deep kernel learning and (graph) convolutional kernels, the works reviewed in this section indicate how fast-paced research on kernel design is.
César Lincoln Mattos
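As a small illustration of the compositional idea mentioned here (our code, not the presenters'): sums and products of valid kernels are themselves valid kernels, which is the building block behind automatic kernel composition.

```python
import numpy as np

# Two base kernels on 1-D inputs (shapes: xa (n,), xb (m,) -> (n, m)).
def rbf(xa, xb, lengthscale=1.0):
    return np.exp(-0.5 * (xa[:, None] - xb[None, :])**2 / lengthscale**2)

def periodic(xa, xb, period=1.0, lengthscale=1.0):
    d = np.abs(xa[:, None] - xb[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period)**2 / lengthscale**2)

# Compositions: sums and products of kernels are again valid kernels.
k_trend_plus_season = lambda xa, xb: rbf(xa, xb, 5.0) + periodic(xa, xb)  # smooth trend + seasonality
k_locally_periodic  = lambda xa, xb: rbf(xa, xb, 5.0) * periodic(xa, xb)  # periodicity that decays
```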
Mon 12:05 p.m. - 12:27 p.m. | From GPLVM to Deep GPs (Talk)
We consider the unsupervised setting, more specifically the task of nonlinear dimensionality reduction. We summarise the GP latent variable model (GPLVM) and indicate how it can be used as a building block for other generative models, such as the deep GP (sketched schematically below) and other flexible probabilistic models.
César Lincoln Mattos
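Schematically, and in our generic notation: the GPLVM places a GP prior on the mapping from unobserved latent inputs to data, and a deep GP composes several such mappings:

```latex
\text{GPLVM:}\quad
\mathbf{y}_n = f(\mathbf{x}_n) + \boldsymbol{\epsilon}_n,\;\;
f \sim \mathcal{GP},\;\;
\mathbf{x}_n \sim \mathcal{N}(\mathbf{0}, I);
\qquad
\text{deep GP:}\quad
f = f_L \circ \cdots \circ f_1,\;\;
f_\ell \sim \mathcal{GP}(0, k_\ell).
```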
Mon 12:27 p.m. - 12:42 p.m. | Multioutput GPs (Talk)
We motivate the need for multioutput GPs (MOGPs) and show how they can be constructed by mixing independent GPs, as in the construction below. Then, we review standard approaches to covariance design and their implications. We conclude this section with real-world examples.
Felipe Tobar
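One standard way of mixing independent GPs is the linear model of coregionalisation (notation ours): each of the D outputs is a weighted sum of Q shared latent GPs,

```latex
f_d(x) = \sum_{q=1}^{Q} a_{d,q}\, u_q(x),\quad
u_q \sim \mathcal{GP}(0, k_q)\ \text{independent}
\;\Rightarrow\;
\mathrm{cov}\bigl(f_d(x), f_{d'}(x')\bigr)
= \sum_{q=1}^{Q} a_{d,q}\, a_{d',q}\, k_q(x, x'),
```

so the outputs become correlated through the shared latent processes u_q, with the mixing weights a_{d,q} defining the cross-output covariance.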
Mon 12:42 p.m. - 12:47 p.m. | Concluding Remarks (Talk)
Final remarks: the take-home message, what has been left out of the tutorial, and proposed avenues for further study.
Felipe Tobar · César Lincoln Mattos
Mon 12:47 p.m. - 1:00 p.m. | Q&A