

Tutorial

Exact Approximate Learning

Paul Fearnhead

Emerald Bay A, Harveys Convention Center Floor (CC)

Abstract:

There are many natural approximations that can be used within statistical learning. For example, in MCMC we could use a numerical or Monte Carlo approximation to the acceptance probability in cases where the target distribution cannot be written down (even up to a constant of proportionality). Or, when sampling from an infinite-dimensional distribution, for example in Bayesian non-parametrics, we could use a finite-dimensional approximation (e.g. by truncating the tail of the true distribution). Recent work has shown that, in some cases, we can make these "approximations" and yet the underlying methods will still be "exact": our MCMC algorithm will still have the correct target distribution, or we will still be drawing samples from the true infinite-dimensional distribution.
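To make the first idea concrete, here is a minimal sketch (not from the tutorial itself) of the approach named in the next paragraph as the pseudo-marginal method: the intractable likelihood in the Metropolis-Hastings acceptance ratio is replaced by an unbiased importance-sampling estimate, and because the estimate for the current state is stored and reused rather than refreshed, the chain still targets the exact posterior. The toy latent-variable model, flat prior, step size, and particle count are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def loglik_estimate(theta, y, n_particles=100):
    """Unbiased importance-sampling estimate of an intractable likelihood.
    Toy model (an assumption for illustration): y ~ N(theta + Z, 1) with
    latent Z ~ N(0, 1); the true likelihood would integrate Z out."""
    z = rng.standard_normal(n_particles)             # draws from the latent prior
    w = np.exp(-0.5 * (y - theta - z) ** 2) / np.sqrt(2 * np.pi)
    return np.log(np.mean(w))                        # log of an unbiased estimate

def pseudo_marginal_mh(y, n_iters=5000, step=0.5):
    """Metropolis-Hastings in which the acceptance ratio uses an *estimated*
    likelihood. The estimate at the current state is recycled, not refreshed,
    so the chain targets the exact posterior over theta despite the noise."""
    theta = 0.0
    log_lik = loglik_estimate(theta, y)              # estimate for the current state
    samples = []
    for _ in range(n_iters):
        theta_prop = theta + step * rng.standard_normal()
        log_lik_prop = loglik_estimate(theta_prop, y)
        # Flat prior assumed, so the ratio involves only the likelihood estimates.
        if np.log(rng.uniform()) < log_lik_prop - log_lik:
            theta, log_lik = theta_prop, log_lik_prop  # estimate travels with the state
        samples.append(theta)
    return np.array(samples)

samples = pseudo_marginal_mh(y=1.3)
print(samples.mean())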

Informally, the key idea behind these "exact approximate" methods is that we are able to randomise the approximation so as to average it away. This tutorial will cover the two main examples of "exact approximate" methods: the pseudo-marginal approach and retrospective sampling. The ideas will be demonstrated on examples taken from Bayesian non-parametrics, changepoint detection and diffusions.
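And a minimal sketch of the second method, retrospective sampling, here applied to drawing from a Dirichlet-process random measure via stick breaking (again illustrative assumptions throughout: the standard normal base measure, the concentration parameter, and the class name are not the tutorial's own code). Sticks are instantiated lazily, only when a uniform variate falls beyond the weight accounted for so far, so each draw comes from the exact infinite-dimensional measure with no truncation.

import numpy as np

rng = np.random.default_rng(1)

class LazyDirichletProcess:
    """Retrospective (lazy) sampler for a Dirichlet-process random measure
    via stick breaking. Only finitely many sticks are ever needed for any
    finite number of draws, so no truncation of the tail is required."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.atoms = []      # atom locations theta_k ~ G0 = N(0, 1), an assumption
        self.cum = []        # cumulative weights of the sticks generated so far

    def _extend(self):
        """Break one more stick, extending the finite prefix of the measure."""
        v = rng.beta(1.0, self.alpha)                  # V_k ~ Beta(1, alpha)
        prev = self.cum[-1] if self.cum else 0.0
        self.atoms.append(rng.standard_normal())       # draw the atom from G0
        self.cum.append(prev + v * (1.0 - prev))       # w_k = V_k * prod_{j<k}(1 - V_j)

    def draw(self):
        """Exact draw: locate the stick interval containing a Uniform(0,1)
        variate, generating new sticks retrospectively only when needed."""
        u = rng.uniform()
        k = 0
        while True:
            if k == len(self.cum):
                self._extend()
            if u < self.cum[k]:
                return self.atoms[k]
            k += 1

dp = LazyDirichletProcess(alpha=2.0)
print([round(dp.draw(), 3) for _ in range(5)])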
