There are many natural approximations that can be used within statistical learning. For example, in MCMC we could use a numerical or Monte Carlo approximation to the acceptance probability in cases where the target distribution cannot be written down (even up to a constant of proportionality). Or when sampling from an infinite-dimensional distribution, for example in Bayesian non-parametrics, we can use a finite-dimensional approximation (e.g. by truncating the tail of the true distribution). Recent work has shown that, in some cases, we can make these "approximations" and yet the underlying methods will still be "exact". So our MCMC algorithm will still have the correct target distribution, or we will still be drawing samples from the true infinite-dimensional distribution.
Informally, the key idea behind these "exact approximate" methods is that we are able to randomise the approximation so as to average it away. This tutorial will cover the two main examples of "exact approximate" methods: the pseudo-marginal approach and retrospective sampling. The ideas will be demonstrated on examples taken from Bayesian non-parametrics, changepoint detection and diffusions.
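The pseudo-marginal idea can be sketched in a few lines: if the intractable likelihood is replaced inside Metropolis-Hastings by a non-negative unbiased estimate, and that estimate is carried along with the state rather than recomputed, the chain still targets the exact posterior. The sketch below is purely illustrative and not taken from the tutorial: the "intractable" likelihood is a standard normal density, the unbiased estimator is the true density multiplied by averaged mean-one exponential noise, and a flat prior is assumed.

```python
import math
import random

def unbiased_lik_estimate(theta, n_particles=32):
    # Toy stand-in for an intractable likelihood: the true value is the
    # N(0,1) density at theta; we return it multiplied by an average of
    # positive mean-one noise terms, giving a non-negative unbiased estimate.
    true_lik = math.exp(-0.5 * theta ** 2) / math.sqrt(2 * math.pi)
    noise = sum(random.expovariate(1.0) for _ in range(n_particles)) / n_particles
    return true_lik * noise

def pseudo_marginal_mh(n_iters=5000, step=1.0):
    theta = 0.0
    lik_hat = unbiased_lik_estimate(theta)  # store the estimate with the state
    samples = []
    for _ in range(n_iters):
        prop = theta + random.gauss(0.0, step)
        prop_lik_hat = unbiased_lik_estimate(prop)
        # Accept using the ratio of *estimated* likelihoods (flat prior).
        # Crucially, the current estimate lik_hat is reused, never refreshed:
        # this is what makes the randomised approximation average away exactly.
        if random.random() < prop_lik_hat / lik_hat:
            theta, lik_hat = prop, prop_lik_hat
        samples.append(theta)
    return samples
```

Despite every acceptance decision using noisy likelihood values, the stationary distribution of the chain is exactly N(0, 1); the noise only affects mixing speed.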
Paul Fearnhead (Lancaster University)
Paul Fearnhead is Professor of Statistics at Lancaster University. He received his DPhil in Statistics from the University of Oxford in 1998; was a postdoctoral researcher at the University of Oxford until 2001; and then moved to Lancaster University, initially as a Lecturer in Statistics. He has worked on Monte Carlo methods within Bayesian statistics, including applications in population genetics, changepoint detection and inference for diffusions. He was awarded the Royal Statistical Society's Guy Medal in Bronze in 2007, and Cambridge University's Adams Prize in 2006.
More from the Same Authors
2018 Poster: Large-Scale Stochastic Sampling from the Probability Simplex
Jack Baker · Paul Fearnhead · Emily Fox · Christopher Nemeth