

Poster

Gradient-based Adaptive Markov Chain Monte Carlo

Michalis Titsias · Petros Dellaportas

East Exhibition Hall B + C #179

Keywords: [ Optimization -> Stochastic Optimization ] [ Probabilistic Methods ] [ Variational Inference ] [ MCMC ]


Abstract:

We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC) proposal distributions to intractable targets. We define a maximum-entropy-regularised objective function, referred to as the generalised speed measure, which can be robustly optimised over the parameters of the proposal distribution by stochastic gradient optimisation. An advantage of our method over traditional adaptive MCMC methods is that adaptation occurs even when candidate state values are rejected. This is a highly desirable property of any adaptation strategy, because adaptation begins in early iterations even if the initial proposal distribution is far from optimal. We apply the framework to learn multivariate random walk Metropolis and Metropolis-adjusted Langevin proposals with full covariance matrices, and provide empirical evidence that our method can outperform other MCMC algorithms, including Hamiltonian Monte Carlo schemes.
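To make the idea concrete, the sketch below adapts a random walk Metropolis proposal by stochastic gradient ascent on an entropy-regularised acceptance objective, with a gradient step taken at every iteration, whether or not the candidate is accepted. It is an illustrative interpretation of the abstract, not the paper's algorithm: the 2-D Gaussian target, the surrogate objective `min(0, log pi(y) - log pi(x)) + beta * sum(s)`, the diagonal (rather than full-covariance) proposal scales, and the constants `eta` and `beta` are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example target (assumed for illustration): correlated 2-D Gaussian.
COV = np.array([[1.0, 0.9], [0.9, 1.0]])
PREC = np.linalg.inv(COV)

def log_target(x):
    return -0.5 * x @ PREC @ x

def grad_log_target(x):
    return -PREC @ x

def adaptive_rwm(n_iters=20000, eta=0.01, beta=1.0):
    """Random walk Metropolis with gradient-adapted diagonal proposal scales.

    Per-iteration surrogate objective (an assumption, not the paper's exact
    generalised speed measure):
        min(0, log pi(y) - log pi(x)) + beta * sum(s),
    i.e. the log acceptance probability plus an entropy bonus on the
    log-scales s of the Gaussian proposal.
    """
    d = 2
    x = np.zeros(d)
    lp_x = log_target(x)
    s = np.zeros(d)                # log proposal standard deviations
    samples, accepts = [], 0

    for _ in range(n_iters):
        eps = rng.standard_normal(d)
        y = x + np.exp(s) * eps    # reparameterised proposal draw
        lp_y = log_target(y)
        log_alpha = min(0.0, lp_y - lp_x)

        # Gradient of the surrogate w.r.t. s, available whether or not the
        # proposal is accepted: the log-alpha term flows through y = x +
        # exp(s) * eps via the chain rule; the entropy term contributes beta.
        grad_s = beta * np.ones(d)
        if lp_y - lp_x < 0.0:
            grad_s += grad_log_target(y) * np.exp(s) * eps
        s += eta * grad_s          # stochastic gradient ascent step

        if np.log(rng.uniform()) < log_alpha:
            x, lp_x = y, lp_y
            accepts += 1
        samples.append(x.copy())

    return np.array(samples), np.exp(s), accepts / n_iters

if __name__ == "__main__":
    samples, scales, acc_rate = adaptive_rwm()
    print("learned proposal scales:", scales)
    print("acceptance rate:", acc_rate)
```

Note that the gradient step uses the rejected candidate's information (through `grad_log_target(y)`), which is one reading of the property highlighted in the abstract: the proposal keeps adapting even while most candidates are being rejected early in the run.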
