Poster
Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo
Alain Durmus · Umut Simsekli · Eric Moulines · Roland Badeau · Gaël Richard

Mon Dec 05 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #124

Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) algorithms have become increasingly popular for Bayesian inference in large-scale applications. Even though these methods have proved useful in several scenarios, their performance is often limited by their bias. In this study, we propose a novel sampling algorithm that aims to reduce the bias of SG-MCMC while keeping the variance at a reasonable level. Our approach is based on a numerical sequence acceleration method, namely Richardson-Romberg extrapolation, which boils down to running almost the same SG-MCMC algorithm twice in parallel with different step sizes. We illustrate our framework on the popular Stochastic Gradient Langevin Dynamics (SGLD) algorithm and propose a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD). We provide a formal theoretical analysis and show that SGRRLD is asymptotically consistent, satisfies a central limit theorem, and that its non-asymptotic bias and mean squared error can be bounded. Our results show that SGRRLD attains higher rates of convergence than SGLD both in finite time and asymptotically, and achieves the theoretical accuracy of methods based on higher-order integrators. We support our findings with both synthetic and real data experiments.
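The extrapolation idea is easy to picture concretely. Below is a minimal illustrative sketch in Python (not code from the paper): two SGLD chains on a toy 1-D Gaussian model are run with step sizes γ and γ/2, and their ergodic averages are combined as 2·(γ/2 average) − (γ average) to cancel the leading-order bias. The model, step sizes, chain lengths, and the independence of the two chains are assumptions made for illustration; the paper's SGRRLD runs the two chains in parallel and includes further details (e.g., how the chains are coupled) not captured here.

```python
import numpy as np

# Illustrative sketch: Richardson-Romberg extrapolation applied to SGLD on a
# toy 1-D Gaussian model (posterior mean estimation). Model, step sizes, and
# chain lengths are assumptions for illustration, not the paper's experiments.

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=1000)   # synthetic observations
N, batch = len(data), 50

def stoch_grad(theta):
    """Unbiased stochastic gradient of the negative log-posterior
    (Gaussian likelihood with unit variance, flat prior)."""
    idx = rng.integers(0, N, size=batch)
    return N * np.mean(theta - data[idx])

def sgld_average(gamma, n_iter):
    """Run SGLD with step size gamma; return the ergodic average of theta."""
    theta, total = 0.0, 0.0
    for _ in range(n_iter):
        noise = rng.normal(scale=np.sqrt(2.0 * gamma))
        theta = theta - gamma * stoch_grad(theta) + noise
        total += theta
    return total / n_iter

gamma, n_iter = 1e-3, 20000
coarse = sgld_average(gamma, n_iter)        # chain with step size gamma
fine = sgld_average(gamma / 2, 2 * n_iter)  # chain with step size gamma / 2

# Richardson-Romberg extrapolation: this linear combination cancels the
# leading O(gamma) bias shared by the two SGLD estimators.
sgrrld_estimate = 2.0 * fine - coarse
print("SGLD (gamma):      ", coarse)
print("SGLD (gamma/2):    ", fine)
print("SGRRLD (combined): ", sgrrld_estimate)
```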

Author Information

Alain Durmus (Telecom ParisTech)
Umut Simsekli (Inria Paris / ENS)
Eric Moulines (Ecole Polytechnique)
Roland Badeau (Telecom ParisTech)
Gaël Richard (Telecom ParisTech)