Poster
The Randomized Midpoint Method for Log-Concave Sampling
Ruoqi Shen · Yin Tat Lee
East Exhibition Hall B, C #163
Keywords: [ Optimization ] [ Probabilistic Methods ] [ MCMC ]
Abstract:
Sampling from log-concave distributions is a well-researched problem
with many applications in statistics and machine learning. We study
distributions of the form p* ∝ exp(−f(x)), where f: R^d → R has an
L-Lipschitz gradient and is m-strongly convex. We propose a Markov
chain Monte Carlo (MCMC) algorithm based on the underdamped Langevin
diffusion (ULD). It achieves ε·D error (in 2-Wasserstein distance) in
Õ(κ^{7/6}/ε^{1/3} + κ/ε^{2/3}) steps, where D := √(d/m) is the
effective diameter of the problem and κ := L/m is the condition
number. Our algorithm is significantly faster than the previously best
known algorithm for this problem, which requires Õ(κ^{1.5}/ε) steps
[chen2019optimal, dalalyan2018sampling]. Moreover, our algorithm can
be easily parallelized to require only O(κ log(1/ε)) parallel steps.
To solve the sampling problem, we propose a new framework for
discretizing stochastic differential equations. We apply this
framework to discretize and simulate ULD, which converges to the
target distribution p*. The framework applies not only to the
log-concave sampling problem but to any problem that involves
simulating (stochastic) differential equations.
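To illustrate the core idea of the randomized-midpoint discretization, here is a minimal sketch for the deterministic gradient flow dx/dt = −∇f(x): sample a uniformly random point inside each step, take an Euler estimate of the state there, and use the gradient at that random midpoint for the full step. This is a simplified toy, not the paper's algorithm; the actual method is applied to the underdamped Langevin diffusion and also integrates the Brownian-motion terms, which are omitted here. The function names and the quadratic test problem are illustrative choices.

```python
import numpy as np

def randomized_midpoint_step(x, grad_f, h, rng):
    """One randomized-midpoint step for the ODE dx/dt = -grad_f(x).

    Sample alpha ~ Uniform(0, 1), take an Euler estimate of the state at
    the random intermediate time t + alpha*h, then advance the full step
    using the gradient at that midpoint. (The paper's algorithm applies
    this idea to underdamped Langevin, which also carries Brownian-motion
    terms; those are omitted in this toy sketch.)
    """
    alpha = rng.uniform()
    x_mid = x - alpha * h * grad_f(x)   # Euler estimate of x(t + alpha*h)
    return x - h * grad_f(x_mid)        # full step using the midpoint gradient

# Toy usage: gradient flow of the strongly convex quadratic f(x) = 0.5*||x||^2,
# whose exact flow contracts toward the origin.
rng = np.random.default_rng(0)
grad_f = lambda x: x
x = np.ones(3)
for _ in range(200):
    x = randomized_midpoint_step(x, grad_f, h=0.1, rng=rng)
print(np.linalg.norm(x))  # contracts toward 0
```

Because the midpoint time is random, the update is a low-bias estimate of the exact flow over the step, which is what lets the full method use larger steps than a fixed-grid Euler discretization.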