

Poster in Workshop: NeurIPS 2022 Workshop on Score-Based Methods

Convergence in KL and Rényi Divergence of the Unadjusted Langevin Algorithm Using Estimated Score

Kaylee Y. Yang · Andre Wibisono


Abstract: We study the Unadjusted Langevin Algorithm (ULA) for sampling with an estimated score function when the target distribution satisfies a log-Sobolev inequality (LSI), motivated by Score-based Generative Modeling (SGM). We prove convergence in Kullback-Leibler (KL) divergence under a minimal sufficient assumption on the score estimation error, which we call the bounded Moment Generating Function (MGF) assumption. This assumption is weaker than the previous assumption, which requires a finite $L^\infty$ norm of the error. Under the $L^\infty$ error assumption, we also prove convergence in Rényi divergence, which is stronger than KL divergence. On the other hand, under an $L^p$ error assumption for any $1 \leq p < \infty$, which is weaker than the bounded MGF assumption, we show that the stationary distribution of Langevin dynamics with an $L^p$-accurate score estimator can be arbitrarily far from the desired distribution; thus an $L^p$-accurate score estimator alone cannot guarantee convergence. Our results suggest that controlling the mean squared error, which is the form of the loss function commonly used when estimating the score with a neural network, is not enough to guarantee that the sampling algorithm converges; to obtain a theoretical guarantee, we need stronger control over the error in score matching. Although the bounded MGF assumption requires an exponentially decaying error probability, we give an example demonstrating that it is achievable with a Kernel Density Estimation (KDE)-based score estimator.
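
As an illustration of the setting above, the sketch below runs ULA with a KDE-based score estimator in plain NumPy. It is a minimal sketch, not the authors' implementation or their analyzed estimator: the target (a 2-D standard Gaussian), the bandwidth `sigma`, the step size `step`, and the sample sizes are hypothetical choices made only to show the iteration $x_{k+1} = x_k + h\, s(x_k) + \sqrt{2h}\, \xi_k$ with an estimated score $s \approx \nabla \log \pi$.

```python
import numpy as np

def kde_score(x, data, sigma):
    """Score (gradient of log-density) of a Gaussian KDE built from `data`,
    evaluated at a single point `x`. Illustrative estimator only."""
    diffs = data - x                       # (n, d): y_i - x
    sq = np.sum(diffs**2, axis=1)
    w = np.exp(-sq / (2.0 * sigma**2))     # unnormalized Gaussian kernel weights
    w = w / (w.sum() + 1e-12)
    # grad log p_hat(x) = sum_i w_i * (y_i - x) / sigma^2
    return (w @ diffs) / sigma**2

def ula_with_estimated_score(score, x0, step, n_iters, rng):
    """ULA iteration: x_{k+1} = x_k + h * s(x_k) + sqrt(2h) * xi_k."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x + step * score(x) + np.sqrt(2.0 * step) * noise
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical setup: target is a 2-D standard Gaussian; the score is
    # estimated from i.i.d. samples via a KDE with bandwidth sigma.
    data = rng.standard_normal((500, 2))
    sigma, step = 0.5, 0.01
    samples = np.array([
        ula_with_estimated_score(lambda x: kde_score(x, data, sigma),
                                 x0=rng.standard_normal(2),
                                 step=step, n_iters=2000, rng=rng)
        for _ in range(200)
    ])
    print("empirical mean:", samples.mean(axis=0))
    print("empirical variance (diag):", samples.var(axis=0))
```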
