A key task in Bayesian machine learning is sampling from distributions that are specified only up to a partition function (i.e., a constant of proportionality). A prevalent example is sampling from posteriors in parametric models, such as latent-variable generative models. However, sampling (even very approximately) can be #P-hard.
Classical results on sampling (going back to Bakry and Émery) focus on log-concave distributions and show that a natural Markov chain, Langevin diffusion, mixes in polynomial time. However, all log-concave distributions are unimodal, while in practice the distribution of interest very often has multiple modes. In that case, Langevin diffusion suffers from torpid mixing.
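As a minimal illustration (not from the paper), the discretized form of Langevin diffusion, the unadjusted Langevin algorithm, takes a small gradient step on the log-pdf plus Gaussian noise. The step size `eta` and iteration counts below are hypothetical choices; the target here is a standard Gaussian, a log-concave case where the chain mixes quickly.

```python
import numpy as np

def langevin_step(x, grad_log_pdf, eta, rng):
    """One step of discretized Langevin diffusion:
    x <- x + eta * grad log p(x) + sqrt(2*eta) * N(0, I)."""
    noise = rng.standard_normal(np.shape(x))
    return x + eta * grad_log_pdf(x) + np.sqrt(2.0 * eta) * noise

# Target: standard Gaussian, so grad log p(x) = -x (log-concave, unimodal).
rng = np.random.default_rng(0)
x = np.zeros(1)
chain = []
for t in range(20000):
    x = langevin_step(x, lambda z: -z, 0.01, rng)
    chain.append(x[0])

chain = np.array(chain[5000:])   # discard burn-in
mean_est, std_est = chain.mean(), chain.std()
```

After burn-in, `mean_est` and `std_est` should be close to the target's mean 0 and standard deviation 1. On a multimodal target, the same update gets trapped near one mode, which is the torpid-mixing behavior described above.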
We address this problem by combining Langevin diffusion with simulated tempering. The result is a Markov chain that mixes more rapidly by transitioning between different temperatures of the distribution. We analyze this Markov chain for a mixture of (strongly) log-concave distributions of the same shape. In particular, our technique applies to the canonical multimodal distribution: a mixture of Gaussians (of equal variance). Our algorithm efficiently samples from these distributions given only access to the gradient of the log-pdf. To the best of our knowledge, this is the first result that proves fast mixing for multimodal distributions.
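The idea above can be sketched as follows. This is a simplified, hypothetical illustration rather than the paper's algorithm: the chain runs Langevin steps on the tempered density p(x)^beta and occasionally proposes moving to an adjacent temperature level with a Metropolis test. The partition weights of the levels are assumed equal here (the actual algorithm estimates them), and the temperature ladder `betas` and step size `eta` are arbitrary choices. The target is a mixture of two equal-variance Gaussians at -4 and +4.

```python
import numpy as np

def log_pdf(x):
    # log of 0.5*N(x; -4, 1) + 0.5*N(x; +4, 1), up to an additive constant
    a = -0.5 * (x + 4.0) ** 2
    b = -0.5 * (x - 4.0) ** 2
    m = max(a, b)
    return m + np.log(0.5 * np.exp(a - m) + 0.5 * np.exp(b - m))

def grad_log_pdf(x):
    # gradient of log_pdf: responsibility-weighted average of component gradients
    a = -0.5 * (x + 4.0) ** 2
    b = -0.5 * (x - 4.0) ** 2
    m = max(a, b)
    wa, wb = 0.5 * np.exp(a - m), 0.5 * np.exp(b - m)
    return (wa * (-(x + 4.0)) + wb * (-(x - 4.0))) / (wa + wb)

betas = [0.05, 0.2, 0.5, 1.0]   # inverse temperatures; last one is the target
eta = 0.05                      # Langevin step size (hypothetical choice)
rng = np.random.default_rng(1)

x, level = 0.0, 0
samples = []
for t in range(40000):
    beta = betas[level]
    # Langevin step on the tempered density p(x)^beta
    x = x + eta * beta * grad_log_pdf(x) \
          + np.sqrt(2.0 * eta) * rng.standard_normal()
    # Occasionally propose a move to an adjacent temperature level
    if t % 10 == 0:
        new = level + rng.choice([-1, 1])
        if 0 <= new < len(betas):
            # Metropolis ratio for swapping levels (equal partition
            # weights assumed; the real algorithm estimates them)
            log_ratio = (betas[new] - betas[level]) * log_pdf(x)
            if np.log(rng.uniform()) < log_ratio:
                level = new
    if level == len(betas) - 1:  # record only at the target temperature
        samples.append(x)

samples = np.array(samples)
frac_right = float((samples > 0).mean())
```

At high temperature (small beta) the tempered density is nearly flat, so the chain crosses the energy barrier between the two modes; transitioning back down to beta = 1 then yields samples from both modes of the target, which plain Langevin diffusion would not achieve in reasonable time.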
Author Information
Holden Lee (Princeton)
Andrej Risteski (MIT)
Rong Ge (Duke University)
More from the Same Authors
- 2022 Poster: Outlier-Robust Sparse Estimation via Non-Convex Optimization
  Yu Cheng · Ilias Diakonikolas · Rong Ge · Shivam Gupta · Daniel Kane · Mahdi Soltanolkotabi
- 2021 Poster: Understanding Deflation Process in Over-parametrized Tensor Decomposition
  Rong Ge · Yunwei Ren · Xiang Wang · Mo Zhou
- 2021 Poster: A Regression Approach to Learning-Augmented Online Algorithms
  Keerti Anand · Rong Ge · Amit Kumar · Debmalya Panigrahi
- 2021 Poster: Universal Approximation Using Well-Conditioned Normalizing Flows
  Holden Lee · Chirag Pabbaraju · Anish Prasad Sevekari · Andrej Risteski
- 2020 Poster: Beyond Lazy Training for Over-parameterized Tensor Decomposition
  Xiang Wang · Chenwei Wu · Jason Lee · Tengyu Ma · Rong Ge
- 2019 Poster: Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
  Rohith Kuditipudi · Xiang Wang · Holden Lee · Yi Zhang · Zhiyuan Li · Wei Hu · Rong Ge · Sanjeev Arora
- 2019 Poster: The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares
  Rong Ge · Sham Kakade · Rahul Kidambi · Praneeth Netrapalli
- 2018 Poster: On the Local Minima of the Empirical Risk
  Chi Jin · Lydia T. Liu · Rong Ge · Michael Jordan
- 2018 Spotlight: On the Local Minima of the Empirical Risk
  Chi Jin · Lydia T. Liu · Rong Ge · Michael Jordan
- 2018 Poster: Spectral Filtering for General Linear Dynamical Systems
  Elad Hazan · Holden Lee · Karan Singh · Cyril Zhang · Yi Zhang
- 2018 Oral: Spectral Filtering for General Linear Dynamical Systems
  Elad Hazan · Holden Lee · Karan Singh · Cyril Zhang · Yi Zhang
- 2017 Poster: On the Optimization Landscape of Tensor Decompositions
  Rong Ge · Tengyu Ma
- 2017 Oral: On the Optimization Landscape of Tensor Decompositions
  Rong Ge · Tengyu Ma