Two commonly arising computational tasks in Bayesian learning are Optimization (Maximum A Posteriori estimation) and Sampling (from the posterior distribution). In the convex case these two problems are efficiently reducible to each other. Recent work (Ma et al. 2019) shows that in the non-convex case, sampling can sometimes be provably faster. We present a simpler and stronger separation. We then compare sampling and optimization in more detail and show that they are provably incomparable: there are families of continuous functions for which optimization is easy but sampling is NP-hard, and vice versa. Further, we show function families that exhibit a sharp phase transition in the computational complexity of sampling as one varies the natural temperature parameter. Our results draw on a connection to well-studied analogous separations in the discrete setting.
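To make the two tasks concrete, here is a minimal Python sketch (an illustration, not the paper's construction) contrasting them on a toy 1-D Gaussian log-posterior, where both are easy: MAP estimation via gradient ascent on the log-density, and posterior sampling via unadjusted Langevin dynamics, which perturbs each gradient step with Gaussian noise. The separations in the paper concern non-convex densities where one of these tasks becomes hard while the other stays tractable.

```python
import math
import random

# Toy 1-D log-posterior: N(mu=2, 1), so
# log p(x) = -(x - 2)^2 / 2 + const and grad log p(x) = -(x - 2).
MU = 2.0

def grad_log_p(x):
    return -(x - MU)

# Optimization (MAP): plain gradient ascent on log p.
def map_estimate(x0=0.0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x += step * grad_log_p(x)
    return x

# Sampling: unadjusted Langevin dynamics. Each step is a gradient
# step plus sqrt(2 * eta) Gaussian noise; as eta -> 0 the iterates
# approximately follow the target distribution p.
def langevin_samples(x0=0.0, eta=0.01, iters=5000, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(iters):
        x += eta * grad_log_p(x) + math.sqrt(2 * eta) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

x_map = map_estimate()                    # converges to the mode, 2.0
chain = langevin_samples()[1000:]         # discard burn-in
post_mean = sum(chain) / len(chain)       # close to the posterior mean
```

For this log-concave target the Langevin chain mixes rapidly; the paper's point is that for suitable non-convex families, no sampler can succeed in polynomial time even when the mode is easy to find, and vice versa.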
Author Information
Kunal Talwar (Google)