The Sliced-Wasserstein distance (SW) is increasingly used in machine learning applications as an alternative to the Wasserstein distance, offering significant computational and statistical benefits. Since it is defined as an expectation over random projections, SW is commonly approximated by Monte Carlo. We adopt a new perspective to approximate SW by exploiting the concentration of measure phenomenon: under mild assumptions, one-dimensional projections of a high-dimensional random vector are approximately Gaussian. Based on this observation, we develop a simple deterministic approximation of SW. Our method does not require sampling random projections, and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation. We derive nonasymptotic guarantees for our approach, and show that the approximation error goes to zero as the dimension increases, under a weak dependence condition on the data distribution. We validate our theoretical findings on synthetic datasets, and illustrate the proposed approximation on a generative modeling problem.
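To make the two approximation strategies in the abstract concrete, here is a minimal sketch in NumPy. The first function is the usual Monte Carlo estimator of SW between two equal-size samples. The second is a hypothetical illustration of the deterministic idea, not the paper's exact estimator: assuming both distributions are (roughly) centered, concentration of measure suggests a projection theta^T X behaves like N(0, E||X||^2 / d), so SW_2 can be approximated by the closed-form W_2 between the two limiting one-dimensional Gaussians. Both function names and the centering assumption are illustrative choices, not taken from the paper.

```python
import numpy as np

def sliced_wasserstein_mc(X, Y, n_proj=500, p=2, rng=None):
    """Monte Carlo estimate of the Sliced-Wasserstein distance between
    two equal-size samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere S^{d-1}.
    theta = rng.standard_normal((n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto every direction: shape (n, n_proj),
    # then sort each column to compute the 1-D Wasserstein-p distance.
    Xp = np.sort(X @ theta.T, axis=0)
    Yp = np.sort(Y @ theta.T, axis=0)
    # Average the 1-D distances over directions (Monte Carlo over theta).
    return np.mean(np.abs(Xp - Yp) ** p) ** (1.0 / p)

def sliced_wasserstein_gaussian(X, Y):
    """Hypothetical deterministic sketch (NOT the paper's exact estimator):
    for centered high-dimensional data, theta^T X is approximately
    N(0, E||X||^2 / d), so SW_2 is approximated by the closed-form W_2
    between the two limiting 1-D Gaussians, with no projections sampled."""
    d = X.shape[1]
    sx = np.sqrt(np.mean(np.sum(X ** 2, axis=1)) / d)
    sy = np.sqrt(np.mean(np.sum(Y ** 2, axis=1)) / d)
    # W_2(N(0, sx^2), N(0, sy^2)) = |sx - sy| for centered 1-D Gaussians.
    return abs(sx - sy)
```

For centered samples at different scales the two estimates agree closely, while the deterministic version avoids the variance (and the choice of `n_proj`) inherent to the Monte Carlo one.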
Author Information
Kimia Nadjahi (Télécom Paris)
Alain Durmus (ENS Paris Saclay)
Pierre E Jacob (ESSEC Business School)
Roland Badeau (Télécom Paris)
Umut Simsekli (Inria Paris / ENS)
More from the Same Authors
- 2021 Spotlight: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2022 Affinity Workshop: Women in Machine Learning - Virtual
  Mariam Arab · Konstantina Palla · Sergul Aydore · Gloria Namanya · Beliz Gunel · Kimia Nadjahi · Soomin Aga Lee
- 2022 Affinity Workshop: Women in Machine Learning
  Mariam Arab · Konstantina Palla · Sergul Aydore · Gloria Namanya · Beliz Gunel · Kimia Nadjahi · Soomin Aga Lee
- 2021 Poster: Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks
  Melih Barsbey · Milad Sefidgaran · Murat Erdogdu · Gaël Richard · Umut Simsekli
- 2021 Poster: Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
  Tolga Birdal · Aaron Lou · Leonidas Guibas · Umut Simsekli
- 2021 Poster: NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform
  Achille Thin · Yazid Janati El Idrissi · Sylvain Le Corff · Charles Ollion · Eric Moulines · Arnaud Doucet · Alain Durmus · Christian X Robert
- 2021 Poster: Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
  Hongjian Wang · Mert Gurbuzbalaban · Lingjiong Zhu · Umut Simsekli · Murat Erdogdu
- 2021 Poster: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2021 Poster: Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize
  Alain Durmus · Eric Moulines · Alexey Naumov · Sergey Samsonov · Kevin Scaman · Hoi-To Wai
- 2020 Poster: Statistical and Topological Properties of Sliced Probability Divergences
  Kimia Nadjahi · Alain Durmus · Lénaïc Chizat · Soheil Kolouri · Shahin Shahrampour · Umut Simsekli
- 2020 Spotlight: Statistical and Topological Properties of Sliced Probability Divergences
  Kimia Nadjahi · Alain Durmus · Lénaïc Chizat · Soheil Kolouri · Shahin Shahrampour · Umut Simsekli
- 2019 Poster: Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance
  Kimia Nadjahi · Alain Durmus · Umut Simsekli · Roland Badeau
- 2019 Spotlight: Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance
  Kimia Nadjahi · Alain Durmus · Umut Simsekli · Roland Badeau
- 2019 Poster: Estimating Convergence of Markov chains with L-Lag Couplings
  Niloy Biswas · Pierre E Jacob · Paul Vanetti
- 2019 Poster: Generalized Sliced Wasserstein Distances
  Soheil Kolouri · Kimia Nadjahi · Umut Simsekli · Roland Badeau · Gustavo Rohde
- 2018 Poster: The promises and pitfalls of Stochastic Gradient Langevin Dynamics
  Nicolas Brosse · Alain Durmus · Eric Moulines
- 2017: Approximate Bayesian computation with the Wasserstein distance
  Pierre E Jacob
- 2017 Poster: Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding
  Mainak Jas · Tom Dupré la Tour · Umut Simsekli · Alexandre Gramfort