Wasserstein gradient flows are continuous-time dynamics that define curves of steepest descent for an objective function over the space of probability measures (the Wasserstein space). This objective is typically a divergence with respect to a fixed target distribution. In recent years, these continuous-time dynamics have been used to study the convergence of machine learning algorithms that aim to approximate a probability distribution. However, the discrete-time behavior of these algorithms may differ from the continuous-time dynamics. Moreover, although discretized gradient flows have been proposed in the literature, little is known about their minimization power. In this work, we propose a Forward Backward (FB) discretization scheme that can handle the case where the objective function is the sum of a smooth term and a nonsmooth geodesically convex term. Using techniques from convex optimization and optimal transport, we analyze the FB scheme as a minimization algorithm on the Wasserstein space. More precisely, we show under mild assumptions that the FB scheme has convergence guarantees similar to those of the proximal gradient algorithm in Euclidean spaces and to those of the associated Wasserstein gradient flow.
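For intuition only, one iteration of a forward-backward splitting scheme in the Wasserstein space can be sketched as below, assuming the smooth term is a potential energy \(\mathcal{F}(\mu) = \int f \, d\mu\) with a smooth potential \(f\), \(\mathcal{G}\) is the nonsmooth geodesically convex term, \(\gamma > 0\) is a step size, \((\cdot)_{\#}\) denotes the pushforward of a measure, and \(W_2\) is the 2-Wasserstein distance. These symbols and the exact form of the update are illustrative and may differ from the scheme analyzed in the paper.

% Illustrative sketch: forward (explicit) gradient step on the smooth potential f,
% followed by a backward (implicit) JKO-type proximal step on the nonsmooth term G.
\[
\mu_{k+1/2} = \big(\mathrm{Id} - \gamma \nabla f\big)_{\#}\, \mu_k
\qquad \text{(forward step on the smooth term)}
\]
\[
\mu_{k+1} = \operatorname*{arg\,min}_{\nu \in \mathcal{P}_2(\mathbb{R}^d)}
\Big\{ \mathcal{G}(\nu) + \tfrac{1}{2\gamma}\, W_2^2\big(\nu, \mu_{k+1/2}\big) \Big\}
\qquad \text{(backward, proximal step on } \mathcal{G}\text{)}
\]

This mirrors the Euclidean proximal gradient algorithm, with the Euclidean proximal operator replaced by a Wasserstein (JKO-type) proximal step.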
Author Information
Adil Salim (KAUST)
Anna Korba (UCL)
Giulia Luise (University College London)
More from the Same Authors
- 2022 : Meta Optimal Transport
  Brandon Amos · Samuel Cohen · Giulia Luise · Ievgen Redko
- 2023 Workshop: Optimal Transport and Machine Learning
  Anna Korba · Aram-Alexandre Pooladian · Charlotte Bunne · David Alvarez-Melis · Marco Cuturi · Ziv Goldfeld
- 2022 Poster: Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM
  Pierre-Cyril Aubin-Frankowski · Anna Korba · Flavien Léger
- 2021 : The NeurIPS 2021 BEETL Competition: Benchmarks for EEG Transfer Learning + Q&A
  Xiaoxi Wei · Vinay Jayaram · Sylvain Chevallier · Giulia Luise · Camille Jeunet · Moritz Grosse-Wentrup · Alexandre Gramfort · Aldo A Faisal
- 2020 : Poster Session 1 (gather.town)
  Laurent Condat · Tiffany Vlaar · Ohad Shamir · Mohammadi Zaki · Zhize Li · Guan-Horng Liu · Samuel Horváth · Mher Safaryan · Yoni Choukroun · Kumar Shridhar · Nabil Kahale · Jikai Jin · Pratik Kumar Jawanpuria · Gaurav Kumar Yadav · Kazuki Koyama · Junyoung Kim · Xiao Li · Saugata Purkayastha · Adil Salim · Dighanchal Banerjee · Peter Richtarik · Lakshman Mahto · Tian Ye · Bamdev Mishra · Huikang Liu · Jiajie Zhu
- 2020 Poster: A Non-Asymptotic Analysis for Stein Variational Gradient Descent
  Anna Korba · Adil Salim · Michael Arbel · Giulia Luise · Arthur Gretton
- 2020 Poster: Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm
  Adil Salim · Peter Richtarik
- 2020 Poster: Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning
  Luca Oneto · Michele Donini · Giulia Luise · Carlo Ciliberto · Andreas Maurer · Massimiliano Pontil
- 2020 Poster: Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization
  Dmitry Kovalev · Adil Salim · Peter Richtarik
- 2019 Poster: Maximum Mean Discrepancy Gradient Flow
  Michael Arbel · Anna Korba · Adil Salim · Arthur Gretton
- 2019 Poster: Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates
  Adil Salim · Dmitry Kovalev · Peter Richtarik
- 2019 Poster: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2019 Spotlight: Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates
  Adil Salim · Dmitry Kovalev · Peter Richtarik
- 2019 Spotlight: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2018 Poster: Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance
  Giulia Luise · Alessandro Rudi · Massimiliano Pontil · Carlo Ciliberto