Many problems in machine learning can be formulated as optimizing a convex functional over a vector space of measures. This paper studies the convergence of the mirror descent algorithm in this infinite-dimensional setting. Defining Bregman divergences through directional derivatives, we derive the convergence of the scheme for relatively smooth and convex pairs of functionals. Such assumptions allow us to handle non-smooth functionals such as the Kullback--Leibler (KL) divergence. Applying our result to joint distributions and the KL divergence, we show that Sinkhorn's primal iterations for entropic optimal transport in the continuous setting correspond to a mirror descent, and we obtain a new proof of its (sub)linear convergence. We also show that Expectation Maximization (EM) can always formally be written as a mirror descent. When optimizing only over the latent distribution while fixing the mixture parameters -- which corresponds to the Richardson--Lucy deconvolution scheme in signal processing -- we derive sublinear rates of convergence.
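As an illustrative sketch only (the notation below is not taken from the paper: the objective functional F, its first variation \delta F / \delta\mu, the step size \gamma, and the precise spaces of measures are assumptions made here), mirror descent over probability measures with the negative entropy as mirror map, so that the associated Bregman divergence is the KL divergence, reads

\[
  \mu_{n+1} \;=\; \operatorname*{arg\,min}_{\mu} \;\int \frac{\delta F}{\delta \mu}(\mu_n)\,\mathrm{d}\mu \;+\; \frac{1}{\gamma}\,\mathrm{KL}(\mu \,\|\, \mu_n),
\]

which, whenever the minimizer exists and can be normalized, yields the multiplicative update

\[
  \mathrm{d}\mu_{n+1} \;\propto\; \exp\!\Big(-\gamma\,\frac{\delta F}{\delta \mu}(\mu_n)\Big)\,\mathrm{d}\mu_n .
\]

Under the abstract's framing, Sinkhorn's primal iterations and the Richardson--Lucy deconvolution scheme arise as instances of such updates for particular choices of the objective functional.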
Author Information
Pierre-Cyril Aubin-Frankowski (INRIA Paris)
Anna Korba (CREST/ENSAE)
Flavien Léger (INRIA)