The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend that can be observed when OT is used to disambiguate datasets in applications (e.g., single-cell genomics) or to improve more complex methods (e.g., balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank optimal transport (LOT) approach advocated in \cite{scetbon2021lowrank} holds several promises in that regard, and was shown to complement more established entropic regularization approaches, being able to insert itself into more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have a low nonnegative rank, yielding linear-time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) and practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT.
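To make the linear-time claim concrete, the NumPy sketch below (illustrative only, not the paper's code) shows the coupling structure LOT searches over, P = Q diag(1/g) R^T, and how the transport cost <C, P> can be evaluated without ever forming an n x m matrix when the cost itself factors as C = A B^T. The factors Q, R, g here are hand-built feasible placeholders rather than the output of the actual LOT solver, and A, B are random stand-ins for a factored cost.

```python
import numpy as np

# Illustrative sketch (not from the paper's code) of the low-rank
# coupling structure LOT optimizes over: P = Q diag(1/g) R^T, with
# Q 1_r = a, R 1_r = b and Q^T 1_n = R^T 1_m = g. The factors below
# are hand-built feasible placeholders, not the output of a solver.

rng = np.random.default_rng(0)
n, m, r, d = 500, 400, 10, 3   # sample sizes, coupling rank, cost rank

a = np.full(n, 1.0 / n)        # source marginal
b = np.full(m, 1.0 / m)        # target marginal
g = np.full(r, 1.0 / r)        # inner (low-rank) marginal
Q = np.outer(a, g)             # (n, r): rows sum to a, columns to g
R = np.outer(b, g)             # (m, r): rows sum to b, columns to g

# Cost matrix given in factored form C = A B^T; e.g., squared
# Euclidean costs admit such a factorization with d + 2 columns.
A = rng.normal(size=(n, d))
B = rng.normal(size=(m, d))

# <C, P> = trace(B^T R diag(1/g) Q^T A), evaluated in O((n + m) r d)
# without ever materializing the (n, m) matrices C or P.
cost = np.trace(B.T @ R @ np.diag(1.0 / g) @ Q.T @ A)

# Sanity check against the dense O(nm) computation.
P = Q @ np.diag(1.0 / g) @ R.T
assert np.isclose(cost, np.sum((A @ B.T) * P))
```

Under this parametrization, each update touches only the n x r and m x r factors (plus applications of the factored cost), which is what keeps the per-iteration complexity linear in the number of points when the rank r is small.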
Author Information
Meyer Scetbon (CREST-ENSAE)
Marco Cuturi (Apple)
More from the Same Authors
- 2021 : Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and Costs
  Meyer Scetbon · Gabriel Peyré · Marco Cuturi
- 2023 Workshop: Optimal Transport and Machine Learning
  Anna Korba · Aram-Alexandre Pooladian · Charlotte Bunne · David Alvarez-Melis · Marco Cuturi · Ziv Goldfeld
- 2023 Poster: Unbalanced Low-rank Optimal Transport Solvers
  Meyer Scetbon · Michal Klein · Giovanni Palla · Marco Cuturi
- 2022 Poster: Supervised Training of Conditional Monge Maps
  Charlotte Bunne · Andreas Krause · Marco Cuturi
- 2022 Poster: Efficient and Modular Implicit Differentiation
  Mathieu Blondel · Quentin Berthet · Marco Cuturi · Roy Frostig · Stephan Hoyer · Felipe Llinares-Lopez · Fabian Pedregosa · Jean-Philippe Vert
- 2020 Poster: Linear Time Sinkhorn Divergences using Positive Features
  Meyer Scetbon · Marco Cuturi
- 2019 Poster: Comparing distributions: $\ell_1$ geometry improves kernel two-sample testing
  Meyer Scetbon · Gaël Varoquaux
- 2019 Spotlight: Comparing distributions: $\ell_1$ geometry improves kernel two-sample testing
  Meyer Scetbon · Gaël Varoquaux