

Poster

Diffeomorphic interpolation for efficient persistence-based topological optimization

Mathieu Carrière · Marc Theveneau · Théo Lacombe

East Exhibit Hall A-C #3504
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Topological Data Analysis (TDA) provides a pipeline to extract quantitative and powerful topological descriptors from structured objects. This enables the definition of topological loss functions, which assess the extent to which a given object exhibits some topological properties. One can then use these losses to perform topological optimization via gradient descent routines. While theoretically sound, topological optimization faces an important challenge: gradients tend to be extremely sparse, in the sense that the loss function typically depends (locally) on only very few coordinates of the input object, yielding dramatically slow optimization schemes in practice. In this work, focusing on the central case of topological optimization for point clouds, we propose to overcome this limitation using diffeomorphic interpolation, turning sparse gradients into smooth vector fields defined on the whole space. In particular, this approach combines efficiently with the subsampling techniques routinely used in TDA, as the diffeomorphism derived from the gradient computed on a subsample can be used to update the coordinates of the full, possibly large, input object. We then illustrate the usefulness of our approach on black-box autoencoder (AE) regularization, where we aim to enforce topological priors on the latent spaces associated with fixed, black-box AE models without modifying their (unknown) architectures and parameters. We show empirically that, while vanilla topological optimization has to be re-run every time new data comes out of the black-box models, the diffeomorphic flow can be learned once and then re-applied to new data in linear time. Moreover, reverting the flow allows us to generate data by sampling the topologically optimized latent space directly, allowing for better interpretability of the model.
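To make the core idea concrete, here is a minimal, hypothetical sketch of the interpolation step the abstract describes: a persistence-based gradient that is nonzero on only a few subsampled points is extended to a smooth vector field on the whole point cloud via a Gaussian kernel. The function name `kernel_interpolate`, the kernel choice, the bandwidth `sigma`, and all toy data are illustrative assumptions, not the paper's actual construction; the paper additionally derives the field so that the induced flow is a diffeomorphism, which plain kernel smoothing only approximates.

```python
import numpy as np

def kernel_interpolate(X_sub, grad_sub, X_full, sigma=0.5):
    """Extend a sparse gradient defined on a subsample X_sub to a smooth
    vector field on the full cloud X_full via Gaussian kernel smoothing.
    (Illustrative sketch only; the paper's diffeomorphic interpolation
    further guarantees that the resulting flow is invertible.)"""
    # pairwise squared distances, shape (n_full, n_sub)
    d2 = ((X_full[:, None, :] - X_sub[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))  # kernel weights
    # weighted average of subsample gradients at every full-cloud point
    return W @ grad_sub / (W.sum(1, keepdims=True) + 1e-12)

# toy usage: a sparse gradient supported on a few subsampled points
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                   # full point cloud
idx = rng.choice(1000, size=50, replace=False)   # subsample indices
g = np.zeros((50, 2))
g[:5] = rng.normal(size=(5, 2))                  # gradient nonzero on 5 points
V = kernel_interpolate(X[idx], g, X)             # smooth field on all points
X_new = X - 0.1 * V                              # one step moves every point
```

In this toy example the vanilla gradient step would move only the five points with nonzero gradient, whereas the interpolated field `V` updates all 1000 coordinates at once, which is the speed-up the abstract refers to.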
