Fast geometric learning with symbolic matrices
Jean Feydy, Joan Glaunès, Benjamin Charlier, Michael Bronstein
Spotlight presentation: Orals & Spotlights Track 17: Kernel Methods/Optimization
on 2020-12-09T07:50:00-08:00 - 2020-12-09T08:00:00-08:00
Abstract: Geometric methods rely on tensors that can be encoded using a symbolic formula and data arrays, such as kernel and distance matrices. We present an extension for standard machine learning frameworks that provides comprehensive support for this abstraction on CPUs and GPUs: our toolbox combines a versatile, transparent user interface with fast runtimes and low memory usage. Unlike general-purpose acceleration frameworks such as XLA, our library turns generic Python code into binaries whose performance is competitive with state-of-the-art geometric libraries, such as FAISS for nearest neighbor search, with the added benefit of flexibility. We perform an extensive evaluation on a broad class of problems: Gaussian modelling, K-nearest neighbors search, geometric deep learning, non-Euclidean embeddings and optimal transport theory. In practice, for geometric problems that involve 1k to 1M samples in dimensions 1 to 100, our library speeds up baseline GPU implementations by up to two orders of magnitude.
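The toolbox described here is the authors' KeOps library. The abstract does not spell out the interface, so the following is a minimal sketch, assuming the PyKeOps `LazyTensor` API for PyTorch: a Gaussian kernel matrix is encoded symbolically from a formula and two data arrays, then reduced without ever being stored in memory.

```python
import torch
from pykeops.torch import LazyTensor

M, N, D = 10_000, 20_000, 3
x = torch.randn(M, D)  # target points
y = torch.randn(N, D)  # source points
b = torch.randn(N, 1)  # signal carried by the source points

x_i = LazyTensor(x[:, None, :])  # symbolic tensor of shape (M, 1, D)
y_j = LazyTensor(y[None, :, :])  # symbolic tensor of shape (1, N, D)

D_ij = ((x_i - y_j) ** 2).sum(-1)  # symbolic (M, N) squared distances
K_ij = (-D_ij).exp()               # symbolic (M, N) Gaussian kernel matrix

a = K_ij @ b  # (M, 1) kernel matrix-vector product; K_ij is never materialized

knn = D_ij.argKmin(K=10, dim=1)  # (M, 10) indices of the 10 nearest
                                 # neighbors of each x_i among the y_j
```

Because `K_ij` stays symbolic, each reduction is compiled into a single fused CPU or GPU kernel, so memory usage scales with M + N rather than with the M-by-N matrix, which is what makes the 1k-to-1M sample regime quoted above tractable.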