Software packages like TensorFlow and PyTorch are designed to support linear algebra operations, and their speed and usability determine their success. However, by prioritising speed, they often neglect memory requirements. As a consequence, implementations of memory-intensive algorithms that are convenient in terms of software design often cannot be run for large problems due to memory overflows. Memory-efficient solutions require complex programming approaches with significant logic outside the computational framework, which impairs the adoption and use of such algorithms. To address this, we developed an XLA compiler extension that adjusts the computational data-flow representation of an algorithm according to a user-specified memory limit. We show that k-nearest-neighbour search, sparse Gaussian process regression methods and Transformers can be run on a single device at a much larger scale, where standard implementations would have failed. Our approach leads to better use of hardware resources. We believe that further focus on removing memory constraints at a compiler level will widen the range of machine learning methods that can be developed in the future.
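The trade-off the abstract describes can be made concrete with a small sketch (ours, not the paper's code), using k-nearest neighbours as an example. The naive TensorFlow version below materialises the full n-by-m distance matrix and overflows device memory for large n and m; the memory-safe rewrite needs explicit chunking logic that lives outside the linear-algebra expressions. Function names and the chunking scheme are illustrative assumptions; the paper's compiler extension is intended to apply such splits automatically inside XLA, so the user can keep the naive formulation.

import tensorflow as tf

def knn_naive(queries, points, k):
    # Pairwise squared Euclidean distances via broadcasting: the
    # intermediate `diff` tensor has shape [n, m, d] and `dists` has
    # shape [n, m], so peak memory grows with n * m. Convenient to
    # write, but it overflows memory for large inputs.
    diff = queries[:, None, :] - points[None, :, :]
    dists = tf.reduce_sum(diff ** 2, axis=-1)
    neg_dists, indices = tf.math.top_k(-dists, k=k)
    return -neg_dists, indices

def knn_chunked(queries, points, k, chunk_size=1024):
    # Memory-safe variant (hypothetical hand-written workaround):
    # process queries in chunks so at most chunk_size * m distances
    # are live at any time. The looping and stitching logic sits
    # outside the linear-algebra code, which is exactly the burden
    # a compiler-level solution removes.
    values, indices = [], []
    for start in range(0, queries.shape[0], chunk_size):
        chunk = queries[start:start + chunk_size]
        diff = chunk[:, None, :] - points[None, :, :]
        dists = tf.reduce_sum(diff ** 2, axis=-1)
        neg_d, idx = tf.math.top_k(-dists, k=k)
        values.append(-neg_d)
        indices.append(idx)
    return tf.concat(values, axis=0), tf.concat(indices, axis=0)

Both functions return the same results; the chunked one trades a single large intermediate for many small ones at the cost of extra bookkeeping code.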
Author Information
Artem Artemev (Imperial College London)
Yuze An
Tilman Roeder
Mark van der Wilk (Imperial College London)
More from the Same Authors
- 2022 : Actually Sparse Variational Gaussian Processes
  Jake Cunningham · So Takao · Mark van der Wilk · Marc Deisenroth
- 2022 : Recommendations for Baselines and Benchmarking Approximate Gaussian Processes
  Sebastian Ober · David Burt · Artem Artemev · Mark van der Wilk
- 2022 : Sparse Convolutions on Lie Groups
  Tycho van der Ouderaa · Mark van der Wilk
- 2022 : Causal Discovery using Marginal Likelihood
  Anish Dhir · Mark van der Wilk
- 2022 Poster: Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations
  Alexander Immer · Tycho van der Ouderaa · Gunnar Rätsch · Vincent Fortuin · Mark van der Wilk
- 2022 Poster: SnAKe: Bayesian Optimization with Pathwise Exploration
  Jose Pablo Folch · Shiqiang Zhang · Robert Lee · Behrang Shafei · David Walz · Calvin Tsay · Mark van der Wilk · Ruth Misener
- 2022 Poster: Relaxing Equivariance Constraints with Non-stationary Continuous Filters
  Tycho van der Ouderaa · David W. Romero · Mark van der Wilk
- 2021 Poster: Scalable Thompson Sampling using Sparse Gaussian Process Models
  Sattar Vakili · Henry Moss · Artem Artemev · Vincent Dutordoir · Victor Picheny
- 2020 Poster: A Bayesian Perspective on Training Speed and Model Selection
  Clare Lyle · Lisa Schut · Robin Ru · Yarin Gal · Mark van der Wilk
- 2020 Poster: Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty
  Miguel Monteiro · Loic Le Folgoc · Daniel Coelho de Castro · Nick Pawlowski · Bernardo Marques · Konstantinos Kamnitsas · Mark van der Wilk · Ben Glocker