

Invited Talk
in the Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization

Rematerialization Algorithms for Memory-efficient Learning

Lionel Eyraud-Dubois

Sat 16 Dec 7:30 a.m. PST — 8 a.m. PST

Abstract:

The training phase of Deep Neural Networks is often very memory-intensive: large amounts of intermediate data have to be kept in memory during one iteration. One possible approach to reduce memory usage is rematerialization, also known as gradient checkpointing, in which some intermediate data are recomputed when needed rather than kept in memory. This provides a tradeoff between memory usage and recomputation time. In this talk I will present several approaches to the underlying optimization problem, in which one wants to minimize recomputation time under a fixed memory budget. The corresponding algorithms have been implemented in easy-to-use libraries for the PyTorch framework, which can significantly reduce memory usage with reasonable overhead.
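To illustrate the memory/recompute tradeoff the abstract describes, here is a minimal sketch using PyTorch's built-in gradient checkpointing utility (torch.utils.checkpoint.checkpoint_sequential). This is a generic example of rematerialization, not the specific libraries or algorithms presented in the talk; the model, sizes, and segment count are arbitrary choices for illustration.

```python
# Minimal sketch of rematerialization (gradient checkpointing) in PyTorch.
# Generic illustration only; not the talk's own libraries or algorithms.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# A deep sequential model whose intermediate activations would normally
# all be stored during the forward pass for use in the backward pass.
model = nn.Sequential(
    *[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(32)]
)

x = torch.randn(64, 1024, requires_grad=True)

# Split the model into 4 segments: only the activations at segment
# boundaries are kept in memory; activations inside each segment are
# recomputed on the fly during the backward pass, trading extra forward
# computation for a smaller peak memory footprint.
out = checkpoint_sequential(model, 4, x, use_reentrant=False)
out.sum().backward()
```

Choosing how many segments to use (and, more generally, which activations to keep) is exactly the optimization problem mentioned above: fewer stored activations means lower peak memory but more recomputation.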

Speaker's Bio: Lionel Eyraud-Dubois received his PhD degree in computer science from the Université de Grenoble. He is currently a full-time researcher with Inria Bordeaux Sud-Ouest in the Topal team. His main research interests encompass combinatorial optimization and operations research techniques for scheduling and resource allocation problems in high-performance computing systems, including the optimization of the training and inference processes of Deep Neural Networks.
