Poster
in
Workshop: The Symbiosis of Deep Learning and Differential Equations -- III

Towards Optimal Network Depths: Control-Inspired Acceleration of Training and Inference in Neural ODEs

Keyan Miao · Konstantinos Gatsis

Keywords: [ Neural ODEs ] [ network depth ] [ Lyapunov ] [ minimum-time control ] [ temporal optimization ] [ convergence speed ] [ optimal control ]


Abstract:

Neural Ordinary Differential Equations (ODEs) offer potential for learning continuous dynamics, but their slow training and inference limit broader use. This paper proposes spatial and temporal optimization inspired by control theory: it seeks an optimal network depth to accelerate both training and inference while maintaining performance. Two approaches are presented. The first treats training as a single-stage minimum-time optimal control problem, adjusting the terminal time directly. The second combines pre-training with a Lyapunov-based method, followed by safe terminal time updates in a secondary stage. Experiments confirm that both approaches effectively address Neural ODEs' speed limitations.
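The core idea of the first approach can be illustrated with a toy sketch. In a Neural ODE, "depth" corresponds to the integration horizon T, so T can be treated as a trainable variable alongside the network weights; since dh/dT = f(h(T)), the gradient of a terminal loss with respect to T is the inner product of the loss gradient with the dynamics at the terminal state. The sketch below (assumptions: fixed-step Euler integration, a single tanh layer as the ODE dynamics, a toy quadratic loss, and optimizing T only; this is not the authors' implementation) descends on T until the terminal state matches a target reachable at T = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1  # hypothetical fixed dynamics weights


def f(h):
    """ODE dynamics dh/dt = tanh(W h)."""
    return np.tanh(W @ h)


def integrate(h0, T, n_steps=100):
    """Fixed-step Euler solve of the ODE from t = 0 to t = T."""
    h = h0.copy()
    dt = T / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h


def loss_and_T_grad(h0, target, T):
    """Terminal-state loss plus dL/dT = <dL/dh(T), f(h(T))>,
    which follows from dh/dT = f(h(T)) for the continuous ODE."""
    hT = integrate(h0, T)
    loss = 0.5 * np.sum((hT - target) ** 2)
    dL_dT = float((hT - target) @ f(hT))
    return loss, dL_dT


# Gradient descent on the horizon T alone: the "depth" shrinks toward
# the shortest time at which the target state is reached.
h0 = np.ones(4)
target = integrate(h0, 1.0)  # target chosen to be reachable at T = 1
T = 2.0
for _ in range(200):
    loss, g = loss_and_T_grad(h0, target, T)
    T -= 0.5 * g
```

In the full method the weights and T are optimized jointly, which is what lets the model trade accuracy against a shorter (cheaper) integration horizon; the scalar-T loop above only isolates the terminal-time gradient step.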
