

Poster in Workshop: Differential Geometry meets Deep Learning (DiffGeo4DL)

Universal Approximation Property of Neural Ordinary Differential Equations

Takeshi Teshima · Koichi Tojo · Masahiro Ikeda · Isao Ishikawa · Kenta Oono


Abstract: Neural ordinary differential equations (NODEs) are an invertible neural network architecture, promising for their free-form Jacobian and the availability of a tractable Jacobian determinant estimator. Recently, the representation power of NODEs has been partly uncovered: under certain conditions, they form an $L^p$-universal approximator for continuous maps. However, $L^p$-universality may fail to guarantee approximation over the entire input domain, since it can hold even when the approximator differs greatly from the target function on a small region of the input space. To further uncover the potential of NODEs, we show a stronger approximation property, namely $\sup$-universality for approximating a large class of diffeomorphisms. We prove this by leveraging a structure theorem of the diffeomorphism group, and the result complements the existing literature by establishing a fairly large set of mappings that NODEs can approximate with a stronger guarantee.
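To make the gap between the two notions concrete, here is a standard textbook example (an illustration, not taken from the paper): on $[0,1]$, take the target $f = 0$ and approximators $f_n$ with $f_n(x) = 1$ for $x \in [0, 1/n]$ and $f_n(x) = 0$ otherwise. Then

$$\|f_n - f\|_{L^p} = n^{-1/p} \to 0, \qquad \sup_{x \in [0,1]} |f_n(x) - f(x)| = 1 \ \text{ for all } n,$$

so $L^p$-convergence holds even though the worst-case error never shrinks; $\sup$-universality rules out exactly this failure mode.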
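As a rough illustration of the "free-form Jacobian and tractable Jacobian determinant estimator" mentioned in the abstract, the sketch below shows a NODE forward pass in JAX with a Hutchinson-type divergence estimator accumulating the log-determinant along the flow. This is a minimal sketch under our own assumptions (the vector field `vector_field`, its MLP parameterization, and the fixed-step Euler discretization are illustrative choices), not the authors' implementation.

```python
import jax
import jax.numpy as jnp

# Hypothetical vector field f(t, x): a one-hidden-layer MLP with time
# entering additively. Any Lipschitz field gives an invertible flow.
def vector_field(params, t, x):
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1 + t)
    return W2 @ h + b2

def hutchinson_divergence(params, t, x, key):
    # Unbiased estimate of div f = tr(df/dx) via E[v^T (df/dx) v]
    # with Rademacher probe v; one JVP instead of a full Jacobian.
    v = jax.random.rademacher(key, x.shape, dtype=x.dtype)
    _, jv = jax.jvp(lambda y: vector_field(params, t, y), (x,), (v,))
    return jnp.dot(v, jv)

def node_forward(params, x0, key, n_steps=100, t1=1.0):
    # Euler integration of dx/dt = f(t, x), jointly accumulating
    # d(log|det J|)/dt = div f(t, x) along the trajectory.
    dt = t1 / n_steps
    x, logdet = x0, 0.0
    for i in range(n_steps):
        t = i * dt
        key, sub = jax.random.split(key)
        logdet += hutchinson_divergence(params, t, x, sub) * dt
        x = x + vector_field(params, t, x) * dt
    return x, logdet

# Usage with randomly initialized (illustrative) parameters:
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d, width = 2, 16
params = (0.1 * jax.random.normal(k1, (width, d)), jnp.zeros(width),
          0.1 * jax.random.normal(k2, (d, width)), jnp.zeros(d))
x1, logdet = node_forward(params, jnp.array([1.0, -0.5]), k3)
```

The forward map is invertible because it is the time-one flow of an ODE (integrating the same field backward recovers the input), and the divergence integral replaces an $O(d^3)$ determinant with a cheap stochastic trace estimate.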
