
TorchDyn: Implicit Models and Neural Numerical Methods in PyTorch
Michael Poli · Stefano Massaroli · Atsushi Yamashita · Hajime Asama · Jinkyoo Park · Stefano Ermon

Tue Dec 14 11:06 AM -- 11:08 AM (PST)

Computation in traditional deep learning models is directly determined by the explicit linking of select primitives, e.g., layers or blocks, arranged in a computational graph. Implicit neural models instead follow a declarative approach: the desiderata are encoded into constraints, and a numerical method is applied to solve the resulting optimization problem as part of the inference pass. Existing open-source frameworks focus on explicit models and do not offer implementations of the numerical routines required to study and benchmark implicit models. We introduce TorchDyn, a PyTorch library fully tailored to implicit learning. TorchDyn primitives are categorized into numerical methods, sensitivity methods, and model classes, with pre-existing implementations that can be combined and repurposed to obtain complex compositional implicit architectures. TorchDyn further offers a collection of step-by-step tutorials and benchmarks designed to accelerate research and improve the robustness of experimental evaluations for implicit models.
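To make the explicit/implicit distinction concrete: in an implicit model such as a neural ODE, the forward pass is not a fixed stack of layers but the numerical solution of an initial-value problem defined by a (learned) vector field. The following is a minimal, dependency-free Python sketch of that idea; the function names are illustrative, not TorchDyn's actual API, and the hand-picked vector field stands in for a trained network.

```python
import math

def euler_integrate(f, x0, t0, t1, steps):
    """Fixed-step explicit Euler method: the simplest numerical routine a
    continuous-depth (neural ODE) model could use for its inference pass."""
    x, t = x0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        x = x + h * f(t, x)  # one Euler step along the vector field
        t += h
    return x

# A hand-picked "vector field" standing in for a learned network f(t, x).
vector_field = lambda t, x: -x

# Inference = solving the initial-value problem x'(t) = f(t, x), x(0) = 1,
# rather than evaluating an explicit chain of layers.
x1 = euler_integrate(vector_field, x0=1.0, t0=0.0, t1=1.0, steps=100)
print(x1)  # close to the exact solution exp(-1) ≈ 0.3679
```

In a library such as TorchDyn, this hand-written loop is replaced by interchangeable solver and sensitivity (gradient) components, which is what allows numerical methods and model classes to be combined and repurposed.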

Author Information

Michael Poli (Stanford University)

My work spans topics in deep learning, dynamical systems, variational inference and numerical methods. I am broadly interested in ensuring the successes achieved by deep learning methods in computer vision and natural language are extended to other engineering domains.

Stefano Massaroli (The University of Tokyo)
Atsushi Yamashita (The University of Tokyo)
Hajime Asama (The University of Tokyo)
Jinkyoo Park (KAIST)
Stefano Ermon (Stanford)
