Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints

Marc Finzi, Alex Wang, Andrew Wilson

Spotlight presentation: Orals & Spotlights Track 28: Deep Learning
2020-12-10, 07:00–07:10 PST
Poster Session 6
2020-12-10, 09:00–11:00 PST
Abstract: Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics. Recent works improve generalization for predicting trajectories by learning the Hamiltonian or Lagrangian of a system rather than the differential equations directly. While these methods encode the constraints of the systems using generalized coordinates, we show that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multipliers dramatically simplifies the learning problem. We introduce a series of challenging chaotic and extended-body systems, including systems with $N$-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, to push the limits of current approaches. Our experiments show that Cartesian coordinates with explicit constraints lead to a 100x improvement in accuracy and data efficiency.
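The constraint mechanism the abstract describes can be illustrated on the simplest case. The following is a minimal sketch (not the authors' code, and with no learned components) of explicit constraint enforcement for a single pendulum written in Cartesian coordinates: the constraint phi(r) = |r|^2 - L^2 = 0 is differentiated twice, which determines the Lagrange multiplier in closed form and yields an acceleration consistent with the constraint surface.

```python
import numpy as np

# Sketch of explicit constraint enforcement (single pendulum, Cartesian coords).
# State: position r = (x, y), velocity v; constraint phi(r) = |r|^2 - L^2 = 0.
# Newton's equations with a Lagrange multiplier lam:
#     m * a = F + lam * grad_phi,   where grad_phi = 2 r.
# Differentiating phi(r(t)) = 0 twice gives r . a + |v|^2 = 0,
# which fixes lam in closed form.

def constrained_acceleration(r, v, m=1.0, g=9.81):
    F = np.array([0.0, -m * g])   # external force: gravity
    grad = 2.0 * r                # gradient of the constraint phi
    # Solve r . (F + lam * grad) / m + |v|^2 = 0 for lam
    lam = -(m * (v @ v) + r @ F) / (r @ grad)
    return (F + lam * grad) / m

# Example state: pendulum of length 1 hanging at 45 degrees, at rest.
r = np.array([np.sqrt(0.5), -np.sqrt(0.5)])
v = np.zeros(2)
a = constrained_acceleration(r, v)
# The result satisfies the differentiated constraint: r . a + |v|^2 = 0.
```

In the paper's setting the force term comes from a learned Hamiltonian or Lagrangian rather than a hand-specified potential, but the Lagrange-multiplier solve that keeps the dynamics on the constraint surface has this same closed form.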
