

Poster in Workshop: NeurIPS 2023 Workshop: Machine Learning and the Physical Sciences

KeyCLD: Learning Constrained Lagrangian Dynamics in Keypoint Coordinates from Images

Rembert Daems · Francis Wyffels · Guillaume Crevecoeur


Abstract:

We present KeyCLD, a framework to learn Lagrangian dynamics from images. Learned keypoints represent semantic landmarks in images and can directly represent state dynamics. We show that interpreting this state as Cartesian coordinates, coupled with explicit holonomic constraints, allows expressing the dynamics with a constrained Lagrangian. KeyCLD is trained unsupervised end-to-end on sequences of images. Our method explicitly models the mass matrix, potential energy and the input matrix, thus allowing energy-based control. We demonstrate learning of Lagrangian dynamics from images on the dm_control pendulum, cartpole and acrobot environments. KeyCLD can be learned on these systems, whether they are unactuated, underactuated or fully actuated. Trained models are able to produce long-term video predictions, showing that the dynamics are accurately learned. We compare with Lag-VAE, Lag-caVAE and HGN, and investigate the benefit of the Lagrangian prior and the constraint function. KeyCLD achieves the highest valid prediction time on all benchmarks. Additionally, a straightforward energy-shaping controller is successfully applied on the fully actuated systems.
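To make the constrained-Lagrangian formulation in the abstract concrete, the sketch below shows how constrained Euler-Lagrange dynamics in Cartesian coordinates x can be solved in JAX, given a learned mass matrix M(x), potential energy V(x), input matrix g(x) and holonomic constraint phi(x) = 0. This is a minimal illustration under those assumptions; the function and argument names are hypothetical and do not reflect KeyCLD's actual code or API.

    import jax
    import jax.numpy as jnp

    def accelerations(x, x_dot, u, mass_matrix, potential, input_matrix, constraint):
        """Solve the constrained Euler-Lagrange equations for x_ddot.

        M(x) x_ddot = -grad V(x) + g(x) u + J(x)^T lam,   phi(x) = 0,
        with the holonomic constraint enforced at acceleration level:
        J x_ddot + Jdot x_dot = 0.
        """
        M = mass_matrix(x)                        # (n, n) learned mass matrix
        dV = jax.grad(potential)(x)               # (n,) gradient of potential energy
        f = -dV + input_matrix(x) @ u             # generalized forces
        J = jax.jacobian(constraint)(x)           # (m, n) constraint Jacobian
        # Jdot @ x_dot computed as a directional derivative of J(x) @ x_dot
        Jdot_xdot = jax.jvp(lambda y: jax.jacobian(constraint)(y) @ x_dot,
                            (x,), (x_dot,))[1]
        n, m = M.shape[0], J.shape[0]
        # KKT system: [[M, -J^T], [J, 0]] [x_ddot; lam] = [f; -Jdot_xdot]
        A = jnp.block([[M, -J.T], [J, jnp.zeros((m, m))]])
        b = jnp.concatenate([f, -Jdot_xdot])
        sol = jnp.linalg.solve(A, b)
        return sol[:n]

Because M, V and g are modeled explicitly rather than absorbed into a black-box dynamics network, the same learned quantities could in principle feed an energy-shaping controller of the kind the abstract applies to the fully actuated systems, e.g. choosing u to reshape the potential toward a desired equilibrium and inject damping.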
