A recently proposed class of models attempts to learn latent dynamics from high-dimensional observations, such as images, using priors informed by Hamiltonian mechanics. While these models have important potential applications in areas like robotics or autonomous driving, there is currently no good way to evaluate their performance: existing methods primarily rely on image reconstruction quality, which does not always reflect the quality of the learnt latent dynamics. In this work, we empirically highlight the problems with the existing measures and develop a set of new ones, including a binary indicator of whether the underlying Hamiltonian dynamics have been faithfully captured, which we call the Symplecticity Metric, or SyMetric. Our measures take advantage of the known properties of Hamiltonian dynamics and are more discriminative of the model's ability to capture the underlying dynamics than reconstruction error. Using SyMetric, we identify a set of architectural choices that significantly improve the performance of a previously proposed model for inferring latent dynamics from pixels, the Hamiltonian Generative Network (HGN). Unlike the original HGN, the improved model is able to discover an interpretable phase space with physically meaningful latents on some datasets. Furthermore, it is stable for significantly longer rollouts on a diverse range of 13 datasets, producing rollouts of essentially infinite length both forwards and backwards in time with no degradation in quality on a subset of the datasets.
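One known property of Hamiltonian dynamics that such measures can exploit is symplecticity: the time-t flow map of any Hamiltonian system has a Jacobian J satisfying J^T Ω J = Ω, where Ω is the canonical symplectic form. The sketch below is a minimal NumPy illustration of this kind of check, not the SyMetric implementation from the paper; the function names and the finite-difference Jacobian scheme are our own assumptions for demonstration.

```python
import numpy as np

def symplectic_form(dim):
    """Canonical symplectic matrix Omega for a 2*dim phase space [q; p]."""
    I = np.eye(dim)
    Z = np.zeros((dim, dim))
    return np.block([[Z, I], [-I, Z]])

def flow_jacobian(flow, x, eps=1e-5):
    """Central finite-difference Jacobian of a one-step flow map at x."""
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (flow(x + dx) - flow(x - dx)) / (2 * eps)
    return J

def symplecticity_residual(flow, x):
    """Frobenius norm of J^T Omega J - Omega; zero for an exactly symplectic map."""
    Omega = symplectic_form(x.size // 2)
    J = flow_jacobian(flow, x)
    return np.linalg.norm(J.T @ Omega @ J - Omega)

# Example: symplectic Euler step for a harmonic oscillator, H = (q^2 + p^2) / 2.
# This integrator is exactly symplectic, so the residual is at rounding level.
def symplectic_euler(x, dt=0.01):
    q, p = x
    p_new = p - dt * q        # momentum update uses the old position
    q_new = q + dt * p_new    # position update uses the new momentum
    return np.array([q_new, p_new])

print(symplecticity_residual(symplectic_euler, np.array([1.0, 0.5])))
```

A near-zero residual indicates the map preserves the symplectic form; a dissipative map (for example, one that damps the momentum) produces a clearly non-zero residual, which is why such a check is more discriminative of the latent dynamics than pixel reconstruction error.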
Author Information
Irina Higgins (DeepMind)
Peter Wirnsberger (DeepMind)
Andrew Jaegle (DeepMind)
Aleksandar Botev (DeepMind)
More from the Same Authors
- 2021: Which priors matter? Benchmarking models for learning latent dynamics
  Aleksandar Botev · Andrew Jaegle · Peter Wirnsberger · Daniel Hennes · Irina Higgins
- 2022: Solving Math Word Problems with Process-based and Outcome-based Feedback
  Jonathan Uesato · Nate Kushman · Ramana Kumar · H. Francis Song · Noah Siegel · Lisa Wang · Antonia Creswell · Geoffrey Irving · Irina Higgins
- 2022: Panel Discussion I: Geometric and topological principles for representation learning in ML
  Irina Higgins · Taco Cohen · Erik Bekkers · Nina Miolane · Rose Yu
- 2022: Symmetry-Based Representations for Artificial and Biological Intelligence
  Irina Higgins
- 2022 Workshop: Information-Theoretic Principles in Cognitive Systems
  Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman
- 2021: Invited Talk #3 - Disentanglement for Controllable Image Generation (Irina Higgins)
  Irina Higgins
- 2021 Tutorial: Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models?
  Irina Higgins · Antonia Creswell · Sébastien Racanière
- 2021: Why do we Need Structure and Where does it Come From?
  Irina Higgins
- 2020: Invited Talk: Irina Higgins
  Irina Higgins
- 2020: Panel Discussion
  Jessica Hamrick · Klaus Greff · Michelle A. Lee · Irina Higgins · Josh Tenenbaum
- 2020 Poster: Disentangling by Subspace Diffusion
  David Pfau · Irina Higgins · Alex Botev · Sébastien Racanière
- 2019: Coffee break, posters, and 1-on-1 discussions
  Yangyi Lu · Daniel Chen · Hongseok Namkoong · Marie Charpignon · Maja Rudolph · Amanda Coston · Julius von Kügelgen · Niranjani Prasad · Paramveer Dhillon · Yunzong Xu · Yixin Wang · Alexander Markham · David Rohde · Rahul Singh · Zichen Zhang · Negar Hassanpour · Ankit Sharma · Ciarán Lee · Jean Pouget-Abadie · Jesse Krijthe · Divyat Mahajan · Nan Rosemary Ke · Peter Wirnsberger · Vira Semenova · Dmytro Mykhaylov · Dennis Shen · Kenta Takatsu · Liyang Sun · Jeremy Yang · Alexander Franks · Pak Kan Wong · Tauhid Zaman · Shira Mitchell · min kyoung kang · Qi Yang
- 2019: Panel Discussion: What sorts of cognitive or biological (architectural) inductive biases will be crucial for developing effective artificial intelligence?
  Irina Higgins · Talia Konkle · Matthias Bethge · Nikolaus Kriegeskorte
- 2019: What is disentangling and does intelligence need it?
  Irina Higgins
- 2018: Invited Talk 3
  Irina Higgins
- 2018 Poster: Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting
  Hippolyt Ritter · Aleksandar Botev · David Barber
- 2018 Poster: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2018 Spotlight: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2017: Poster session
  Xun Zheng · Tim G. J. Rudner · Christopher Tegho · Patrick McClure · Yunhao Tang · ASHWIN D'CRUZ · Juan Camilo Gamboa Higuera · Chandra Sekhar Seelamantula · Jhosimar Arias Figueroa · Andrew Berlin · Maxime Voisin · Alexander Amini · Thang Long Doan · Hengyuan Hu · Aleksandar Botev · Niko Suenderhauf · CHI ZHANG · John Lambert
- 2017: Irina Higgins
  Irina Higgins