Learning dynamics is at the heart of many important applications of machine learning (ML), such as robotics and autonomous driving. In these settings, ML algorithms typically need to reason about a physical system using high-dimensional observations, such as images, without access to the underlying state. Recently, several methods have been proposed that integrate priors from classical mechanics into ML models to address the challenge of physical reasoning from images. In this work, we take a sober look at the current capabilities of these models. To this end, we introduce a suite of 17 datasets with visual observations based on physical systems exhibiting a wide range of dynamics. We conduct a thorough and detailed comparison of the major classes of physically inspired methods alongside several strong baselines. While models that incorporate physical priors can often learn latent spaces with desirable properties, our results demonstrate that these methods fail to significantly improve upon standard techniques. Nonetheless, we find that the use of continuous and time-reversible dynamics benefits models of all classes.
Author Information
Aleksandar Botev (DeepMind)
Andrew Jaegle (DeepMind)
Peter Wirnsberger (DeepMind)
Daniel Hennes (DeepMind)
Irina Higgins (DeepMind)
More from the Same Authors
- 2022 : Solving Math Word Problems with Process-based and Outcome-based Feedback »
  Jonathan Uesato · Nate Kushman · Ramana Kumar · H. Francis Song · Noah Siegel · Lisa Wang · Antonia Creswell · Geoffrey Irving · Irina Higgins
- 2022 : Panel Discussion I: Geometric and topological principles for representation learning in ML »
  Irina Higgins · Taco Cohen · Erik Bekkers · Nina Miolane · Rose Yu
- 2022 : Symmetry-Based Representations for Artificial and Biological Intelligence »
  Irina Higgins
- 2022 Workshop: Information-Theoretic Principles in Cognitive Systems »
  Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman
- 2021 : Invited Talk #3 - Disentanglement for Controllable Image Generation (Irina Higgins) »
  Irina Higgins
- 2021 Poster: SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision »
  Irina Higgins · Peter Wirnsberger · Andrew Jaegle · Aleksandar Botev
- 2021 Tutorial: Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models? »
  Irina Higgins · Antonia Creswell · Sébastien Racanière
- 2021 : Why do we Need Structure and Where does it Come From? »
  Irina Higgins
- 2020 : Invited Talk: Irina Higgins »
  Irina Higgins
- 2020 : Panel Discussion »
  Jessica Hamrick · Klaus Greff · Michelle A. Lee · Irina Higgins · Josh Tenenbaum
- 2020 Poster: Disentangling by Subspace Diffusion »
  David Pfau · Irina Higgins · Alex Botev · Sébastien Racanière
- 2019 : Coffee break, posters, and 1-on-1 discussions »
  Yangyi Lu · Daniel Chen · Hongseok Namkoong · Marie Charpignon · Maja Rudolph · Amanda Coston · Julius von Kügelgen · Niranjani Prasad · Paramveer Dhillon · Yunzong Xu · Yixin Wang · Alexander Markham · David Rohde · Rahul Singh · Zichen Zhang · Negar Hassanpour · Ankit Sharma · Ciarán Lee · Jean Pouget-Abadie · Jesse Krijthe · Divyat Mahajan · Nan Rosemary Ke · Peter Wirnsberger · Vira Semenova · Dmytro Mykhaylov · Dennis Shen · Kenta Takatsu · Liyang Sun · Jeremy Yang · Alexander Franks · Pak Kan Wong · Tauhid Zaman · Shira Mitchell · min kyoung kang · Qi Yang
- 2019 : Panel Discussion: What sorts of cognitive or biological (architectural) inductive biases will be crucial for developing effective artificial intelligence? »
  Irina Higgins · Talia Konkle · Matthias Bethge · Nikolaus Kriegeskorte
- 2019 : What is disentangling and does intelligence need it? »
  Irina Higgins
- 2018 : Invited Talk 3 »
  Irina Higgins
- 2018 Poster: Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting »
  Hippolyt Ritter · Aleksandar Botev · David Barber
- 2018 Poster: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies »
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2018 Spotlight: Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies »
  Alessandro Achille · Tom Eccles · Loic Matthey · Chris Burgess · Nicholas Watters · Alexander Lerchner · Irina Higgins
- 2017 : Poster session »
  Xun Zheng · Tim G. J. Rudner · Christopher Tegho · Patrick McClure · Yunhao Tang · ASHWIN D'CRUZ · Juan Camilo Gamboa Higuera · Chandra Sekhar Seelamantula · Jhosimar Arias Figueroa · Andrew Berlin · Maxime Voisin · Alexander Amini · Thang Long Doan · Hengyuan Hu · Aleksandar Botev · Niko Suenderhauf · CHI ZHANG · John Lambert
- 2017 : Irina Higgins »
  Irina Higgins