Tue Dec 14 08:00 AM -- 04:00 PM (PST)
Physical Reasoning and Inductive Biases for the Real World
Krishna Jatavallabhula · Rika Antonova · Kevin Smith · Hsiao-Yu Tung · Florian Shkurti · Christin Jeannette Bohg · Josh Tenenbaum

Much progress has been made on end-to-end learning for physical understanding and reasoning. If successful, such systems promise far-reaching applications in robotics, machine vision, and the physical sciences. Despite this recent progress, however, our best artificial systems pale in comparison to the flexibility and generalization of human physical reasoning.

Neural information processing systems have shown promising empirical results on synthetic datasets, yet do not transfer well when deployed in novel scenarios (including the physical world). If physical understanding and reasoning techniques are to play a broader role in the physical world, they must be able to function across a wide variety of scenarios, including ones that might lie outside the training distribution. How can we design systems that satisfy these criteria?

Our workshop addresses this broad question by bringing together experts from machine learning, the physical sciences, cognitive and developmental psychology, and robotics to examine how these techniques may one day be employed in the real world. In particular, we aim to investigate the following questions:

1. What forms of inductive biases best enable the development of physical understanding techniques that are applicable to real-world problems?
2. How do we ensure that the outputs of a physical reasoning module are reasonable and physically plausible?
3. Is interpretability a necessity for physical understanding and reasoning techniques to be suitable for real-world problems?

Unlike end-to-end neural architectures that distribute bias across a large set of parameters, modern structured physical reasoning modules (differentiable physics, relational learning, probabilistic programming) maintain modularity and physical interpretability. We will discuss how these inductive biases might aid in generalization and interpretability, and how these techniques impact real-world problems.

Introductory remarks (Live talk)
Tomer Ullman (Live talk)
Nils Thuerey (Live talk)
Karen Liu (Live talk)
Playful Interactions for Representation Learning (Oral)
Efficient and Interpretable Robot Manipulation with Graph Neural Networks (Oral)
Vision-based system identification and 3D keypoint discovery using dynamics constraints (Oral)
3D Neural Scene Representations for Visuomotor Control (Spotlight)
Learning Graph Search Heuristics (Spotlight)
Efficient Partial Simulation Quantitatively Explains Deviations from Optimal Physical Predictions (Spotlight)
TorchDyn: Implicit Models and Neural Numerical Methods in PyTorch (Spotlight)
3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators (Spotlight)
DLO@Scale: A Large-scale Meta Dataset for Learning Non-rigid Object Pushing Dynamics (Spotlight)
AVoE: A Synthetic 3D Dataset on Understanding Violation of Expectation for Artificial Cognition (Spotlight)
Physics-guided Learning-based Adaptive Control on the SE(3) Manifold (Spotlight)
Neural NID Rules (Spotlight)
Kelsey Allen (Live talk)
Kyle Cranmer (Live talk)
Shuran Song (Live talk)
Industry Panel: Kenneth Tran (Koidra), Hiro Ono (NASA JPL), Aleksandra Faust (Google Brain), Michael Roberts (COVID-19 AIX-COVNET, University of Cambridge) (Discussion Panel)
Research Panel (Discussion Panel)
Social - GatherTown (GatherTown Meeting)