Workshop
Physical Reasoning and Inductive Biases for the Real World
Krishna Murthy Jatavallabhula · Rika Antonova · Kevin Smith · Hsiao-Yu Tung · Florian Shkurti · Jeannette Bohg · Josh Tenenbaum

Tue Dec 14 08:00 AM -- 04:00 PM (PST)
Event URL: https://physical-reasoning.github.io/

Much progress has been made on end-to-end learning for physical understanding and reasoning. If successful, such systems promise far-reaching applications in robotics, machine vision, and the physical sciences. Despite this recent progress, however, our best artificial systems pale in comparison to the flexibility and generalization of human physical reasoning.

Neural information processing systems have shown promising empirical results on synthetic datasets, yet do not transfer well when deployed in novel scenarios (including the physical world). If physical understanding and reasoning techniques are to play a broader role in the physical world, they must be able to function across a wide variety of scenarios, including ones that might lie outside the training distribution. How can we design systems that satisfy these criteria?

Our workshop aims to address this broad question by bringing together experts from machine learning, the physical sciences, cognitive and developmental psychology, and robotics to explore how these techniques may one day be employed in the real world. In particular, we aim to investigate the following questions:

1. What forms of inductive biases best enable the development of physical understanding techniques that are applicable to real-world problems?
2. How do we ensure that the outputs of a physical reasoning module are reasonable and physically plausible?
3. Is interpretability a necessity for physical understanding and reasoning techniques to be suitable for real-world problems?

Unlike end-to-end neural architectures that distribute bias across a large set of parameters, modern structured physical reasoning modules (differentiable physics, relational learning, probabilistic programming) maintain modularity and physical interpretability. We will discuss how these inductive biases might aid in generalization and interpretability, and how these techniques impact real-world problems.
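To make the contrast concrete, below is a minimal sketch (illustrative only, not code from the workshop or any speaker) of the differentiable-physics flavor of this idea in JAX. The simulator's single learnable parameter is the physically interpretable gravitational acceleration g, recovered by gradient descent through the simulator; all names and values here are hypothetical.

```python
# A minimal, hypothetical sketch of the differentiable-physics inductive
# bias: the learnable parameter is a physically meaningful quantity
# (gravitational acceleration g) rather than distributed network weights.
import jax
import jax.numpy as jnp

def simulate(g, v0=10.0, dt=0.05, steps=40):
    """Differentiable Euler integration of 1-D motion under gravity g."""
    def step(state, _):
        y, v = state
        v = v - g * dt            # gravity enters as an explicit parameter
        y = y + v * dt
        return (y, v), y
    _, trajectory = jax.lax.scan(step, (0.0, v0), None, length=steps)
    return trajectory

# Synthetic "observations" generated with the true value g = 9.81.
observed = simulate(9.81)

def loss(g):
    return jnp.mean((simulate(g) - observed) ** 2)

# Gradient descent through the simulator recovers an interpretable estimate.
g_hat = 5.0
for _ in range(200):
    g_hat = g_hat - 0.1 * jax.grad(loss)(g_hat)
print(f"estimated g: {float(g_hat):.3f}")  # converges toward 9.81
```

The same gradient machinery that trains an end-to-end network here optimizes a parameter a physicist can read off directly, which is one sense in which such modules retain modularity and physical interpretability.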

Author Information

Krishna Murthy Jatavallabhula (Mila, Université de Montréal)
Rika Antonova (Stanford University)

Rika is a postdoc at the [Stanford IPRL](http://iprl.stanford.edu/#people) lab, part of the NSF/CRA [CI Fellowship](https://cifellows2020.org/2020-class/) program, doing research on active learning of [transferable priors, kernels, and latent representations for robotics](https://cccblog.org/2021/05/26/active-learning-of-transferable-priors-kernels-and-latent-representations-for-robotics/). Rika completed her PhD on [data-efficient simulation-to-reality transfer](http://kth.diva-portal.org/smash/record.jsf?pid=diva2:1476620) at the Robotics, Perception and Learning lab at KTH, Stockholm, in the group headed by Danica Kragic. Before that, Rika was a Master's student at the Robotics Institute at Carnegie Mellon University, developing Bayesian optimization approaches for learning control parameters for bipedal locomotion (with Akshara Rai and Chris Atkeson). Rika's MS advisor at CMU was Emma Brunskill, in whose group Rika worked on reinforcement learning algorithms for education. A few years earlier, Rika was a software engineer at Google, first in the Search Personalization group and then on the Character Recognition team (developing the open-source OCR engine Tesseract).

Kevin Smith (MIT)
Hsiao-Yu Tung (Carnegie Mellon University)
Florian Shkurti (University of Toronto)
Jeannette Bohg (Stanford University)
Josh Tenenbaum (MIT)

Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).
