Much progress has been made on end-to-end learning for physical understanding and reasoning. If successful, machine understanding of and reasoning about the physical world promise far-reaching applications in robotics, machine vision, and the physical sciences. Despite this recent progress, however, our best artificial systems pale in comparison to the flexibility and generalization of human physical reasoning.
Neural information processing systems have shown promising empirical results on synthetic datasets, yet do not transfer well when deployed in novel scenarios (including the physical world). If physical understanding and reasoning techniques are to play a broader role in the physical world, they must be able to function across a wide variety of scenarios, including ones that might lie outside the training distribution. How can we design systems that satisfy these criteria?
Our workshop aims to address this broad question by bringing together experts from machine learning, the physical sciences, cognitive and developmental psychology, and robotics to discuss how these techniques may one day be employed in the real world. In particular, we aim to investigate the following questions:
1. What forms of inductive biases best enable the development of physical understanding techniques that are applicable to real-world problems?
2. How do we ensure that the outputs of a physical reasoning module are reasonable and physically plausible?
3. Is interpretability a necessity for physical understanding and reasoning techniques to be suitable for real-world problems?
Unlike end-to-end neural architectures that distribute bias across a large set of parameters, modern structured physical reasoning modules (differentiable physics, relational learning, probabilistic programming) maintain modularity and physical interpretability. We will discuss how these inductive biases might aid in generalization and interpretability, and how these techniques impact real-world problems.
Tue 8:00 a.m. - 8:15 a.m. | Introductory remarks (Live talk)
Tue 8:15 a.m. - 8:45 a.m. | Tomer Ullman (Live talk)
Tue 8:45 a.m. - 9:15 a.m. | Nils Thuerey (Live talk)
Tue 9:15 a.m. - 9:45 a.m. | Karen Liu (Live talk)
Tue 10:30 a.m. - 10:40 a.m. | Playful Interactions for Representation Learning (Oral)
One of the key challenges in visual imitation learning is collecting large amounts of expert demonstrations for a given task. While collecting human demonstrations is becoming easier with teleoperation and low-cost assistive tools, we often still require 100-1000 demonstrations for every task to learn a visual representation and policy. To address this, we turn to an alternate form of data that does not require task-specific demonstrations -- play. Play is a fundamental way children learn skills, behaviors, and visual representations early in development. Importantly, play data is diverse, task-agnostic, and relatively cheap to obtain. In this work, we propose to use playful interactions in a self-supervised manner to learn visual representations for downstream tasks. We collect 2 hours of playful data in 19 diverse environments and use self-predictive learning to extract visual representations. Given these representations, we train policies using imitation learning for two downstream tasks: Pushing and Stacking. Our representations, which are trained from scratch, compare favorably against ImageNet-pretrained representations. Finally, we provide an experimental analysis of the effects of different pretraining modes on downstream task learning.
Sarah Young · Pieter Abbeel · Lerrel Pinto
Tue 10:40 a.m. - 10:50 a.m. | Efficient and Interpretable Robot Manipulation with Graph Neural Networks (Oral)
Manipulation tasks like loading a dishwasher can be seen as a sequence of spatial constraints and relationships between different objects. We aim to discover these rules from demonstrations by posing manipulation as a classification problem over a graph, whose nodes represent task-relevant entities like objects and goals. In our experiments, a single GNN policy trained using imitation learning (IL) on 20 expert demonstrations can solve block-stacking and rearrangement tasks in both simulation and on hardware, generalizing over the number of objects and goal configurations. These experiments show that graphical IL can solve complex long-horizon manipulation problems without requiring detailed task descriptions.
Yixin Lin · Austin Wang · Eric Undersander · Akshara Rai
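The entry above frames manipulation as classification over a graph of task-relevant entities. As a rough illustration of that framing only (not the authors' implementation), the sketch below builds a small fully connected message-passing network in PyTorch that scores each node as the next entity to act on; the class name, feature dimensions, and toy inputs are all hypothetical.

```python
# Minimal sketch (not the authors' code): a relational policy that scores
# which object/goal node to act on next via fully connected message passing.
import torch
import torch.nn as nn

class TinyGNNPolicy(nn.Module):
    def __init__(self, node_dim=16, hidden=64):
        super().__init__()
        # Edge function: combines a (sender, receiver) pair into a message.
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        # Node function: updates each node from its aggregated incoming messages.
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        # Per-node score: "how likely is this node the next entity to act on".
        self.readout = nn.Linear(hidden, 1)

    def forward(self, nodes):                          # nodes: (N, node_dim)
        n = nodes.shape[0]
        senders = nodes.unsqueeze(0).expand(n, n, -1)    # (N, N, d)
        receivers = nodes.unsqueeze(1).expand(n, n, -1)  # (N, N, d)
        messages = self.edge_mlp(torch.cat([senders, receivers], dim=-1))
        agg = messages.sum(dim=0)                        # aggregate over senders
        h = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        return self.readout(h).squeeze(-1)               # (N,) node logits

# Imitation learning then reduces to cross-entropy against the expert's choice.
policy = TinyGNNPolicy()
nodes = torch.randn(5, 16)            # e.g. 4 blocks + 1 goal, featurized
logits = policy(nodes)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([2]))
loss.backward()
```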
Tue 10:50 a.m. - 11:00 a.m. | Vision-based system identification and 3D keypoint discovery using dynamics constraints (Oral)
This paper introduces V-SysId, a novel method that enables simultaneous keypoint discovery, 3D system identification, and extrinsic camera calibration from an unlabeled video taken from a static camera, using only the family of equations of motion of the object of interest as weak supervision. V-SysId takes keypoint trajectory proposals and alternates between maximum likelihood parameter estimation and extrinsic camera calibration, before applying a suitable selection criterion to identify the track of interest. This is then used to train a keypoint tracking model using supervised learning. Results on a range of settings (robotics, physics, physiology) highlight the utility of this approach.
Miguel Jaques · Martin Asenov · Michael Burke · Timothy Hospedales
Tue 11:00 a.m. - 11:02 a.m. | 3D Neural Scene Representations for Visuomotor Control (Spotlight)
Humans have a strong intuitive understanding of the 3D environment around us. The mental model of physics in our brain applies to objects of different materials and enables us to perform a wide range of manipulation tasks that are far beyond the reach of current robots. In this work, we aim to learn models for dynamic 3D scenes purely from 2D visual observations. Our model combines Neural Radiance Fields (NeRF) and time contrastive learning with an autoencoding framework, which learns viewpoint-invariant 3D-aware scene representations. We show that a dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks involving both rigid bodies and fluids, where the target is specified from a viewpoint different from the one the robot operates in. When coupled with an auto-decoding framework, it can even support goal specification from camera viewpoints outside the training distribution. We further demonstrate the richness of the learned 3D dynamics model by performing future prediction and novel view synthesis. Finally, we provide detailed ablation studies of different system designs and a qualitative analysis of the learned representations.
Yunzhu Li · Shuang Li · Vincent Sitzmann · Pulkit Agrawal · Antonio Torralba
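As a rough illustration of the time contrastive component mentioned in the abstract above (not the authors' code), the sketch below implements an InfoNCE-style loss in which embeddings of the same time step from two camera viewpoints are treated as positives and other time steps in the batch as negatives; the tensor shapes, temperature value, and random stand-in inputs are assumptions.

```python
# Minimal sketch of a time-contrastive (InfoNCE-style) objective across views.
import torch
import torch.nn.functional as F

def time_contrastive_loss(z_view_a, z_view_b, temperature=0.1):
    """z_view_a, z_view_b: (T, D) embeddings of T time steps from two cameras."""
    za = F.normalize(z_view_a, dim=-1)
    zb = F.normalize(z_view_b, dim=-1)
    logits = za @ zb.t() / temperature        # (T, T) cross-view similarities
    targets = torch.arange(za.shape[0])       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Random stand-in embeddings keep the sketch runnable; a real model would
# produce them from per-view scene encodings feeding a NeRF-style decoder.
loss = time_contrastive_loss(torch.randn(8, 32, requires_grad=True),
                             torch.randn(8, 32, requires_grad=True))
loss.backward()
```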
Tue 11:02 a.m. - 11:04 a.m. | Learning Graph Search Heuristics (Spotlight)
Searching for a path between two nodes in a graph is one of the most well-studied and fundamental problems in computer science. In numerous domains such as robotics, AI, or biology, practitioners develop search heuristics to accelerate their pathfinding algorithms. However, hand-designing heuristics for the problem and graph structure of a given use case is a laborious and complex process. Here we present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and training algorithm for discovering graph search and navigation heuristics from data by leveraging recent advances in imitation learning and graph representation learning. At training time, we aggregate datasets of search trajectories and ground-truth shortest-path distances, which we use to train a specialized graph neural network-based heuristic function using backpropagation through steps of the pathfinding process. Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph size, and can easily be incorporated in an algorithm such as A* at test time. Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 40.8% on average and allows for fast planning in time-critical robotics domains.
Michal Pándy · Rex Ying · Gabriele Corso · Petar Veličković · Jure Leskovec · Pietro Liò
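To make concrete how a learned heuristic such as PHIL's could be incorporated in A* at test time, here is a minimal, generic A* sketch (not the PHIL implementation) in which the heuristic is simply any callable, e.g. a trained GNN's predicted distance-to-goal; the toy graph and the zero-heuristic stand-in are illustrative only.

```python
# Minimal sketch: A* where h(node) comes from a learned model rather than
# a hand-designed rule.
import heapq

def a_star(graph, start, goal, learned_h):
    """graph: dict node -> list of (neighbor, edge_cost); learned_h: node -> float."""
    frontier = [(learned_h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                heapq.heappush(frontier, (g_new + learned_h(nbr), g_new, nbr, path + [nbr]))
    return None, float("inf")

# A trained network would supply learned_h; a trivial stand-in keeps this runnable.
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}
path, cost = a_star(graph, "A", "C", learned_h=lambda n: 0.0)
print(path, cost)   # ['A', 'B', 'C'] 2.0
```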
Tue 11:04 a.m. - 11:06 a.m. | Efficient Partial Simulation Quantitatively Explains Deviations from Optimal Physical Predictions (Spotlight)
Humans are adept at planning actions in real-time dynamic physical environments. Machine intelligence struggles with this task, one cause being that running simulators of complex, real-world environments is computationally expensive. Yet recent accounts suggest that humans use mental simulation to make intuitive physical judgments. How is human physical reasoning so accurate while maintaining computational tractability? We suggest that human behavior is well described by partial simulation, which moves forward in time only the parts of the world deemed relevant. We take as a case study Ludwin-Peery, Bramley, Davis, and Gureckis (2020), in which a conjunction fallacy was found in the domain of intuitive physics. This phenomenon is difficult to explain with full simulation, but we show it can be quantitatively accounted for with partial simulation. We discuss how AI research could make use of efficient partial simulation in implementations of commonsense physical reasoning.
Ilona Bass · Kevin Smith · Elizabeth Bonawitz · Tomer Ullman
Tue 11:06 a.m. - 11:08 a.m. | TorchDyn: Implicit Models and Neural Numerical Methods in PyTorch (Spotlight)
Computation in traditional deep learning models is directly determined by the explicit linking of select primitives, e.g., layers or blocks arranged in a computational graph. Implicit neural models instead follow a declarative approach: a desideratum is encoded into constraints, and a numerical method is applied to solve the resulting optimization problem as part of the inference pass. Existing open-source frameworks focus on explicit models and do not offer implementations of the numerical routines required to study and benchmark implicit models. We introduce TorchDyn, a PyTorch library fully tailored to implicit learning. TorchDyn primitives are categorized into numerical and sensitivity methods and model classes, with pre-existing implementations that can be combined and repurposed to obtain complex compositional implicit architectures. TorchDyn further offers a collection of step-by-step tutorials and benchmarks designed to accelerate research and improve the robustness of experimental evaluations for implicit models.
Michael Poli · Stefano Massaroli · Atsushi Yamashita · Hajime Asama · Jinkyoo Park · Stefano Ermon
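A minimal usage sketch of the library's neural ODE wrapper, assuming the torchdyn.core.NeuralODE class and the t_span-conditioned forward pass described in the TorchDyn documentation; module paths, constructor keywords, and return signatures have varied across releases, so treat this as illustrative rather than canonical and check the current docs.

```python
# Minimal sketch assuming TorchDyn's NeuralODE wrapper (API details vary by
# release -- consult the library documentation before running).
import torch
import torch.nn as nn
from torchdyn.core import NeuralODE   # older releases expose this elsewhere

# Explicit vector field f(x); the implicit model is the ODE solve through it.
vector_field = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
model = NeuralODE(vector_field, sensitivity='adjoint', solver='dopri5')

x0 = torch.randn(32, 2)                   # batch of initial states
t_span = torch.linspace(0.0, 1.0, 10)     # times at which to evaluate the flow
t_eval, trajectory = model(x0, t_span)    # trajectory: (len(t_span), 32, 2)
```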
Tue 11:08 a.m. - 11:10 a.m. | 3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators (Spotlight)
We propose an action-conditioned dynamics model that predicts scene changes caused by object and agent interactions in a viewpoint-invariant 3D neural scene representation space, inferred from RGB-D videos. In this 3D feature space, objects do not interfere with one another and their appearance persists over time and across viewpoints. This permits our model to predict scenes far into the future by simply "moving" 3D object features based on cumulative object motion predictions. Object motion predictions are computed by a graph neural network that operates over the object features extracted from the 3D neural scene representation. Our model generalizes well across varying numbers and appearances of interacting objects as well as across camera viewpoints, outperforming existing 2D and 3D dynamics models, and enables successful sim-to-real transfer.
Hsiao-Yu Tung · Zhou Xian · Mihir Prabhudesai · Katerina Fragkiadaki
Tue 11:10 a.m. - 11:12 a.m. | DLO@Scale: A Large-scale Meta Dataset for Learning Non-rigid Object Pushing Dynamics (Spotlight)
The ability to quickly understand our physical environment and make predictions about interacting objects is fundamental to us humans. To equip artificial agents with similar reasoning capabilities, machine learning can be used to approximate the underlying state dynamics of a system. In this regard, deep learning has gained much popularity, yet it relies on the availability of large enough datasets. In this work, we present DLO@Scale, a new dataset for studying future state prediction in the context of multi-body deformable linear object pushing. We provide a large collection of 100 million simulated physical interactions, enabling thorough statistical analysis and algorithmic benchmarks. Our data captures complex mechanical phenomena such as elasticity, plastic deformation, and friction. An important aspect is the large variation of the physical parameters, which also makes the dataset suitable for testing meta-learning algorithms. We describe DLO@Scale in detail and present a first empirical evaluation using neural network baselines.
Robert Gieselmann · Danica Kragic · Florian T. Pokorny · Alberta Longhini
Tue 11:12 a.m. - 11:14 a.m. | AVoE: A Synthetic 3D Dataset on Understanding Violation of Expectation for Artificial Cognition (Spotlight)
Recent work in cognitive reasoning and computer vision has led to increasing popularity of the Violation-of-Expectation (VoE) paradigm in synthetic datasets. Inspired by work in infant psychology, researchers have started evaluating a model's ability to discriminate between expected and surprising scenes as a sign of its reasoning ability. Existing VoE-based 3D datasets in physical reasoning provide only vision data. However, current cognitive models of physical reasoning by psychologists reveal that infants create high-level abstract representations of objects and interactions. Capitalizing on this knowledge, we propose AVoE: a synthetic 3D VoE-based dataset that presents stimuli from multiple novel sub-categories for five event categories of physical reasoning. Compared to existing work, AVoE is armed with ground-truth labels of abstract features and rules, augmented to the vision data, paving the way for high-level symbolic predictions in physical reasoning tasks.
Arijit Dasgupta · Jiafei Duan · Marcelo Ang Jr · Cheston Tan
Tue 11:14 a.m. - 11:16 a.m. | Physics-guided Learning-based Adaptive Control on the SE(3) Manifold (Spotlight)
In real-world robotics applications, accurate models of robot dynamics are critical for safe and stable control in rapidly changing operational conditions. This motivates the use of machine learning techniques to approximate robot dynamics and their disturbances over a training set of state-control trajectories. This paper demonstrates that inductive biases arising from physics laws can be used to improve the data efficiency and accuracy of the approximated dynamics model. For example, the dynamics of many robots, including ground, aerial, and underwater vehicles, are described using their $SE(3)$ pose and satisfy conservation of energy principles. We design a physically plausible model of the robot dynamics by imposing the structure of Hamilton's equations of motion in the design of a neural ordinary differential equation (ODE) network. The Hamiltonian structure guarantees satisfaction of $SE(3)$ kinematic constraints and energy conservation by construction. It also allows us to derive an energy-based adaptive controller that achieves trajectory tracking while compensating for disturbances. Our learning-based adaptive controller is verified on an under-actuated quadrotor robot.
Thai Duong · Nikolay Atanasov
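For reference, the Hamiltonian structure imposed by the network corresponds, in its canonical Euclidean form, to Hamilton's equations below; the paper's formulation is the analogous structure on the $SE(3)$ manifold with control inputs entering as generalized forces, so this is only the simplest instance of the inductive bias.

```latex
% Canonical Hamilton's equations: the network parameterizes H(q, p) and the
% dynamics follow by differentiation, so H is conserved along the unforced flow.
\dot{q} = \frac{\partial H(q, p)}{\partial p}, \qquad
\dot{p} = -\frac{\partial H(q, p)}{\partial q}
```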
Tue 11:16 a.m. - 11:18 a.m. | Neural NID Rules (Spotlight)
Abstract object properties and their relations are deeply rooted in human common sense, allowing people to predict the dynamics of the world even in situations that are novel but governed by familiar laws of physics. Standard machine learning models in model-based reinforcement learning struggle to generalize in this way. Inspired by the classic framework of noisy indeterministic deictic (NID) rules, we introduce Neural NID, a method that learns abstract object properties and relations between objects with a suitably regularized graph neural network. We validate the greater generalization capability of Neural NID on simple benchmarks specifically designed to assess the transition dynamics learned by the model.
Luca Viano · Johanni Brea
Tue 11:30 a.m. - 12:00 p.m. | Kelsey Allen (Live talk)
Tue 12:00 p.m. - 12:30 p.m. | Kyle Cranmer (Live talk)
Tue 12:30 p.m. - 1:00 p.m. | Shuran Song (Live talk)
Tue 1:00 p.m. - 2:00 p.m. | Industry Panel (Discussion Panel): Kenneth Tran (Koidra), Hiro Ono (NASA JPL), Aleksandra Faust (Google Brain), Michael Roberts (COVID-19 AIX-COVNET, University of Cambridge)
Tue 2:00 p.m. - 2:45 p.m. | Research Panel (Discussion Panel)
Tue 2:45 p.m. - 4:00 p.m. | Social - GatherTown (GatherTown Meeting) | [ protected link dropped ]
Author Information
Krishna Murthy Jatavallabhula (Mila, Universite de Montreal)
Rika Antonova (Stanford University)
Rika is a postdoc at the [Stanford IPRL](http://iprl.stanford.edu/#people) lab, part of the NSF/CRA [CI Fellowship](https://cifellows2020.org/2020-class/) program, doing research on active learning of [transferable priors, kernels, and latent representations for robotics](https://cccblog.org/2021/05/26/active-learning-of-transferable-priors-kernels-and-latent-representations-for-robotics/). Rika completed her PhD work on [data-efficient simulation-to-reality transfer](http://kth.diva-portal.org/smash/record.jsf?pid=diva2:1476620) at the Robotics, Perception and Learning lab at KTH, Stockholm, in the group headed by Danica Kragic. Before that, Rika was a Master's student at the Robotics Institute at Carnegie Mellon University, developing Bayesian optimization approaches for learning control parameters for bipedal locomotion (with Akshara Rai and Chris Atkeson). Rika's CMU MS advisor was Emma Brunskill, and in her group Rika worked on developing reinforcement learning algorithms for education. A few years earlier, Rika was a software engineer at Google, first in the Search Personalization group and then in the Character Recognition team (developing the open-source OCR engine Tesseract).
Kevin Smith (MIT)
Hsiao-Yu Tung (Carnegie Mellon University)
Florian Shkurti (University of Toronto)
Jeannette Bohg (Stanford University)
Josh Tenenbaum (MIT)
Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).
More from the Same Authors
-
2021 : ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation »
Chuang Gan · Jeremy Schwartz · Seth Alter · Damian Mrowca · Martin Schrimpf · James Traer · Julian De Freitas · Jonas Kubilius · Abhishek Bhandwaldar · Nick Haber · Megumi Sano · Kuno Kim · Elias Wang · Michael Lingelbach · Aidan Curtis · Kevin Feigelis · Daniel Bear · Dan Gutfreund · David Cox · Antonio Torralba · James J DiCarlo · Josh Tenenbaum · Josh McDermott · Dan Yamins -
2021 : Physion: Evaluating Physical Prediction from Vision in Humans and Machines »
Daniel Bear · Elias Wang · Damian Mrowca · Felix Binder · Hsiao-Yu Tung · Pramod RT · Cameron Holdaway · Sirui Tao · Kevin Smith · Fan-Yun Sun · Fei-Fei Li · Nancy Kanwisher · Josh Tenenbaum · Dan Yamins · Judith Fan -
2021 Spotlight: Learning to Compose Visual Relations »
Nan Liu · Shuang Li · Yilun Du · Josh Tenenbaum · Antonio Torralba -
2021 Spotlight: Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering »
Vincent Sitzmann · Semon Rezchikov · Bill Freeman · Josh Tenenbaum · Fredo Durand -
2021 : STAR: A Benchmark for Situated Reasoning in Real-World Videos »
Bo Wu · Shoubin Yu · Zhenfang Chen · Josh Tenenbaum · Chuang Gan -
2021 : Dynamic Environments with Deformable Objects »
Rika Antonova · peiyang shi · Hang Yin · Zehang Weng · Danica Kragic -
2021 : AutumnSynth: Synthesis of Reactive Programs with Structured Latent State »
Ria Das · Zenna Tavares · Josh Tenenbaum · Armando Solar-Lezama -
2021 : Noether Networks: Meta-Learning Useful Conserved Quantities »
Ferran Alet · Dylan Doblar · Allan Zhou · Josh Tenenbaum · Kenji Kawaguchi · Chelsea Finn -
2021 : Synthesis of Reactive Programs with Structured Latent State »
Ria Das · Zenna Tavares · Armando Solar-Lezama · Josh Tenenbaum -
2021 : Towards Incorporating Rich Social Interactions Into MDPs »
Ravi Tejwani · Yen-Ling Kuo · Tianmin Shu · Bennett Stankovits · Dan Gutfreund · Josh Tenenbaum · Boris Katz · Andrei Barbu -
2021 : Learning to solve complex tasks by growing knowledge culturally across generations »
Michael Tessler · Jason Madeano · Pedro Tsividis · Noah Goodman · Josh Tenenbaum -
2022 Poster: Learning Physical Dynamics with Subequivariant Graph Neural Networks »
Jiaqi Han · Wenbing Huang · Hengbo Ma · Jiachen Li · Josh Tenenbaum · Chuang Gan -
2022 : Planning with Large Language Models for Code Generation »
Shun Zhang · Zhenfang Chen · Yikang Shen · Mingyu Ding · Josh Tenenbaum · Chuang Gan -
2022 : Is Conditional Generative Modeling all you need for Decision-Making? »
Anurag Ajay · Yilun Du · Abhi Gupta · Josh Tenenbaum · Tommi Jaakkola · Pulkit Agrawal -
2022 : Fifteen-minute Competition Overview Video »
Tianpei Yang · Iuliia Kotseruba · Montgomery Alban · Amir Rasouli · Soheil Mohamad Alizadeh Shabestary · Randolph Goebel · Matthew Taylor · Liam Paull · Florian Shkurti -
2023 Poster: What’s Left: Concept Grounding with Large Language Models »
Joy Hsu · Jiayuan Mao · Josh Tenenbaum · Jiajun Wu -
2023 Poster: Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision »
Ayush Tewari · Tianwei Yin · George Cazenavette · Semon Rezchikov · Josh Tenenbaum · Fredo Durand · Bill Freeman · Vincent Sitzmann -
2023 Poster: Inferring the Future by Imagining the Past »
Kartik Chandra · Tony Chen · Tzu-Mao Li · Jonathan Ragan-Kelley · Josh Tenenbaum -
2023 Poster: What Planning Problems Can A Relational Neural Network Solve? »
Jiayuan Mao · Tomás Lozano-Pérez · Josh Tenenbaum · Leslie Kaelbling -
2023 Poster: Hierarchical Planning with Foundation Models »
Anurag Ajay · Seungwook Han · Yilun Du · Shuang Li · Abhi Gupta · Tommi Jaakkola · Josh Tenenbaum · Leslie Kaelbling · Akash Srivastava · Pulkit Agrawal -
2023 Poster: Learning Universal Policies via Text-Guided Video Generation »
Yilun Du · Mengjiao (Sherry) Yang · Bo Dai · Hanjun Dai · Ofir Nachum · Josh Tenenbaum · Dale Schuurmans · Pieter Abbeel -
2023 Poster: Spatiotemporal sequence learning as probabilistic program induction »
Tracey Mills · Samuel Cheyette · Josh Tenenbaum -
2023 Poster: 3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive Physics under Challenging Scenes »
Haotian Xue · Antonio Torralba · Josh Tenenbaum · Dan Yamins · Yunzhu Li · Hsiao-Yu Tung -
2023 Poster: DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models »
Tsun-Hsuan Johnson Wang · Juntian Zheng · Pingchuan Ma · Yilun Du · Byungchul Kim · Andrew Spielberg · Josh Tenenbaum · Chuang Gan · Daniela Rus -
2023 Poster: Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties »
Hsiao-Yu Tung · Mingyu Ding · Zhenfang Chen · Daniel Bear · Chuang Gan · Josh Tenenbaum · Dan Yamins · Judith Fan · Kevin Smith -
2023 Oral: DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models »
Tsun-Hsuan Johnson Wang · Juntian Zheng · Pingchuan Ma · Yilun Du · Byungchul Kim · Andrew Spielberg · Josh Tenenbaum · Chuang Gan · Daniela Rus -
2023 Workshop: Foundation Models for Decision Making »
Mengjiao (Sherry) Yang · Ofir Nachum · Yilun Du · Stephen McAleer · Igor Mordatch · Linxi Fan · Jeannette Bohg · Dale Schuurmans -
2022 Spotlight: Learning Physical Dynamics with Subequivariant Graph Neural Networks »
Jiaqi Han · Wenbing Huang · Hengbo Ma · Jiachen Li · Josh Tenenbaum · Chuang Gan -
2022 Spotlight: Lightning Talks 4B-1 »
Alexandra Senderovich · Zhijie Deng · Navid Ansari · Xuefei Ning · Yasmin Salehi · Xiang Huang · Chenyang Wu · Kelsey Allen · Jiaqi Han · Nikita Balagansky · Tatiana Lopez-Guevara · Tianci Li · Zhanhong Ye · Zixuan Zhou · Feng Zhou · Ekaterina Bulatova · Daniil Gavrilov · Wenbing Huang · Dennis Giannacopoulos · Hans-peter Seidel · Anton Obukhov · Kimberly Stachenfeld · Hongsheng Liu · Jun Zhu · Junbo Zhao · Hengbo Ma · Nima Vahidi Ferdowsi · Zongzhang Zhang · Vahid Babaei · Jiachen Li · Alvaro Sanchez Gonzalez · Yang Yu · Shi Ji · Maxim Rakhuba · Tianchen Zhao · Yiping Deng · Peter Battaglia · Josh Tenenbaum · Zidong Wang · Chuang Gan · Changcheng Tang · Jessica Hamrick · Kang Yang · Tobias Pfaff · Yang Li · Shuang Liang · Min Wang · Huazhong Yang · Haotian CHU · Yu Wang · Fan Yu · Bei Hua · Lei Chen · Bin Dong -
2022 Competition: Driving SMARTS »
Amir Rasouli · Matthew Taylor · Iuliia Kotseruba · Tianpei Yang · Randolph Goebel · Soheil Mohamad Alizadeh Shabestary · Montgomery Alban · Florian Shkurti · Liam Paull -
2022 Poster: 3D Concept Grounding on Neural Fields »
Yining Hong · Yilun Du · Chunru Lin · Josh Tenenbaum · Chuang Gan -
2022 Poster: PDSketch: Integrated Domain Programming, Learning, and Planning »
Jiayuan Mao · Tomás Lozano-Pérez · Josh Tenenbaum · Leslie Kaelbling -
2022 Poster: Drawing out of Distribution with Neuro-Symbolic Generative Models »
Yichao Liang · Josh Tenenbaum · Tuan Anh Le · Siddharth N -
2022 Poster: Learning Neural Acoustic Fields »
Andrew Luo · Yilun Du · Michael Tarr · Josh Tenenbaum · Antonio Torralba · Chuang Gan -
2022 Poster: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment »
Zhijing Jin · Sydney Levine · Fernando Gonzalez Adauto · Ojasv Kamal · Maarten Sap · Mrinmaya Sachan · Rada Mihalcea · Josh Tenenbaum · Bernhard Schölkopf -
2022 Poster: HandMeThat: Human-Robot Communication in Physical and Social Environments »
Yanming Wan · Jiayuan Mao · Josh Tenenbaum -
2022 Poster: Communicating Natural Programs to Humans and Machines »
Sam Acquaviva · Yewen Pu · Marta Kryven · Theodoros Sechopoulos · Catherine Wong · Gabrielle Ecanow · Maxwell Nye · Michael Tessler · Josh Tenenbaum -
2021 : Spotlight Talk: Learning to solve complex tasks by growing knowledge culturally across generations »
Noah Goodman · Josh Tenenbaum · Michael Tessler · Jason Madeano -
2021 : Efficient Partial Simulation Quantitatively Explains Deviations from Optimal Physical Predictions »
Ilona Bass · Kevin Smith · Elizabeth Bonawitz · Tomer Ullman -
2021 Workshop: 2nd Workshop on Self-Supervised Learning: Theory and Practice »
Pengtao Xie · Ishan Misra · Pulkit Agrawal · Abdelrahman Mohamed · Shentong Mo · Youwei Liang · Jeannette Bohg · Kristina N Toutanova -
2021 Poster: Learning to Compose Visual Relations »
Nan Liu · Shuang Li · Yilun Du · Josh Tenenbaum · Antonio Torralba -
2021 Poster: Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning »
Maxwell Nye · Michael Tessler · Josh Tenenbaum · Brenden Lake -
2021 Poster: Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language »
Mingyu Ding · Zhenfang Chen · Tao Du · Ping Luo · Josh Tenenbaum · Chuang Gan -
2021 Poster: Learning Signal-Agnostic Manifolds of Neural Fields »
Yilun Du · Katie Collins · Josh Tenenbaum · Vincent Sitzmann -
2021 Poster: Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering »
Vincent Sitzmann · Semon Rezchikov · Bill Freeman · Josh Tenenbaum · Fredo Durand -
2021 Poster: Grammar-Based Grounded Lexicon Learning »
Jiayuan Mao · Freda Shi · Jiajun Wu · Roger Levy · Josh Tenenbaum -
2021 Poster: Unsupervised Learning of Compositional Energy Concepts »
Yilun Du · Shuang Li · Yash Sharma · Josh Tenenbaum · Igor Mordatch -
2021 Poster: A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics »
Kai Xu · Akash Srivastava · Dan Gutfreund · Felix Sosa · Tomer Ullman · Josh Tenenbaum · Charles Sutton -
2021 Poster: PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning »
Yining Hong · Li Yi · Josh Tenenbaum · Antonio Torralba · Chuang Gan -
2021 Poster: Noether Networks: meta-learning useful conserved quantities »
Ferran Alet · Dylan Doblar · Allan Zhou · Josh Tenenbaum · Kenji Kawaguchi · Chelsea Finn -
2021 Poster: 3DP3: 3D Scene Perception via Probabilistic Programming »
Nishad Gothoskar · Marco Cusumano-Towner · Ben Zinberg · Matin Ghavamizadeh · Falk Pollok · Austin Garrett · Josh Tenenbaum · Dan Gutfreund · Vikash Mansinghka -
2020 : Jeannette Bohg - On the Role of Hierarchies for Learning Manipulation Skills »
Jeannette Bohg -
2020 Workshop: Workshop on Computer Assisted Programming (CAP) »
Augustus Odena · Charles Sutton · Nadia Polikarpova · Josh Tenenbaum · Armando Solar-Lezama · Isil Dillig -
2020 : Invited Talk: Growing into intelligence the human way: What do we start with, and how do we learn the rest? »
Josh Tenenbaum -
2020 : Discussion Panel »
Pete Florence · Dorsa Sadigh · Carolina Parada · Jeannette Bohg · Roberto Calandra · Peter Stone · Fabio Ramos -
2020 : Panel Discussion »
Jessica Hamrick · Klaus Greff · Michelle A. Lee · Irina Higgins · Josh Tenenbaum -
2020 Workshop: KR2ML - Knowledge Representation and Reasoning Meets Machine Learning »
Veronika Thost · Kartik Talamadupula · Vivek Srikumar · Chenwei Zhang · Josh Tenenbaum -
2020 Workshop: Differentiable computer vision, graphics, and physics in machine learning »
Krishna Murthy Jatavallabhula · Kelsey Allen · Victoria Dean · Johanna Hansen · Shuran Song · Florian Shkurti · Liam Paull · Derek Nowrouzezahrai · Josh Tenenbaum -
2020 : Opening remarks »
Krishna Murthy Jatavallabhula · Kelsey Allen · Johanna Hansen · Victoria Dean -
2020 Poster: Online Bayesian Goal Inference for Boundedly Rational Planning Agents »
Tan Zhi-Xuan · Jordyn Mann · Tom Silver · Josh Tenenbaum · Vikash Mansinghka -
2020 Poster: Program Synthesis with Pragmatic Communication »
Yewen Pu · Kevin Ellis · Marta Kryven · Josh Tenenbaum · Armando Solar-Lezama -
2020 Poster: Learning Compositional Rules via Neural Program Synthesis »
Maxwell Nye · Armando Solar-Lezama · Josh Tenenbaum · Brenden Lake -
2020 Poster: Learning abstract structure for drawing by efficient motor program induction »
Lucas Tian · Kevin Ellis · Marta Kryven · Josh Tenenbaum -
2020 Oral: Learning abstract structure for drawing by efficient motor program induction »
Lucas Tian · Kevin Ellis · Marta Kryven · Josh Tenenbaum -
2020 Poster: Multi-Plane Program Induction with 3D Box Priors »
Yikai Li · Jiayuan Mao · Xiuming Zhang · Bill Freeman · Josh Tenenbaum · Noah Snavely · Jiajun Wu -
2020 Poster: Learning Physical Graph Representations from Visual Scenes »
Daniel Bear · Chaofei Fan · Damian Mrowca · Yunzhu Li · Seth Alter · Aran Nayebi · Jeremy Schwartz · Li Fei-Fei · Jiajun Wu · Josh Tenenbaum · Daniel Yamins -
2020 Oral: Learning Physical Graph Representations from Visual Scenes »
Daniel Bear · Chaofei Fan · Damian Mrowca · Yunzhu Li · Seth Alter · Aran Nayebi · Jeremy Schwartz · Li Fei-Fei · Jiajun Wu · Josh Tenenbaum · Daniel Yamins -
2019 : Panel Discussion »
Linda Smith · Josh Tenenbaum · Lisa Anne Hendricks · James McClelland · Timothy Lillicrap · Jesse Thomason · Jason Baldridge · Louis-Philippe Morency -
2019 : Josh Tenenbaum »
Josh Tenenbaum -
2019 : Panel »
Sanja Fidler · Josh Tenenbaum · Tatiana López-Guevara · Danilo Jimenez Rezende · Niloy Mitra -
2019 : Poster Session »
Ethan Harris · Tom White · Oh Hyeon Choung · Takashi Shinozaki · Dipan Pal · Katherine L. Hermann · Judy Borowski · Camilo Fosco · Chaz Firestone · Vijay Veerabadran · Benjamin Lahner · Chaitanya Ryali · Fenil Doshi · Pulkit Singh · Sharon Zhou · Michel Besserve · Michael Chang · Anelise Newman · Mahesan Niranjan · Jonathon Hare · Daniela Mihai · Marios Savvides · Simon Kornblith · Christina M Funke · Aude Oliva · Virginia de Sa · Dmitry Krotov · Colin Conwell · George Alvarez · Alex Kolchinski · Shengjia Zhao · Mitchell Gordon · Michael Bernstein · Stefano Ermon · Arash Mehrjou · Bernhard Schölkopf · John Co-Reyes · Michael Janner · Jiajun Wu · Josh Tenenbaum · Sergey Levine · Yalda Mohsenzadeh · Zhenglong Zhou -
2019 : Josh Tenenbaum »
Josh Tenenbaum -
2019 Poster: Write, Execute, Assess: Program Synthesis with a REPL »
Kevin Ellis · Maxwell Nye · Yewen Pu · Felix Sosa · Josh Tenenbaum · Armando Solar-Lezama -
2019 Poster: ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models »
Andrei Barbu · David Mayo · Julian Alverio · William Luo · Christopher Wang · Dan Gutfreund · Josh Tenenbaum · Boris Katz -
2019 Poster: Modeling Expectation Violation in Intuitive Physics with Coarse Probabilistic Object Representations »
Kevin Smith · Lingjie Mei · Shunyu Yao · Jiajun Wu · Elizabeth Spelke · Josh Tenenbaum · Tomer Ullman -
2019 Poster: Visual Concept-Metaconcept Learning »
Chi Han · Jiayuan Mao · Chuang Gan · Josh Tenenbaum · Jiajun Wu -
2019 Poster: Finding Friend and Foe in Multi-Agent Games »
Jack Serrino · Max Kleiman-Weiner · David Parkes · Josh Tenenbaum -
2019 Spotlight: Finding Friend and Foe in Multi-Agent Games »
Jack Serrino · Max Kleiman-Weiner · David Parkes · Josh Tenenbaum -
2018 : Talk 7: Jeannette Bohg - On perceptual representations and how they interact with actions and physical models »
Jeannette Bohg -
2018 : Opening Remarks: Josh Tenenbaum »
Josh Tenenbaum -
2018 Workshop: Modeling the Physical World: Learning, Perception, and Control »
Jiajun Wu · Kelsey Allen · Kevin Smith · Jessica Hamrick · Emmanuel Dupoux · Marc Toussaint · Josh Tenenbaum -
2018 Poster: Learning to Reconstruct Shapes from Unseen Classes »
Xiuming Zhang · Zhoutong Zhang · Chengkai Zhang · Josh Tenenbaum · Bill Freeman · Jiajun Wu -
2018 Poster: Learning to Infer Graphics Programs from Hand-Drawn Images »
Kevin Ellis · Daniel Ritchie · Armando Solar-Lezama · Josh Tenenbaum -
2018 Poster: Learning Libraries of Subroutines for Neurally–Guided Bayesian Program Induction »
Kevin Ellis · Lucas Morales · Mathias Sablé-Meyer · Armando Solar-Lezama · Josh Tenenbaum -
2018 Oral: Learning to Reconstruct Shapes from Unseen Classes »
Xiuming Zhang · Zhoutong Zhang · Chengkai Zhang · Josh Tenenbaum · Bill Freeman · Jiajun Wu -
2018 Spotlight: Learning to Infer Graphics Programs from Hand-Drawn Images »
Kevin Ellis · Daniel Ritchie · Armando Solar-Lezama · Josh Tenenbaum -
2018 Spotlight: Learning Libraries of Subroutines for Neurally–Guided Bayesian Program Induction »
Kevin Ellis · Lucas Morales · Mathias Sablé-Meyer · Armando Solar-Lezama · Josh Tenenbaum -
2018 Poster: Visual Object Networks: Image Generation with Disentangled 3D Representations »
Jun-Yan Zhu · Zhoutong Zhang · Chengkai Zhang · Jiajun Wu · Antonio Torralba · Josh Tenenbaum · Bill Freeman -
2018 Poster: Learning to Share and Hide Intentions using Information Regularization »
DJ Strouse · Max Kleiman-Weiner · Josh Tenenbaum · Matt Botvinick · David Schwab -
2018 Poster: Learning to Exploit Stability for 3D Scene Parsing »
Yilun Du · Zhijian Liu · Hector Basevi · Ales Leonardis · Bill Freeman · Josh Tenenbaum · Jiajun Wu -
2018 Poster: End-to-End Differentiable Physics for Learning and Control »
Filipe de Avila Belbute Peres · Kevin Smith · Kelsey Allen · Josh Tenenbaum · J. Zico Kolter -
2018 Spotlight: End-to-End Differentiable Physics for Learning and Control »
Filipe de Avila Belbute Peres · Kevin Smith · Kelsey Allen · Josh Tenenbaum · J. Zico Kolter -
2018 Poster: Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding »
Kexin Yi · Jiajun Wu · Chuang Gan · Antonio Torralba · Pushmeet Kohli · Josh Tenenbaum -
2018 Poster: 3D-Aware Scene Manipulation via Inverse Graphics »
Shunyu Yao · Tzu Ming Hsu · Jun-Yan Zhu · Jiajun Wu · Antonio Torralba · Bill Freeman · Josh Tenenbaum -
2018 Spotlight: Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding »
Kexin Yi · Jiajun Wu · Chuang Gan · Antonio Torralba · Pushmeet Kohli · Josh Tenenbaum -
2018 Poster: Flexible neural representation for physics prediction »
Damian Mrowca · Chengxu Zhuang · Elias Wang · Nick Haber · Li Fei-Fei · Josh Tenenbaum · Daniel Yamins -
2017 : Panel Discussion »
Matt Botvinick · Emma Brunskill · Marcos Campos · Jan Peters · Doina Precup · David Silver · Josh Tenenbaum · Roy Fox -
2017 : Learn to learn high-dimensional models from few examples »
Josh Tenenbaum -
2017 : Welcome: Josh Tenenbaum »
Josh Tenenbaum -
2017 Workshop: Learning Disentangled Features: from Perception to Control »
Emily Denton · Siddharth Narayanaswamy · Tejas Kulkarni · Honglak Lee · Diane Bouchacourt · Josh Tenenbaum · David Pfau -
2017 : Panel: "How can we characterise the landscape of intelligent systems and locate human-like intelligence in it?" »
Josh Tenenbaum · Gary Marcus · Katja Hofmann -
2017 : Joshua Tenenbaum: 'Types of intelligence: why human-like AI is important' »
Josh Tenenbaum -
2017 Spotlight: Self-supervised Learning of Motion Capture »
Hsiao-Yu Tung · Hsiao-Wei Tung · Ersin Yumer · Katerina Fragkiadaki -
2017 Spotlight: Shape and Material from Sound »
Zhoutong Zhang · Qiujia Li · Zhengjia Huang · Jiajun Wu · Josh Tenenbaum · Bill Freeman -
2017 Spotlight: Scene Physics Acquisition via Visual De-animation »
Jiajun Wu · Erika Lu · Pushmeet Kohli · Bill Freeman · Josh Tenenbaum -
2017 Poster: Learning to See Physics via Visual De-animation »
Jiajun Wu · Erika Lu · Pushmeet Kohli · Bill Freeman · Josh Tenenbaum -
2017 Poster: Shape and Material from Sound »
Zhoutong Zhang · Qiujia Li · Zhengjia Huang · Jiajun Wu · Josh Tenenbaum · Bill Freeman -
2017 Poster: Self-supervised Learning of Motion Capture »
Hsiao-Yu Tung · Hsiao-Wei Tung · Ersin Yumer · Katerina Fragkiadaki -
2017 Poster: MarrNet: 3D Shape Reconstruction via 2.5D Sketches »
Jiajun Wu · Yifan Wang · Tianfan Xue · Xingyuan Sun · Bill Freeman · Josh Tenenbaum -
2017 Poster: Self-Supervised Intrinsic Image Decomposition »
Michael Janner · Jiajun Wu · Tejas Kulkarni · Ilker Yildirim · Josh Tenenbaum -
2017 Tutorial: Engineering and Reverse-Engineering Intelligence Using Probabilistic Programs, Program Induction, and Deep Learning »
Josh Tenenbaum · Vikash Mansinghka -
2016 : Datasets, Methodology, and Challenges in Intuitive Physics »
Emmanuel Dupoux · Josh Tenenbaum -
2016 : Josh Tenenbaum »
Josh Tenenbaum -
2016 : Reverse engineering human cooperation (or, How to build machines that treat people like people) »
Josh Tenenbaum · Max Kleiman-Weiner -
2016 : Naive Physics 101: A Tutorial »
Emmanuel Dupoux · Josh Tenenbaum -
2016 : Opening Remarks »
Josh Tenenbaum -
2016 Workshop: Intuitive Physics »
Adam Lerer · Jiajun Wu · Josh Tenenbaum · Emmanuel Dupoux · Rob Fergus -
2016 Poster: Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation »
Tejas Kulkarni · Karthik Narasimhan · Ardavan Saeedi · Josh Tenenbaum -
2016 Poster: Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling »
Jiajun Wu · Chengkai Zhang · Tianfan Xue · Bill Freeman · Josh Tenenbaum -
2016 Poster: Sampling for Bayesian Program Learning »
Kevin Ellis · Armando Solar-Lezama · Josh Tenenbaum -
2016 Poster: Probing the Compositionality of Intuitive Functions »
Eric Schulz · Josh Tenenbaum · David Duvenaud · Maarten Speekenbrink · Samuel J Gershman -
2015 Workshop: Black box learning and inference »
Josh Tenenbaum · Jan-Willem van de Meent · Tejas Kulkarni · S. M. Ali Eslami · Brooks Paige · Frank Wood · Zoubin Ghahramani -
2015 : Discussion Panel with Morning Speakers (Day 1) »
Pedro Domingos · Stephen H Muggleton · Rina Dechter · Josh Tenenbaum -
2015 : Cognitive Foundations for Common-Sense Knowledge Representation and Reasoning »
Josh Tenenbaum -
2015 Poster: Softstar: Heuristic-Guided Probabilistic Inference »
Mathew Monfort · Brenden M Lake · Brenden Lake · Brian Ziebart · Patrick Lucey · Josh Tenenbaum -
2015 Poster: Deep Convolutional Inverse Graphics Network »
Tejas Kulkarni · William Whitney · Pushmeet Kohli · Josh Tenenbaum -
2015 Spotlight: Deep Convolutional Inverse Graphics Network »
Tejas Kulkarni · William Whitney · Pushmeet Kohli · Josh Tenenbaum -
2015 Poster: Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning »
Jiajun Wu · Ilker Yildirim · Joseph Lim · Bill Freeman · Josh Tenenbaum -
2015 Poster: Unsupervised Learning by Program Synthesis »
Kevin Ellis · Armando Solar-Lezama · Josh Tenenbaum -
2014 Workshop: 3rd NIPS Workshop on Probabilistic Programming »
Daniel Roy · Josh Tenenbaum · Thomas Dietterich · Stuart J Russell · YI WU · Ulrik R Beierholm · Alp Kucukelbir · Zenna Tavares · Yura Perov · Daniel Lee · Brian Ruttenberg · Sameer Singh · Michael Hughes · Marco Gaboardi · Alexey Radul · Vikash Mansinghka · Frank Wood · Sebastian Riedel · Prakash Panangaden -
2014 Poster: Spectral Methods for Indian Buffet Process Inference »
Hsiao-Yu Tung · Alexander Smola -
2013 Workshop: Deep Learning »
Yoshua Bengio · Hugo Larochelle · Russ Salakhutdinov · Tomas Mikolov · Matthew D Zeiler · David Mcallester · Nando de Freitas · Josh Tenenbaum · Jian Zhou · Volodymyr Mnih -
2013 Poster: One-shot learning by inverting a compositional causal process »
Brenden M Lake · Russ Salakhutdinov · Josh Tenenbaum -
2013 Poster: Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs »
Vikash Mansinghka · Tejas D Kulkarni · Yura N Perov · Josh Tenenbaum -
2013 Oral: Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs »
Vikash Mansinghka · Tejas D Kulkarni · Yura N Perov · Josh Tenenbaum -
2011 Workshop: Challenges in Learning Hierarchical Models: Transfer Learning and Optimization »
Quoc V. Le · Marc'Aurelio Ranzato · Russ Salakhutdinov · Josh Tenenbaum · Andrew Y Ng -
2011 Poster: Learning to Learn with Compound HD Models »
Russ Salakhutdinov · Josh Tenenbaum · Antonio Torralba -
2011 Spotlight: Learning to Learn with Compound HD Models »
Russ Salakhutdinov · Josh Tenenbaum · Antonio Torralba -
2010 Workshop: Transfer Learning Via Rich Generative Models. »
Russ Salakhutdinov · Ryan Adams · Josh Tenenbaum · Zoubin Ghahramani · Tom Griffiths -
2010 Invited Talk: How to Grow a Mind: Statistics, Structure and Abstraction »
Josh Tenenbaum -
2010 Poster: Dynamic Infinite Relational Model for Time-varying Relational Data Analysis »
Katsuhiko Ishiguro · Tomoharu Iwata · Naonori Ueda · Josh Tenenbaum -
2010 Poster: Nonparametric Bayesian Policy Priors for Reinforcement Learning »
Finale P Doshi-Velez · David Wingate · Nicholas Roy · Josh Tenenbaum -
2009 Workshop: Bounded-rational analyses of human cognition: Bayesian models, approximate inference, and the brain »
Noah Goodman · Edward Vul · Tom Griffiths · Josh Tenenbaum -
2009 Workshop: Analyzing Networks and Learning With Graphs »
Edo M Airoldi · Jure Leskovec · Jon Kleinberg · Josh Tenenbaum -
2009 Poster: Perceptual Multistability as Markov Chain Monte Carlo Inference »
Samuel J Gershman · Edward Vul · Josh Tenenbaum -
2009 Poster: Help or Hinder: Bayesian Models of Social Goal Inference »
Tomer D Ullman · Chris L Baker · Owen Macindoe · Owain Evans · Noah Goodman · Josh Tenenbaum -
2009 Spotlight: Perceptual Multistability as Markov Chain Monte Carlo Inference »
Samuel J Gershman · Edward Vul · Josh Tenenbaum -
2009 Poster: Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model »
Edward Vul · Michael C Frank · George Alvarez · Josh Tenenbaum -
2009 Oral: Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model »
Edward Vul · Michael C Frank · George Alvarez · Josh Tenenbaum -
2009 Poster: Modelling Relational Data using Bayesian Clustered Tensor Factorization »
Ilya Sutskever · Russ Salakhutdinov · Josh Tenenbaum -
2008 Workshop: Probabilistic Programming: Universal Languages, Systems and Applications »
Daniel Roy · John Winn · David A McAllester · Vikash Mansinghka · Josh Tenenbaum -
2008 Workshop: Machine learning meets human learning »
Nathaniel D Daw · Tom Griffiths · Josh Tenenbaum · Jerry Zhu -
2007 Workshop: The Grammar of Vision: Probabilistic Grammar-Based Models for Visual Scene Understanding and Object Categorization »
Virginia Savova · Josh Tenenbaum · Leslie Kaelbling · Alan Yuille -
2007 Spotlight: A Bayesian Framework for Cross-Situational Word-Learning »
Michael C Frank · Noah Goodman · Josh Tenenbaum -
2007 Poster: A Bayesian Framework for Cross-Situational Word-Learning »
Michael C Frank · Noah Goodman · Josh Tenenbaum -
2007 Poster: A complexity measure for intuitive theories »
Charles Kemp · Noah Goodman · Josh Tenenbaum -
2006 Poster: Combining causal and similarity-based reasoning »
Charles Kemp · Patrick Shafto · Allison Berke · Josh Tenenbaum -
2006 Poster: Multiple timescales and uncertainty in motor adaptation »
Konrad P Kording · Josh Tenenbaum · Reza Shadmehr -
2006 Poster: Learning annotated hierarchies from relational data »
Daniel Roy · Charles Kemp · Vikash Mansinghka · Josh Tenenbaum -
2006 Talk: Learning annotated hierarchies from relational data »
Daniel Roy · Charles Kemp · Vikash Mansinghka · Josh Tenenbaum -
2006 Spotlight: Multiple timescales and uncertainty in motor adaptation »
Konrad P Kording · Josh Tenenbaum · Reza Shadmehr -
2006 Talk: Combining causal and similarity-based reasoning »
Charles Kemp · Patrick Shafto · Allison Berke · Josh Tenenbaum -
2006 Poster: Causal inference in sensorimotor integration »
Konrad P Kording · Josh Tenenbaum -
2006 Tutorial: Bayesian Models of Human Learning and Inference »
Josh Tenenbaum