Implicit Behavioral Cloning »
We find that across a wide range of robot policy learning scenarios, treating supervised policy learning with an implicit model generally performs better, on average, than commonly used explicit models. We present extensive experiments on this finding, and we provide both intuitive insight and theoretical arguments distinguishing the properties of implicit models compared to their explicit counterparts, particularly with respect to approximating complex, potentially discontinuous and multi-valued (set-valued) functions. On robotic policy learning tasks we show that implicit behavioral cloning policies with energy-based models (EBM) often outperform common explicit (Mean Square Error, or Mixture Density) counterparts, including on tasks with high-dimensional action spaces and visual image inputs. We find these policies provide competitive results or outperform state-of-the-art offline reinforcement learning methods on the challenging human-expert tasks from the D4RL benchmark suite, despite using no reward information. In the real world, robots with implicit policies can learn complex and remarkably subtle behaviors on contact-rich tasks from human demonstrations, including tasks with high combinatorial complexity and tasks requiring 1mm precision.
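The abstract's central contrast is between explicit regressors, which average across discontinuous or multi-valued targets, and implicit models, which predict by minimizing an energy over candidate outputs. The toy sketch below illustrates that distinction on a 1-D step function; the kernel-based energy and grid argmin are illustrative assumptions for exposition, not the paper's trained EBM or its sampling procedure.

```python
import numpy as np

# Discontinuous target: f(x) = 0 for x < 0.5, 1 otherwise.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 256)
ys = (xs >= 0.5).astype(float)

def explicit_predict(xq, bandwidth=0.1):
    """Kernel-weighted mean of training targets, mimicking an MSE regressor."""
    w = np.exp(-((xs - xq) ** 2) / bandwidth ** 2)
    return float(np.sum(w * ys) / np.sum(w))

def energy(xq, yq, bandwidth=0.1):
    """Illustrative energy: low near observed (x, y) training pairs."""
    return float(np.min((xs - xq) ** 2 / bandwidth ** 2 + (ys - yq) ** 2))

def implicit_predict(xq):
    """Predict via argmin of the energy over a grid of candidate outputs."""
    candidates = np.linspace(0.0, 1.0, 101)
    e = [energy(xq, c) for c in candidates]
    return float(candidates[int(np.argmin(e))])

# Near the step, the explicit model blends the two branches toward 0.5,
# while the implicit argmin returns a value that actually occurs in the data.
```

Away from the discontinuity both models agree; near it, only the implicit prediction stays on one of the valid branches, which is the intuition behind the abstract's claim about discontinuous, set-valued functions.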
Author Information
Pete Florence (Robotics at Google)
Corey Lynch (Google Brain)
Andy Zeng (Google)
Oscar Ramirez (Google Brain)
Ayzaan Wahid (Google)
Laura Downs (Google)
Adrian Wong (Google)
Igor Mordatch (Research, Google)
Jonathan Tompson (Google Brain)
More from the Same Authors
-
2021 : Implicit Behavioral Cloning »
Pete Florence · Corey Lynch · Andy Zeng · Oscar Ramirez · Ayzaan Wahid · Laura Downs · Adrian Wong · Igor Mordatch · Jonathan Tompson -
2021 : Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions »
Bogdan Mazoure · Ilya Kostrikov · Ofir Nachum · Jonathan Tompson -
2022 : Multi-Environment Pretraining Enables Transfer to Action Limited Datasets »
David Venuto · Mengjiao (Sherry) Yang · Pieter Abbeel · Doina Precup · Igor Mordatch · Ofir Nachum -
2022 : Skill Acquisition by Instruction Augmentation on Offline Datasets »
Ted Xiao · Harris Chan · Pierre Sermanet · Ayzaan Wahid · Anthony Brohan · Karol Hausman · Sergey Levine · Jonathan Tompson -
2022 : Interactive Language: Talking to Robots in Real Time »
Corey Lynch · Pete Florence · Jonathan Tompson · Ayzaan Wahid · Tianli Ding · James Betker · Robert Baruch · Travis Armstrong -
2022 : Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models »
Ted Xiao · Harris Chan · Pierre Sermanet · Ayzaan Wahid · Anthony Brohan · Karol Hausman · Sergey Levine · Jonathan Tompson -
2022 : Contrastive Value Learning: Implicit Models for Simple Offline RL »
Bogdan Mazoure · Benjamin Eysenbach · Ofir Nachum · Jonathan Tompson -
2022 : Implicit Offline Reinforcement Learning via Supervised Learning »
Alexandre Piche · Rafael Pardinas · David Vazquez · Igor Mordatch · Chris Pal -
2022 : Panel: Scaling & Models (Q&A 2) »
Andy Zeng · Haoran Tang · Karol Hausman · Jackie Kay · Gabriel Barth-Maron -
2022 Workshop: 5th Robot Learning Workshop: Trustworthy Robotics »
Alex Bewley · Roberto Calandra · Anca Dragan · Igor Gilitschenski · Emily Hannigan · Masha Itkina · Hamidreza Kasaei · Jens Kober · Danica Kragic · Nathan Lambert · Julien PEREZ · Fabio Ramos · Ransalu Senanayake · Jonathan Tompson · Vincent Vanhoucke · Markus Wulfmeier -
2022 : Speaker Andy Zeng »
Andy Zeng -
2022 Poster: Improving Zero-Shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions »
Bogdan Mazoure · Ilya Kostrikov · Ofir Nachum · Jonathan Tompson -
2020 : Discussion Panel »
Pete Florence · Dorsa Sadigh · Carolina Parada · Jeannette Bohg · Roberto Calandra · Peter Stone · Fabio Ramos -
2020 : Invited Talk - "Object- and Action-Centric Representational Robot Learning" »
Pete Florence · Daniel Seita -
2019 : Poster Presentations »
Rahul Mehta · Andrew Lampinen · Binghong Chen · Sergio Pascual-Diaz · Jordi Grau-Moya · Aldo Faisal · Jonathan Tompson · Yiren Lu · Khimya Khetarpal · Martin Klissarov · Pierre-Luc Bacon · Doina Precup · Thanard Kurutach · Aviv Tamar · Pieter Abbeel · Jinke He · Maximilian Igl · Shimon Whiteson · Wendelin Boehmer · Raphaël Marinier · Olivier Pietquin · Karol Hausman · Sergey Levine · Chelsea Finn · Tianhe Yu · Lisa Lee · Benjamin Eysenbach · Emilio Parisotto · Eric Xing · Ruslan Salakhutdinov · Hongyu Ren · Anima Anandkumar · Deepak Pathak · Christopher Lu · Trevor Darrell · Alexei Efros · Phillip Isola · Feng Liu · Bo Han · Gang Niu · Masashi Sugiyama · Saurabh Kumar · Janith Petangoda · Johan Ferret · James McClelland · Kara Liu · Animesh Garg · Robert Lange -
2019 Poster: Wasserstein Dependency Measure for Representation Learning »
Sherjil Ozair · Corey Lynch · Yoshua Bengio · Aaron van den Oord · Sergey Levine · Pierre Sermanet -
2018 Poster: Discovery of Latent 3D Keypoints via End-to-end Geometric Reasoning »
Supasorn Suwajanakorn · Noah Snavely · Jonathan Tompson · Mohammad Norouzi -
2018 Oral: Discovery of Latent 3D Keypoints via End-to-end Geometric Reasoning »
Supasorn Suwajanakorn · Noah Snavely · Jonathan Tompson · Mohammad Norouzi