Research on Inverse Reinforcement Learning (IRL) from third-person videos has shown encouraging results in removing the need for manual reward design for robotic tasks. However, most prior works are still limited to training on videos from a relatively restricted domain. In this paper, we argue that the true potential of third-person IRL lies in increasing the diversity of videos for better scaling. To learn a reward function from diverse videos, we propose to perform graph abstraction on the videos followed by temporal matching in graph space to measure task progress. Our insight is that a task can be described by entity interactions that form a graph, and this graph abstraction can help remove irrelevant information such as textures, resulting in more robust reward functions. We evaluate our approach, GraphIRL, on cross-embodiment learning in X-MAGICAL and learning from human demonstrations for real-robot manipulation. We show significant improvements in robustness to diverse video demonstrations over previous approaches, and even achieve better results than manual reward design on a real-robot pushing task. Videos are available at https://graphirl.github.io/.
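To make the two ideas in the abstract concrete, here is a minimal sketch, not the authors' implementation: a frame is abstracted into a graph over detected entities, the graph is embedded with a small permutation-invariant encoder, and temporal matching against a demonstration's embeddings yields a dense progress reward. All names below (`GraphEncoder`, `progress_reward`) and the fully connected graph construction are illustrative assumptions; GraphIRL's actual detector, graph construction, and matching objective are specified in the paper.

```python
# Illustrative sketch of the GraphIRL idea, NOT the authors' code.
# Assumes an upstream detector already provides per-frame entity features.
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """Embed a frame's entity-interaction graph into a single vector."""
    def __init__(self, node_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(node_dim, hidden_dim), nn.ReLU())
        self.edge_mlp = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (num_entities, node_dim) features for one frame,
        # e.g. object positions from an off-the-shelf detector.
        h = self.node_mlp(nodes)
        n = h.shape[0]
        # Fully connected interaction graph: one message per ordered pair.
        src = h.unsqueeze(1).expand(n, n, -1)
        dst = h.unsqueeze(0).expand(n, n, -1)
        messages = self.edge_mlp(torch.cat([src, dst], dim=-1))
        # Permutation-invariant readout: textures and entity order drop out,
        # only the interaction structure remains.
        return messages.mean(dim=(0, 1))

def progress_reward(frame_emb: torch.Tensor, demo_embs: torch.Tensor) -> torch.Tensor:
    """Temporal matching in graph space: reward is the normalized index of
    the nearest demonstration frame, a proxy for task progress in [0, 1]."""
    dists = torch.cdist(frame_emb.unsqueeze(0), demo_embs).squeeze(0)
    return torch.argmin(dists).float() / max(demo_embs.shape[0] - 1, 1)

# Usage: embed every frame of a demonstration once, then reward each policy
# observation by how far along the demonstration it matches.
enc = GraphEncoder(node_dim=4)
demo_embs = torch.stack([enc(torch.randn(3, 4)) for _ in range(20)])
r = progress_reward(enc(torch.randn(3, 4)), demo_embs)
```

The point this sketch illustrates is the design choice the abstract argues for: the reward is computed from distances in an abstract graph-embedding space rather than in pixel space, which is what makes it insensitive to textures and, in principle, to embodiment.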
Author Information
Sateesh Kumar (University of California, San Diego)
Jonathan Zamora (University of California, San Diego)
Nicklas Hansen (UC San Diego)
Rishabh Jangir (UC San Diego)
Xiaolong Wang (UC San Diego)
More from the Same Authors
- 2021: Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers
  Ruihan Yang · Minghao Zhang · Nicklas Hansen · Huazhe Xu · Xiaolong Wang
- 2021: Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation
  Rishabh Jangir · Nicklas Hansen · Mohit Jain · Xiaolong Wang
- 2021: Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization
  Chieko Imai · Minghao Zhang · Ruihan Yang · Yuzhe Qin · Xiaolong Wang
- 2022: On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning
  Yifan Xu · Nicklas Hansen · Zirui Wang · Yung-Chieh Chan · Hao Su · Zhuowen Tu
- 2022: Category-Level 6D Object Pose Estimation in the Wild: A Semi-Supervised Learning Approach and A New Dataset
  Yang Fu · Xiaolong Wang
- 2022: Generalizable Point Cloud Reinforcement Learning for Sim-to-Real Dexterous Manipulation
  Yuzhe Qin · Binghao Huang · Zhao-Heng Yin · Hao Su · Xiaolong Wang
- 2022: Visual Reinforcement Learning with Self-Supervised 3D Representations
  Yanjie Ze · Nicklas Hansen · Yinbo Chen · Mohit Jain · Xiaolong Wang
- 2022: MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations
  Nicklas Hansen · Yixin Lin · Hao Su · Xiaolong Wang · Vikash Kumar · Aravind Rajeswaran
- 2023 Poster: H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation
  Yanjie Ze · Yuyao Liu · Ruizhe Shi · Jiaxin Qin · Zhecheng Yuan · Jiashun Wang · Xiaolong Wang · Huazhe Xu
- 2023 Poster: Elastic Decision Transformer
  Yueh-Hua Wu · Xiaolong Wang · Masashi Hamaya
- 2023 Poster: RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization
  Zhecheng Yuan · Sizhe Yang · Pu Hua · Can Chang · Kaizhe Hu · Xiaolong Wang · Huazhe Xu
- 2022 Workshop: Self-Supervised Learning: Theory and Practice
  Ishan Misra · Pengtao Xie · Gul Varol · Yale Song · Yuki Asano · Xiaolong Wang · Pauline Luc
- 2022 Poster: Category-Level 6D Object Pose Estimation in the Wild: A Semi-Supervised Learning Approach and A New Dataset
  Yang Fu · Xiaolong Wang
- 2021 Poster: Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2020 Poster: Online Adaptation for Consistent Mesh Reconstruction in the Wild
  Xueting Li · Sifei Liu · Shalini De Mello · Kihwan Kim · Xiaolong Wang · Ming-Hsuan Yang · Jan Kautz
- 2020 Poster: Multi-Task Reinforcement Learning with Soft Modularization
  Ruihan Yang · Huazhe Xu · Yi Wu · Xiaolong Wang