Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation
Rishabh Jangir · Nicklas Hansen · Xiaolong Wang
Event URL: https://openreview.net/forum?id=EwfMxc21cfc
Learning to solve precision-based manipulation tasks from visual feedback using Reinforcement Learning (RL) could drastically reduce the engineering efforts required by traditional robot systems. However, performing fine-grained motor control from visual inputs alone is challenging, especially with a static third-person camera as often used in previous work. We propose a setting for robotic manipulation in which the agent receives visual feedback from both a third-person camera and an egocentric camera mounted on the robot's wrist. While the third-person camera is static, the egocentric camera enables the robot to actively control its vision to aid in precise manipulation. To fuse visual information from both cameras effectively, we additionally propose to use Transformers with a cross-view attention mechanism that models spatial attention from one view to another (and vice-versa), and use the learned features as input to an RL policy. Our method improves learning over strong single-view and multi-view baselines, and successfully transfers to a set of challenging manipulation tasks on a real robot with uncalibrated cameras, no access to state information, and a high degree of task variability. In a hammer manipulation task, our method succeeds in $75\%$ of trials versus $38\%$ and $13\%$ for multi-view and single-view baselines, respectively.
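The core architectural idea in the abstract is a cross-view attention mechanism that lets spatial features from each camera attend to the other view before the fused features are passed to an RL policy. The following is a minimal sketch of that idea, assuming PyTorch; the module name `CrossViewAttention`, the tensor shapes, and the pooling/fusion details are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of cross-view attention between two camera feature maps.
# Assumes PyTorch; shapes and fusion details are hypothetical, not the paper's exact code.
import torch
import torch.nn as nn


class CrossViewAttention(nn.Module):
    """Attend from third-person tokens to egocentric tokens and vice-versa."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # One attention block per direction of cross-view attention.
        self.attn_3rd_to_ego = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_ego_to_3rd = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_3rd: torch.Tensor, feat_ego: torch.Tensor) -> torch.Tensor:
        # feat_3rd, feat_ego: (batch, num_patches, dim) patch embeddings from each camera.
        fused_3rd, _ = self.attn_3rd_to_ego(feat_3rd, feat_ego, feat_ego)
        fused_ego, _ = self.attn_ego_to_3rd(feat_ego, feat_3rd, feat_3rd)
        # Residual connections preserve each view's own information.
        fused_3rd = self.norm(feat_3rd + fused_3rd)
        fused_ego = self.norm(feat_ego + fused_ego)
        # Pool over patches and concatenate; this vector could feed an RL policy.
        return torch.cat([fused_3rd.mean(dim=1), fused_ego.mean(dim=1)], dim=-1)


# Example usage with random features standing in for CNN/ViT patch embeddings.
batch, patches, dim = 8, 49, 128
fusion = CrossViewAttention(dim=dim)
policy_input = fusion(torch.randn(batch, patches, dim), torch.randn(batch, patches, dim))
print(policy_input.shape)  # torch.Size([8, 256])
```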
Author Information
Rishabh Jangir
Nicklas Hansen (UC San Diego)
Xiaolong Wang (UC San Diego)
More from the Same Authors
- 2021: From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation
  Yuzhe Qin · Hao Su · Xiaolong Wang
- 2021: Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization
  Minghao Zhang · Ruihan Yang · Yuzhe Qin · Xiaolong Wang
- 2021: Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers
  Ruihan Yang · Minghao Zhang · Nicklas Hansen · Huazhe Xu · Xiaolong Wang
- 2021: Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation
  Rishabh Jangir · Nicklas Hansen · Mohit Jain · Xiaolong Wang
- 2022: On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning
  Yifan Xu · Nicklas Hansen · Zirui Wang · Yung-Chieh Chan · Hao Su · Zhuowen Tu
- 2022: Visual Reinforcement Learning with Self-Supervised 3D Representations
  Yanjie Ze · Nicklas Hansen · Yinbo Chen · Mohit Jain · Xiaolong Wang
- 2022: MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations
  Nicklas Hansen · Yixin Lin · Hao Su · Xiaolong Wang · Vikash Kumar · Aravind Rajeswaran
- 2022: Graph Inverse Reinforcement Learning from Diverse Videos
  Sateesh Kumar · Jonathan Zamora · Nicklas Hansen · Rishabh Jangir · Xiaolong Wang
- 2021 Poster: Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2021 Poster: Multi-Person 3D Motion Prediction with Multi-Range Transformers
  Jiashun Wang · Huazhe Xu · Medhini Narasimhan · Xiaolong Wang
- 2021 Poster: NovelD: A Simple yet Effective Exploration Criterion
  Tianjun Zhang · Huazhe Xu · Xiaolong Wang · Yi Wu · Kurt Keutzer · Joseph Gonzalez · Yuandong Tian
- 2021 Poster: Test-Time Personalization with a Transformer for Human Pose Estimation
  Yizhuo Li · Miao Hao · Zonglin Di · Nitesh Bharadwaj Gundavarapu · Xiaolong Wang
- 2014 Poster: Deep Joint Task Learning for Generic Object Extraction
  Xiaolong Wang · Liliang Zhang · Liang Lin · Zhujin Liang · Wangmeng Zuo
- 2012 Poster: Dynamical And-Or Graph Learning for Object Shape Modeling and Detection
  Xiaolong Wang · Liang Lin