We introduce a novel single-camera teleoperation system for learning dexterous manipulation. Our system lets human operators collect 3D demonstrations efficiently with only an iPad and a computer. These demonstrations are then used for imitation learning on complex multi-finger robot hand manipulation tasks. A key contribution of our system is that we construct a customized robot hand for each user in the physics simulator: a manipulator that mirrors the kinematic structure and shape of the operator's own hand. This not only avoids unstable human-to-robot hand retargeting during data collection, but also provides a more intuitive and personalized interface for different users to operate. After data collection, the customized-hand trajectories can be converted to demonstrations for different specified robot hands (models that are manufactured and commercialized). Using the data collected on the customized hand, our imitation learning results show a large improvement over pure RL on multiple specified robot hands.
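To make the trajectory-conversion step above concrete, the sketch below illustrates one common way such retargeting is done; it is an assumption-laden illustration, not the paper's implementation. Frame by frame, it solves for target-hand joint angles whose fingertip positions match the fingertip positions recorded on the customized hand, using a user-supplied forward-kinematics callable (the hypothetical fingertip_fk, e.g. queried from a physics simulator) and SciPy's bounded optimizer.

import numpy as np
from scipy.optimize import minimize

def retarget_frame(target_tips, fingertip_fk, q_init, joint_limits):
    """Solve for joint angles q such that fingertip_fk(q) ~= target_tips.

    target_tips  : (F, 3) fingertip positions recorded on the customized hand.
    fingertip_fk : callable q -> (F, 3) fingertip positions of the target hand
                   (hypothetical; e.g. queried from a physics simulator).
    q_init       : (D,) initial joint angles, e.g. the previous frame's solution.
    joint_limits : list of (low, high) bounds, one pair per joint.
    """
    def cost(q):
        err = fingertip_fk(q) - target_tips
        # Least-squares fingertip error plus a small term that keeps the
        # solution close to the previous frame, for temporal smoothness.
        return np.sum(err ** 2) + 1e-3 * np.sum((q - q_init) ** 2)

    res = minimize(cost, q_init, method="L-BFGS-B", bounds=joint_limits)
    return res.x

def retarget_trajectory(tip_trajectory, fingertip_fk, q0, joint_limits):
    """Retarget a whole demonstration, warm-starting each frame from the last."""
    qs, q = [], np.asarray(q0, dtype=float)
    for tips in tip_trajectory:
        q = retarget_frame(np.asarray(tips), fingertip_fk, q, joint_limits)
        qs.append(q)
    return np.stack(qs)

Warm-starting each solve from the previous frame keeps the converted joint trajectory smooth and makes the nonconvex optimization far less likely to jump between distant solutions.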
Author Information
Yuzhe Qin (University of California, San Diego)
Hao Su (UC San Diego)
Xiaolong Wang (UC San Diego)
More from the Same Authors
- 2021: ManiSkill: Generalizable Manipulation Skill Benchmark with Large-Scale Demonstrations
  Tongzhou Mu · Zhan Ling · Fanbo Xiang · Derek Yang · Xuanlin Li · Stone Tao · Zhiao Huang · Zhiwei Jia · Hao Su
- 2021: Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization
  Minghao Zhang · Ruihan Yang · Yuzhe Qin · Xiaolong Wang
- 2021: Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers
  Ruihan Yang · Minghao Zhang · Nicklas Hansen · Huazhe Xu · Xiaolong Wang
- 2021: Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation
  Rishabh Jangir · Nicklas Hansen · Xiaolong Wang
- 2021: Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization
  Chieko Imai · Minghao Zhang · Ruihan Yang · Yuzhe Qin · Xiaolong Wang
- 2022: Generalizable Point Cloud Reinforcement Learning for Sim-to-Real Dexterous Manipulation
  Yuzhe Qin · Binghao Huang · Zhao-Heng Yin · Hao Su · Xiaolong Wang
- 2022: Abstract-to-Executable Trajectory Translation for One-Shot Task Generalization
  Stone Tao · Xiaochen Li · Tongzhou Mu · Zhiao Huang · Yuzhe Qin · Hao Su
- 2021 Poster: Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2021 Poster: Multi-Person 3D Motion Prediction with Multi-Range Transformers
  Jiashun Wang · Huazhe Xu · Medhini Narasimhan · Xiaolong Wang
- 2021 Poster: NovelD: A Simple yet Effective Exploration Criterion
  Tianjun Zhang · Huazhe Xu · Xiaolong Wang · Yi Wu · Kurt Keutzer · Joseph Gonzalez · Yuandong Tian
- 2021 Poster: Particle Cloud Generation with Message Passing Generative Adversarial Networks
  Raghav Kansal · Javier Duarte · Hao Su · Breno Orzari · Thiago Tomei · Maurizio Pierini · Mary Touranakou · Jean-Roch Vlimant · Dimitrios Gunopulos
- 2021 Poster: Test-Time Personalization with a Transformer for Human Pose Estimation
  Yizhuo Li · Miao Hao · Zonglin Di · Nitesh Bharadwaj Gundavarapu · Xiaolong Wang
- 2017 Poster: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
  Charles Ruizhongtai Qi · Li Yi · Hao Su · Leonidas Guibas
- 2014 Poster: Deep Joint Task Learning for Generic Object Extraction
  Xiaolong Wang · Liliang Zhang · Liang Lin · Zhujin Liang · Wangmeng Zuo
- 2012 Poster: Dynamical And-Or Graph Learning for Object Shape Modeling and Detection
  Xiaolong Wang · Liang Lin