Learning diverse dexterous manipulation behaviors with assorted objects remains an open grand challenge. While policy learning methods offer a powerful avenue to attack this problem, they require extensive per-task engineering and algorithmic tuning. This paper seeks to escape these constraints by developing a Pre-Grasp informed Dexterous Manipulation (PGDM) framework that generates diverse dexterous manipulation behaviors, without any task-specific reasoning or hyper-parameter tuning. At the core of PGDM is a well-known robotics construct, pre-grasps (i.e. the hand pose preparing for object interaction). This simple primitive is enough to induce efficient exploration strategies for acquiring complex dexterous manipulation behaviors. To exhaustively verify these claims, we introduce TCDM, a benchmark of 50 diverse manipulation tasks defined over multiple objects and dexterous manipulators. Tasks for TCDM are defined automatically using exemplar object trajectories from various sources (animators, human behaviors, etc.), without any per-task engineering and/or supervision. Our experiments validate that PGDM’s exploration strategy, induced by a surprisingly simple ingredient (a single pre-grasp pose), matches the performance of prior methods, which require expensive per-task feature/reward engineering, expert supervision, and hyper-parameter tuning. For animated visualizations, trained policies, and project code, please refer to https://sites.google.com/view/pregrasp/.
Author Information
Sudeep Dasari (Carnegie Mellon University)
Vikash Kumar (FAIR, Meta-AI)

I am currently a research scientist at Facebook AI Research (FAIR). I have also spent some time at Google Brain, OpenAI, and the Berkeley Artificial Intelligence Research (BAIR) Lab. I did my PhD in the Movement Control Lab at the University of Washington's CSE department, under the supervision of Prof. Emanuel Todorov and Prof. Sergey Levine. I am interested in the areas of Robotics and Embodied Artificial Intelligence. My general interest lies in developing artificial agents that are cheap, portable, and capable of complex behaviors.
More from the Same Authors
-
2022 : Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training »
Jason Yecheng Ma · Shagun Sodhani · Dinesh Jayaraman · Osbert Bastani · Vikash Kumar · Amy Zhang -
2022 : Real World Offline Reinforcement Learning with Realistic Data Source »
Gaoyue Zhou · Liyiming Ke · Siddhartha Srinivasa · Abhinav Gupta · Aravind Rajeswaran · Vikash Kumar -
2022 : Offline Reinforcement Learning on Real Robot with Realistic Data Sources »
Gaoyue Zhou · Liyiming Ke · Siddhartha Srinivasa · Abhinav Gupta · Aravind Rajeswaran · Vikash Kumar -
2022 : Fifteen-minute Competition Overview Video »
Guillaume Durandau · Yuval Tassa · Vittorio Caggiano · Vikash Kumar · Seungmoon Song · Massimo Sartori · -
2022 : Policy Architectures for Compositional Generalization in Control »
Allan Zhou · Vikash Kumar · Chelsea Finn · Aravind Rajeswaran -
2022 : MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations »
Nicklas Hansen · Yixin Lin · Hao Su · Xiaolong Wang · Vikash Kumar · Aravind Rajeswaran -
2023 Poster: RoboHive: A Unified Framework for Robot Learning »
Vikash Kumar · Rutav Shah · Gaoyue Zhou · Vincent Moens · Vittorio Caggiano · Abhishek Gupta · Aravind Rajeswaran -
2023 Competition: Train Offline, Test Online: A Democratized Robotics Benchmark »
Victoria Dean · Gaoyue Zhou · Mohan Kumar Srirama · Sudeep Dasari · Esther Brown · Marion Lepert · Paul Ruvolo · Chelsea Finn · Lerrel Pinto · Abhinav Gupta -
2023 Competition: MyoChallenge 2023: Towards Human-Level Dexterity and Agility »
Vittorio Caggiano · · Guillaume Durandau · Seungmoon Song · Cameron Berg · Pierre Schumacher · Chun Kwang Tan · Massimo Sartori · Vikash Kumar -
2022 Competition: MyoChallenge: Learning contact-rich manipulation using a musculoskeletal hand »
Vittorio Caggiano · · Guillaume Durandau · Seungmoon Song · Yuval Tassa · Massimo Sartori · Vikash Kumar