Sharing autonomy between robots and a human operator could facilitate data collection of robotic task demonstrations to continuously improve learned models. Yet the means to communicate intent and reason about the future are disparate between humans and robots. Recent advancements in natural language processing (NLP) with Transformers lend both insight and specific tools to tackle this. The self-attention mechanism in Transformers aims to understand a sequence of words holistically, rather than emphasizing adjacent connections. The same holds when Transformers are applied to robotic task trajectories: given an environment state and a task goal, the model can quickly update its plan with new information at every step while maintaining holistic knowledge of the past. A key insight is that human intent can be injected at any location within the time sequence if the user decides that the model-predicted actions are inappropriate. At every time step, the user can (1) do nothing and allow autonomous operation to continue while observing the robot's planned future sequence, or (2) take over and momentarily prescribe a different set of actions to nudge the model back on track, then let it continue autonomously from there onward. Virtual reality (VR) offers an ideal medium to communicate these intents to a robot and to accumulate knowledge from human demonstrations. We develop Assistive Tele-op, a VR system that allows users to collect robot task demonstrations with both a high success rate and greater ease than manual teleoperation systems.
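The control loop the abstract describes can be summarized in a few lines. The Python sketch below is an illustration of the idea only, not the authors' implementation: `model`, `env`, `goal`, and `get_user_override` are hypothetical placeholder names standing in for a trajectory Transformer, a robot environment, a task specification, and a VR input hook.

```python
# Minimal sketch of the assisted control loop described above. All names
# here are hypothetical placeholders, not the system's actual API.

def assistive_teleop_episode(model, env, goal, get_user_override,
                             horizon=200, plan_len=16):
    """Run one episode: the model acts autonomously, while the human
    may momentarily take over at any time step."""
    state = env.reset()
    history = [state]  # the full past is kept so self-attention stays holistic
    for _ in range(horizon):
        # The model attends over the entire state history plus the task goal
        # and proposes a short sequence of future actions (its current plan).
        plan = model.predict_actions(history, goal, n_future=plan_len)

        # The operator watches the visualized plan in VR. Returning None
        # means "do nothing": autonomous execution continues.
        override = get_user_override()
        action = override if override is not None else plan[0]

        state, done = env.step(action)
        history.append(state)
        if done:
            break
    return history  # the logged trajectory becomes a new demonstration
```

Under this reading, a human override is simply appended to the same history the model attends over, so the model replans from the corrected state rather than restarting the episode.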
Author Information
Henry Clever (Georgia Tech)
Ankur Handa (Imperial College London)
Hammad Mazhar (NVIDIA)
Qian Wan (NVIDIA)
Yashraj Narang (NVIDIA)
Maya Cakmak (University of Washington)
Dieter Fox (NVIDIA / University of Washington)
More from the Same Authors
- 2021: Isaac Gym: High Performance GPU Based Physics Simulation For Robot Learning
  Viktor Makoviychuk · Lukasz Wawrzyniak · Yunrong Guo · Michelle Lu · Kier Storey · Miles Macklin · David Hoeller · Nikita Rudin · Arthur Allshire · Ankur Handa · Gavriel State
- 2021: Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World Trifinger
  Arthur Allshire · Mayank Mittal · Varun Lodaya · Viktor Makoviychuk · Denys Makoviichuk · Felix Widmaier · Manuel Wuethrich · Stefan Bauer · Ankur Handa · Animesh Garg
- 2022: Insights towards Sim2Real Contact-Rich Manipulation
  Michael Noseworthy · Iretiayo Akinola · Yashraj Narang · Fabio Ramos · Lucas Manuelli · Ankur Handa · Dieter Fox
- 2020 Poster: Causal Discovery in Physical Systems from Videos
  Yunzhu Li · Antonio Torralba · Anima Anandkumar · Dieter Fox · Animesh Garg
- 2017 Workshop: Teaching Machines, Robots, and Humans
  Maya Cakmak · Anna Rafferty · Adish Singla · Jerry Zhu · Sandra Zilles