Learning Semantics-Aware Locomotion Skills from Human Demonstrations
Yuxiang Yang · Xiangyun Meng · Wenhao Yu · Tingnan Zhang · Jie Tan · Byron Boots
Event URL: https://openreview.net/forum?id=YuFCeo1JsqK

The semantics of the environment, such as terrain types and properties, reveal important information for legged robots to adjust their behaviors. In this work, we present a framework that uses semantic information from RGB images to adjust the speeds and gaits of quadrupedal robots, so that the robot can traverse through complex off-road terrains. Due to the lack of high-fidelity off-road simulation, our framework needs to be trained directly in the real world, which brings unique challenges in sample efficiency and safety. To ensure sample efficiency, we pre-train the perception model on an off-road driving dataset. To avoid the risks of real-world policy exploration, we leverage human demonstrations to train a speed policy that selects a desired forward speed from camera images. For maximum traversability, we pair the speed policy with a gait selector, which selects a robust locomotion gait for each forward speed. Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6 km safely and efficiently.
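The abstract describes a two-stage pipeline: a speed policy maps perceived terrain semantics to a desired forward speed, and a gait selector maps that speed to a robust gait. The sketch below illustrates the structure of such a pipeline; the terrain classes, speed values, and gait thresholds are purely illustrative assumptions, not the paper's actual parameters or learned policies.

```python
# Illustrative sketch of the semantics -> speed -> gait pipeline.
# All names and numbers are hypothetical placeholders for the learned
# speed policy and the gait selector described in the abstract.

SPEED_POLICY = {        # terrain class -> desired forward speed (m/s)
    "pavement": 1.5,
    "grass": 1.0,
    "gravel": 0.6,
    "mud": 0.3,
}

def select_gait(speed_mps: float) -> str:
    """Pick a gait that stays robust at the commanded speed (assumed thresholds)."""
    if speed_mps < 0.5:
        return "crawl"       # slow, statically stable
    elif speed_mps < 1.2:
        return "trot"
    return "flying_trot"     # fast, dynamic

def plan(terrain_class: str) -> tuple:
    """Full pipeline: perceived terrain class -> (forward speed, gait)."""
    speed = SPEED_POLICY.get(terrain_class, 0.3)  # default to a cautious speed
    return speed, select_gait(speed)

print(plan("grass"))  # → (1.0, 'trot')
print(plan("mud"))    # → (0.3, 'crawl')
```

In the paper's framework the speed policy is learned from human demonstrations rather than a lookup table, but the division of labor, semantics choosing speed and speed choosing gait, is the same.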

Author Information

Yuxiang Yang (Department of Computer Science, University of Washington)
Xiangyun Meng (University of Washington)
Wenhao Yu (Google)
Tingnan Zhang (Google)
Jie Tan (Google)
Byron Boots (University of Washington)