

Poster
in
Workshop: 6th Robot Learning Workshop: Pretraining, Fine-Tuning, and Generalization with Large Scale Models

Human Scene Transformer

Tim Salzmann · Hao-Tien Lewis Chiang · Markus Ryll · Dorsa Sadigh · Carolina Parada · Alex Bewley

Keywords: [ Trajectory Prediction ] [ Multi-modal Robotics ] [ Service Robotics ] [ Transformer ]


Abstract:

In this work, we present a human-centric scene transformer that predicts future human trajectories from input features, including human positions and 3D skeletal keypoints, derived from onboard, in-the-wild robot sensor data. The resulting model captures the inherent uncertainty of future human motion and achieves state-of-the-art performance on common prediction benchmarks as well as on a human tracking dataset captured from a mobile robot. Furthermore, we identify agents with limited historical data as a major source of error; our approach leverages multi-modal data to reduce this error by up to 11%.
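
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a transformer-based trajectory predictor in the spirit of the abstract: per-timestep agent features (2D position plus flattened 3D skeletal keypoints) are embedded as tokens, encoded with self-attention, and decoded into several candidate future trajectories with per-mode confidences to represent multi-modal uncertainty. All class names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch only: a transformer that fuses per-timestep agent features
# (xy position + 3D keypoints) and predicts K candidate future trajectories with
# mixture weights, capturing multi-modal uncertainty in future human motion.
import torch
import torch.nn as nn


class TrajectoryTransformerSketch(nn.Module):
    def __init__(self, hist_len=8, pred_len=12, num_keypoints=17,
                 d_model=128, num_modes=6):
        super().__init__()
        in_dim = 2 + 3 * num_keypoints            # xy position + 3D keypoints per step
        self.embed = nn.Linear(in_dim, d_model)   # per-timestep token embedding
        self.pos = nn.Parameter(torch.zeros(hist_len, d_model))  # learned positional encoding
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=3)
        # One head produces K trajectories, another a confidence per mode.
        self.traj_head = nn.Linear(d_model, num_modes * pred_len * 2)
        self.conf_head = nn.Linear(d_model, num_modes)
        self.pred_len, self.num_modes = pred_len, num_modes

    def forward(self, hist):                      # hist: (B, hist_len, 2 + 3*K)
        tokens = self.embed(hist) + self.pos      # (B, hist_len, d_model)
        enc = self.encoder(tokens)                # self-attention over the observed history
        summary = enc.mean(dim=1)                 # pool the history into one agent vector
        trajs = self.traj_head(summary).view(-1, self.num_modes, self.pred_len, 2)
        confs = self.conf_head(summary).softmax(dim=-1)
        return trajs, confs                       # K candidate futures + mode probabilities


if __name__ == "__main__":
    model = TrajectoryTransformerSketch()
    hist = torch.randn(4, 8, 2 + 3 * 17)          # batch of 4 agents' observed histories
    trajs, confs = model(hist)
    print(trajs.shape, confs.shape)               # (4, 6, 12, 2) and (4, 6)
```

Emitting several weighted futures rather than a single point estimate is one common way to express predictive uncertainty; agents with short observation histories would simply arrive with fewer valid timesteps, which is where additional modalities such as skeletal keypoints can compensate.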
