

Poster

Text to Blind Motion

Hee Jae Kim · Kathakoli Sengupta · Masaki Kuribayashi · Hernisa Kacorri · Eshed Ohn-Bar

West Ballroom A-D #6903
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

People who are blind perceive the world differently from those who are sighted. This often translates to different motion characteristics; for instance, when crossing at an intersection, blind individuals may move in ways that could be more dangerous, e.g., veering further from the path and employing touch-based exploration around curbs and obstacles that may appear unpredictable. Yet, the ability of 3D motion models to capture such behavior has not been previously studied, as existing 3D human motion datasets lack diversity and are biased toward people who are sighted. In this work, we introduce BlindWays, the first multimodal motion benchmark for pedestrians who are blind. We collect 3D motion data using wearable sensors with 11 blind participants navigating eight different routes in a real-world urban setting. Additionally, we provide rich textual descriptions that capture the distinctive movement characteristics of blind pedestrians and their interactions with both the navigation aid (e.g., a white cane or a guide dog) and the environment. We benchmark state-of-the-art 3D human motion prediction models, finding poor performance with both off-the-shelf and pre-training-based methods on this novel task. To contribute toward safer and more reliable autonomous systems that reason over diverse human movements in their environments, we will publicly release our text-and-motion benchmark.
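
To make the benchmarking task concrete, the sketch below shows one common way 3D human motion prediction is scored: mean per-joint position error (MPJPE) between predicted and ground-truth future joint positions. This is not the authors' released evaluation code; the frame rate, joint count, and tensor names are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): scoring a motion prediction with
# Mean Per-Joint Position Error (MPJPE). Shapes and names are assumptions.
import torch

def mpjpe(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean per-joint position error.

    pred, target: (frames, joints, 3) arrays of 3D joint positions.
    Returns the average Euclidean distance over all frames and joints.
    """
    return torch.linalg.norm(pred - target, dim=-1).mean()

# Hypothetical usage: predict 2 s of future motion (60 frames at an assumed
# 30 fps, 22 joints) from an observed history and compare to ground truth.
torch.manual_seed(0)
history = torch.randn(90, 22, 3)           # observed past motion (placeholder)
ground_truth = torch.randn(60, 22, 3)      # future motion to predict (placeholder)
prediction = ground_truth + 0.05 * torch.randn_like(ground_truth)  # stand-in model output
print(f"MPJPE: {mpjpe(prediction, ground_truth):.4f}")
```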
