
Spotlight Poster

Sounding Bodies: Modeling 3D Spatial Sound of Humans Using Body Pose and Audio

Xudong XU · Dejan Markovic · Jacob Sandakly · Todd Keebler · Steven Krenn · Alexander Richard

Great Hall & Hall B1+B2 (level 1) #417
[ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 3 p.m. PST — 5 p.m. PST


While 3D human body modeling has received much attention in computer vision, modeling the acoustic equivalent, i.e., the 3D spatial audio produced by body motion and speech, has received comparatively little attention in the community. To close this gap, we present a model that can generate accurate 3D spatial audio for full human bodies. The system consumes, as input, audio signals from headset microphones and body pose, and produces, as output, a 3D sound field surrounding the transmitter's body, from which spatial audio can be rendered at an arbitrary position in 3D space. We collect a first-of-its-kind multimodal dataset of human bodies, recorded with multiple cameras and a spherical array of 345 microphones. In an empirical evaluation, we demonstrate that our model can produce accurate body-induced sound fields when trained with a suitable loss. Dataset and code are available online.
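The input/output structure described above (headset audio plus body pose in, a renderable 3D sound field out) can be sketched as follows. This is a minimal toy illustration, not the authors' model: `predict_sound_field` and `render_at` are hypothetical names, and the "sound field" here is simplified to a single point source at the head with 1/r distance attenuation, standing in for the learned field the paper actually produces.

```python
import math

def predict_sound_field(headset_audio, body_pose):
    """Toy stand-in for the learned model: represent the 'sound field'
    as a monopole source at the body's head joint, driven by the
    headset signal. (The real model predicts a full spatial field.)"""
    return {"source_pos": body_pose["head"],  # (x, y, z) in meters
            "signal": headset_audio}

def render_at(sound_field, listener_pos):
    """Render audio at an arbitrary 3D listener position by applying
    the 1/r distance attenuation of a point source (propagation delay
    and directivity omitted for brevity)."""
    sx, sy, sz = sound_field["source_pos"]
    lx, ly, lz = listener_pos
    r = math.sqrt((sx - lx) ** 2 + (sy - ly) ** 2 + (sz - lz) ** 2)
    gain = 1.0 / max(r, 1e-6)  # avoid division by zero at the source
    return [gain * s for s in sound_field["signal"]]

# Transmitter at the origin, head at 1.7 m; listener 2 m in front.
field = predict_sound_field([1.0, 0.5, -0.25], {"head": (0.0, 1.7, 0.0)})
out = render_at(field, (0.0, 1.7, 2.0))  # → [0.5, 0.25, -0.125]
```

The key property the sketch mirrors is that rendering is decoupled from prediction: the field is computed once per frame and can then be queried at any listener position, which is what enables free-viewpoint spatial audio.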
