Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation
Moitreya Chatterjee · Narendra Ahuja · Anoop Cherian

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #926

There exists an unequivocal distinction between the sound produced by a static source and that produced by a moving one, especially when the source moves towards or away from the microphone. In this paper, we propose to use this connection between audio and visual dynamics for solving two challenging tasks simultaneously, namely: (i) separating audio sources from a mixture using visual cues, and (ii) predicting the 3D visual motion of a sounding source using its separated audio. Towards this end, we present Audio Separator and Motion Predictor (ASMP) -- a deep learning framework that leverages the 3D structure of the scene and the motion of sound sources for better audio source separation. At the heart of ASMP is a 2.5D scene graph capturing various objects in the video and their pseudo-3D spatial proximities. This graph is constructed by registering together 2.5D monocular depth predictions from the 2D video frames and associating the 2.5D scene regions with the outputs of an object detector applied on those frames. The ASMP task is then mathematically modeled as the joint problem of: (i) recursively segmenting the 2.5D scene graph into several sub-graphs, each associated with a constituent sound in the input audio mixture (which is then separated) and (ii) predicting the 3D motions of the corresponding sound sources from the separated audio. To empirically evaluate ASMP, we present experiments on two challenging audio-visual datasets, viz. Audio Separation in the Wild (ASIW) and Audio Visual Event (AVE). Our results demonstrate that ASMP achieves a clear improvement in source separation quality, outperforming prior works on both datasets, while also estimating the direction of motion of the sound sources better than other methods.
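The 2.5D scene graph described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the use of a median box depth as a pseudo-3D coordinate, and the fixed distance threshold for edges are all illustrative assumptions; ASMP itself builds the graph from learned monocular depth and object detections and segments it with a neural model.

```python
import math

def build_25d_scene_graph(detections, depth_map, edge_threshold=6.0):
    """Toy sketch (hypothetical, not the ASMP implementation):
    each detection becomes a node whose pseudo-3D position is its
    2D box center plus the median predicted depth inside the box;
    nodes closer than edge_threshold in this (x, y, depth) space
    are connected by a proximity edge."""
    nodes = []
    for label, (x0, y0, x1, y1) in detections:
        # median depth inside the box approximates the object's distance
        patch = sorted(depth_map[r][c]
                       for r in range(y0, y1) for c in range(x0, x1))
        z = patch[len(patch) // 2]
        nodes.append((label, ((x0 + x1) / 2, (y0 + y1) / 2, z)))

    edges = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            (_, p), (_, q) = nodes[i], nodes[j]
            # Euclidean proximity in pseudo-3D (x, y, depth) space
            d = math.dist(p, q)
            if d < edge_threshold:
                edges.append((nodes[i][0], nodes[j][0], round(d, 2)))
    return nodes, edges

# Illustrative scene: left half of the frame is near (depth 1),
# right half is far (depth 5); two detected sounding objects.
depth = [[1, 1, 1, 5, 5, 5] for _ in range(6)]
dets = [("guitar", (0, 0, 3, 3)), ("speaker", (3, 0, 6, 3))]
nodes, edges = build_25d_scene_graph(dets, depth)
```

Sub-graphs of such a graph would then be associated with the individual sounds in the mixture, as the abstract describes.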

Author Information

Moitreya Chatterjee (Mitsubishi Electric Research Laboratories)

Moitreya is currently a Research Scientist at Mitsubishi Electric Research Laboratories, having recently graduated with a PhD from the University of Illinois Urbana-Champaign (UIUC). His research interests primarily span computer vision, audio-visual understanding, and the application of machine learning to problems in this space.

Narendra Ahuja (University of Illinois at Urbana-Champaign)
Anoop Cherian (MERL)
