

Weakly-Supervised Audio-Visual Segmentation

Shentong Mo · Bhiksha Raj

Great Hall & Hall B1+B2 (level 1) #1914
Thu 14 Dec 3 p.m. PST — 5 p.m. PST

Abstract: Audio-visual segmentation is a challenging task that aims to predict pixel-level masks for sound sources in a video. Previous work relied on elaborately hand-designed architectures trained with large numbers of pixel-wise accurate masks as supervision. However, such pixel-level masks are expensive to obtain and not available in all cases. In this work, we aim to simplify the supervision to instance-level annotation, $\textit{i.e.}$, weakly-supervised audio-visual segmentation. We present a novel Weakly-Supervised Audio-Visual Segmentation framework, namely WS-AVS, that learns multi-scale audio-visual alignment via multi-scale multiple-instance contrastive learning for audio-visual segmentation. Extensive experiments on AVSBench demonstrate the effectiveness of WS-AVS for weakly-supervised audio-visual segmentation in both single-source and multi-source scenarios.
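The multiple-instance contrastive objective described above can be illustrated with a single-scale sketch. This is an assumption-laden illustration, not the paper's exact formulation: it assumes pooled audio embeddings, per-location visual embeddings treated as a "bag" of instances, and max-pooling as the instance-aggregation rule; all function and variable names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def multi_instance_nce_loss(audio_feats, visual_feats, temperature=0.07):
    """Illustrative InfoNCE-style multiple-instance contrastive loss.

    audio_feats:  (B, D)    pooled audio embeddings, one per video clip
    visual_feats: (B, N, D) per-location visual embeddings (N spatial instances)

    Each audio clip is pulled toward its own video's bag of visual instances
    and pushed away from instances of other clips in the batch. Aggregating
    a bag by its best-matching instance (max over N) is an assumed choice.
    """
    B, N, D = visual_feats.shape
    a = F.normalize(audio_feats, dim=-1)   # (B, D)
    v = F.normalize(visual_feats, dim=-1)  # (B, N, D)

    # Similarity of every audio clip to every visual instance in the batch.
    sim = torch.einsum('bd,knd->bkn', a, v) / temperature  # (B, B, N)

    # Multiple-instance aggregation: a bag is scored by its best instance.
    bag_sim = sim.max(dim=-1).values  # (B, B)

    # Diagonal entries (own video) are the positives.
    targets = torch.arange(B, device=bag_sim.device)
    return F.cross_entropy(bag_sim, targets)
```

A multi-scale variant, as the abstract suggests, would compute this loss at several feature resolutions and sum the terms; the localization maps implied by the audio-to-instance similarities are what a weakly-supervised model can turn into segmentation masks without pixel-level labels.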
