

Poster in Workshop: Machine Learning for Audio

Unsupervised Musical Object Discovery from Audio

Joonsu Gha · Vincent Herrmann · Benjamin F. Grewe · Jürgen Schmidhuber · Anand Gopalakrishnan


Abstract:

Current object-centric learning models, such as the popular SlotAttention architecture, allow for unsupervised visual scene decomposition. Our novel MusicSlots method adapts SlotAttention to the audio domain to achieve unsupervised music decomposition. Since the visual concepts of opacity and occlusion have no auditory analogues, the softmax normalization of alpha masks in the decoders of visual object-centric models is not well suited for decomposing audio objects. MusicSlots overcomes this problem. We introduce a spectrogram-based multi-object music dataset tailored to evaluate object-centric learning on Western tonal music. MusicSlots achieves good performance on unsupervised note discovery and outperforms several established baselines on supervised note property prediction tasks.
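The contrast the abstract draws can be sketched in a few lines: visual object-centric decoders composite per-slot reconstructions with softmax-normalized alpha masks (each pixel is "owned" by one slot, modeling occlusion), whereas concurrent notes in audio superpose rather than occlude. The following is a minimal illustrative sketch of that difference, not the actual MusicSlots decoder; the function names and the purely additive combination are assumptions for illustration.

```python
import numpy as np

def composite_visual(slot_maps, slot_alpha):
    # Visual-style compositing (as in SlotAttention decoders):
    # softmax over slots makes the alpha masks compete per pixel,
    # so each pixel is attributed mostly to a single slot (occlusion).
    # slot_maps, slot_alpha: arrays of shape (K, H, W) for K slots.
    a = np.exp(slot_alpha - slot_alpha.max(axis=0, keepdims=True))
    weights = a / a.sum(axis=0, keepdims=True)  # sums to 1 over slots
    return (weights * slot_maps).sum(axis=0)

def composite_audio(slot_spectrograms):
    # Hypothetical audio-style compositing: notes sounding together
    # superpose in the spectrogram, so per-slot reconstructions are
    # combined additively instead of competing via softmax alphas.
    return slot_spectrograms.sum(axis=0)
```

In the visual case, the softmax forces the slots' contributions at each pixel to sum to one; in the additive case, two overlapping notes can both contribute their full energy to the same time-frequency bin, which is the property the abstract argues alpha-mask normalization fails to capture.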
