Invited Talk in Workshop: Machine Learning for Audio

LLARK: A Multimodal Foundation Model for Music

Rachel Bittner

Sat 16 Dec 9:30 a.m. PST — 10 a.m. PST

Abstract:

Music has a unique and complex structure that is challenging for both expert humans and existing AI systems to understand, and it poses challenges distinct from other forms of audio. We present LLARK, an instruction-tuned multimodal model for music understanding. We detail our process for dataset creation, which involves augmenting the annotations of diverse open-source music datasets and converting them to a unified instruction-tuning format. We propose a multimodal architecture for LLARK, integrating a pretrained generative model for music with a pretrained language model. In evaluations on three types of tasks (music understanding, captioning, and reasoning), we show that our model matches or outperforms existing baselines in zero-shot generalization for music understanding, and that humans show a high degree of agreement with the model's responses in captioning and reasoning tasks.
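The abstract mentions two technical ingredients: converting heterogeneous music annotations into a unified instruction-tuning format, and bridging a pretrained generative music model to a pretrained language model. The sketch below illustrates both under stated assumptions; the field names, prompt wording, dimensions, and the simple linear projection are illustrative guesses, not the talk's exact implementation.

```python
import torch
import torch.nn as nn


def to_instruction_record(annotations: dict) -> dict:
    """Convert one dataset's annotations (e.g. tempo, key, tags) into a
    unified instruction/response pair for instruction tuning.
    The fields and prompt text here are assumptions, not LLARK's schema."""
    return {
        "instruction": "Describe the musical characteristics of this clip.",
        "response": (
            f"The clip is at roughly {annotations['tempo_bpm']} BPM, "
            f"in {annotations['key']}, and is tagged as {', '.join(annotations['tags'])}."
        ),
    }


class AudioToLLMProjection(nn.Module):
    """Project embeddings from a frozen pretrained music model into the
    language model's embedding space so they can be prepended to the
    tokenized instruction text (a common multimodal-LLM pattern)."""

    def __init__(self, audio_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(audio_dim, llm_dim)

    def forward(self, audio_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # audio_emb: (batch, n_audio_tokens, audio_dim), pooled from the music model
        # text_emb:  (batch, seq_len, llm_dim), from the LLM's input embedding layer
        audio_tokens = self.proj(audio_emb)
        return torch.cat([audio_tokens, text_emb], dim=1)


# Example usage with illustrative annotations and dimensions.
record = to_instruction_record({"tempo_bpm": 120, "key": "C major", "tags": ["jazz", "piano"]})
bridge = AudioToLLMProjection(audio_dim=4800, llm_dim=4096)
fused = bridge(torch.randn(1, 25, 4800), torch.randn(1, 32, 4096))  # shape: (1, 57, 4096)
```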
