Demonstrating Singing accompaniment capabilities for MuseControlLite
Fang-Duo Tsai · Yi-Hsuan Yang
Abstract
In this demo, we extend our previous work, MuseControlLite, a state-of-the-art approach for controlling time-varying conditions in text-to-music models, to the task of singing accompaniment generation. Given a vocal track with both MIDI and audio, MuseControlLite generates a corresponding backing track by conditioning on the melody, rhythm, and structure extracted from the vocals, along with local key information. This enables the system to produce musically coherent accompaniments that align with the input vocals. The demo is publicly available at: https://musecontrollite.github.io/web/.
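As a rough illustration of what "time-varying conditions" can look like, the following is a minimal sketch of turning a vocal MIDI note list into frame-level melody and rhythm condition arrays. The function names, frame rate, and array layouts here are assumptions for illustration only, not the authors' actual MuseControlLite interface.

```python
import numpy as np

FRAME_RATE = 50  # frames per second (assumed, not from the paper)

def melody_condition(notes, duration_s):
    """notes: list of (onset_s, offset_s, midi_pitch) tuples.
    Returns a (frames, 128) piano-roll-style melody condition."""
    frames = int(duration_s * FRAME_RATE)
    roll = np.zeros((frames, 128), dtype=np.float32)
    for onset, offset, pitch in notes:
        a, b = int(onset * FRAME_RATE), int(offset * FRAME_RATE)
        roll[a:b, pitch] = 1.0
    return roll

def rhythm_condition(notes, duration_s):
    """Binary note-onset indicator per frame, a simple rhythm proxy."""
    frames = int(duration_s * FRAME_RATE)
    onsets = np.zeros(frames, dtype=np.float32)
    for onset, _, _ in notes:
        onsets[int(onset * FRAME_RATE)] = 1.0
    return onsets

# Toy vocal line: C4, E4, G4 over two seconds.
notes = [(0.0, 0.5, 60), (0.5, 1.0, 64), (1.0, 2.0, 67)]
mel = melody_condition(notes, 2.0)
rhy = rhythm_condition(notes, 2.0)
print(mel.shape, int(rhy.sum()))  # (100, 128) 3
```

In a conditioned diffusion or transformer model, arrays like these would be fed alongside the text prompt so that the generated backing track follows the vocal's pitch contour and onset pattern frame by frame.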