Learning and transforming sound for interactive musical applications
in
Workshop: Machine Learning for Audio Signal Processing (ML4Audio)
Abstract
Recent developments in object recognition (especially convolutional neural networks) have led to a spectacular new application: image style transfer. But what would be the musical equivalent of style transfer? In the Flow Machines project, we created diverse tools for generating audio tracks by transforming prerecorded musical material. Our artists integrated these tools into their composition process and produced several pop tracks. I present some of those tools, with audio examples, and give an operative definition of music style transfer as an optimization problem. This definition admits an efficient solution that enables a multitude of musical applications, from composition to live performance.
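The abstract does not spell out its optimization formulation. As a purely illustrative sketch, casting style transfer as an optimization problem, in the spirit of the image-domain formulation of Gatys et al., could look like the toy example below. The feature and style statistics here (identity content features; mean and variance as a stand-in for Gram-matrix texture statistics) are assumptions for illustration, not the method presented in the talk.

```python
import numpy as np

# Toy "style transfer as optimization": find x that stays close to the
# content signal c while matching summary statistics of the style signal s.
# All choices below (loss weights, statistics, numerical gradients) are
# illustrative placeholders, not the talk's actual definition.

def content_loss(x, c):
    # distance to the content in "feature" space (identity features here)
    return np.sum((x - c) ** 2)

def style_stats(x):
    # stand-in for Gram-matrix style statistics
    return np.array([x.mean(), x.var()])

def style_loss(x, s):
    return np.sum((style_stats(x) - style_stats(s)) ** 2)

def transfer(c, s, alpha=1.0, beta=10.0, lr=0.05, steps=500):
    """Minimize alpha * content_loss + beta * style_loss by gradient descent."""
    x = c.astype(float).copy()
    eps = 1e-4
    for _ in range(steps):
        base = alpha * content_loss(x, c) + beta * style_loss(x, s)
        grad = np.zeros_like(x)
        for i in range(len(x)):  # simple forward-difference gradient
            xp = x.copy()
            xp[i] += eps
            grad[i] = (alpha * content_loss(xp, c)
                       + beta * style_loss(xp, s) - base) / eps
        x -= lr * grad
    return x
```

In an audio setting, `x`, `c`, and `s` would be (features of) sound signals, and the optimizer would trade off preserving the content track against matching the style track's statistics.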
Marco Marchini works at Spotify in the Creator Technology Research Lab, Paris. His mission is bridging the gap between creative artists and intelligent technologies. Previously, he worked as a research assistant for the Pierre and Marie Curie University at the Sony Computer Science Laboratory in Paris, where he contributed to the Flow Machines project. His earlier academic research includes unsupervised music generation and ensemble performance analysis, carried out during his M.Sc. and Ph.D. at the Music Technology Group (DTIC, Pompeu Fabra University). He holds a double degree in Mathematics from Bologna University.