Workshop
Machine Learning for Audio Signal Processing (ML4Audio)
Hendrik Purwins · Bob L. Sturm · Mark Plumbley

Fri Dec 08 08:00 AM -- 06:30 PM (PST) @ 201 A
Event URL: http://media.aau.dk/smc/ml4audio

Abstracts and full papers: http://media.aau.dk/smc/ml4audio/

Audio signal processing is currently undergoing a paradigm change, where data-driven machine learning is replacing hand-crafted feature design. This has led some to ask whether audio signal processing is still useful in the "era of machine learning." There are many challenges, new and old, including the interpretation of learned models in high dimensional spaces, problems associated with data-poor domains, adversarial examples, high computational requirements, and research driven by companies using large in-house datasets that is ultimately not reproducible.
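
To make the paradigm change concrete, here is a minimal sketch (illustrative only, not from the workshop; it assumes librosa and PyTorch are installed, and the example file, filter sizes, and variable names are arbitrary choices) contrasting a fixed, hand-crafted MFCC front-end with a 1-D convolutional front-end whose filters are learned from data:

import librosa
import torch
import torch.nn as nn

# Hand-crafted paradigm: fixed, designer-chosen features (MFCCs).
waveform, sr = librosa.load(librosa.ex("trumpet"), sr=16000)
mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)   # shape: (13, n_frames)

# Data-driven paradigm: a 1-D convolutional front-end whose "filterbank"
# is learned end-to-end from data rather than designed by hand.
frontend = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=400, stride=160),  # ~25 ms window, 10 ms hop at 16 kHz
    nn.ReLU(),
)
x = torch.from_numpy(waveform).unsqueeze(0).unsqueeze(0)    # (batch, channel, samples)
features = frontend(x)                                      # (1, 64, n_frames); weights are trainable
print(mfcc.shape, features.shape)

In the first pipeline the representation is fixed in advance; in the second, the same role is played by parameters optimized jointly with the downstream task, which is the shift described above.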

ML4Audio aims to promote progress, systematization, understanding, and convergence of applying machine learning in the area of audio signal processing. Specifically, we are interested in work that demonstrates novel applications of machine learning techniques to audio data, as well as methodological considerations of merging machine learning with audio signal processing. We seek contributions on, but not limited to, the following topics:
- audio information retrieval using machine learning;
- audio synthesis with given contextual or musical constraints using machine learning;
- audio source separation using machine learning;
- audio transformations (e.g., sound morphing, style transfer) using machine learning;
- unsupervised learning, online learning, one-shot learning, reinforcement learning, and incremental learning for audio;
- applications/optimization of generative adversarial networks for audio;
- cognitively inspired machine learning models of sound cognition;
- mathematical foundations of machine learning for audio signal processing.

This workshop especially targets researchers, developers and musicians in academia and industry in the areas of MIR, audio processing, hearing instruments, speech processing, musical HCI, musicology, music technology, music entertainment, and composition.

ML4Audio Organisation Committee:
Hendrik Purwins, Aalborg University Copenhagen, Denmark (hpu@create.aau.dk)
Bob L. Sturm, Queen Mary University of London, UK (b.sturm@qmul.ac.uk)
Mark Plumbley, University of Surrey, UK (m.plumbley@surrey.ac.uk)

Program Committee:
Abeer Alwan (University of California, Los Angeles)
Jon Barker (University of Sheffield)
Sebastian Böck (Johannes Kepler University Linz)
Mads Græsbøll Christensen (Aalborg University)
Maximo Cobos (Universitat de Valencia)
Sander Dieleman (Google DeepMind)
Monika Dörfler (University of Vienna)
Shlomo Dubnov (UC San Diego)
Philippe Esling (IRCAM)
Cédric Févotte (IRIT)
Emilia Gómez (Universitat Pompeu Fabra)
Emanuël Habets (International Audio Labs Erlangen)
Jan Larsen (Technical University of Denmark)
Marco Marchini (Spotify)
Rafael Ramirez (Universitat Pompeu Fabra)
Gaël Richard (TELECOM ParisTech)
Fatemeh Saki (UT Dallas)
Sanjeev Satheesh (Baidu SVAIL)
Jan Schlüter (Austrian Research Institute for Artificial Intelligence)
Joan Serrà (Telefonica)
Malcolm Slaney (Google)
Emmanuel Vincent (INRIA Nancy)
Gerhard Widmer (Austrian Research Institute for Artificial Intelligence)
Tao Zhang (Starkey Hearing Technologies)

Author Information

Hendrik Purwins (Aalborg University Copenhagen)

I am currently Associate Professor at the Audio Analysis Lab at Aalborg University Copenhagen, where I was previously Assistant Professor. Before that, I was a researcher in the Neurotechnology and Machine Learning groups at Berlin Institute of Technology (Berlin Brain-Computer Interface), and a lecturer at the Music Technology Group at Universitat Pompeu Fabra in Barcelona. I have also been head of research and development at PMC Technologies, and a visiting researcher at the Perception and Sound Design Team at IRCAM, CCRMA at Stanford, and the Auditory Lab at McGill. I obtained my PhD, "Profiles of Pitch Classes", at the Neural Information Processing Group (CS/EE) at Berlin University of Technology, supported by a scholarship from the Studienstiftung des deutschen Volkes. Before that, I studied mathematics at Bonn and Muenster Universities, completing a diploma in pure mathematics. Having started the violin at age 7 and studied musicology and acting on the side, I also have experience as a performer in concerts and theatre. I have (co-)authored 70 scientific papers. My interests include deep learning and reinforcement learning for music and sound analysis, game strategies and robotics, statistical models for music/sound representation, expectation, and generation, neural correlates of music and 3D (tele)vision, didactic tools for music and dance, and predictive maintenance in manufacturing.

Bob L. Sturm (Queen Mary University of London)

Bob L. Sturm is currently a Lecturer in Digital Media at the Centre for Digital Music (http://c4dm.eecs.qmul.ac.uk/) in the School of Electronic Engineering and Computer Science, Queen Mary University of London. He specialises in audio and music signal processing, machine listening, and evaluation. He organises the HORSE workshop at QMUL (http://c4dm.eecs.qmul.ac.uk/horse2016, http://c4dm.eecs.qmul.ac.uk/horse2017), which focuses on evaluation in applied machine learning. He is the recipient of the 2017 Multimedia Prize Paper Award for his article "A Simple Method to Determine if a Music Information Retrieval System is a 'Horse'", published in the IEEE Transactions on Multimedia (Vol. 16, No. 6, October 2014). He is one of the creators of the folk-rnn system for music transcription modelling and generation (https://github.com/IraKorshunova/folk-rnn).

Mark Plumbley (University of Surrey)
