Workshop
Fri Dec 8th, 08:00 AM -- 06:30 PM
Machine Learning for Audio Signal Processing (ML4Audio)
Hendrik Purwins · Bob L. Sturm · Mark Plumbley

Audio signal processing is currently undergoing a paradigm shift in which data-driven machine learning is replacing hand-crafted feature design. This has led some to ask whether audio signal processing is still relevant in the "era of machine learning." Many challenges remain, new and old, including the interpretation of learned models in high-dimensional spaces, problems in data-poor domains, adversarial examples, high computational requirements, and research driven by companies using large in-house datasets whose results are ultimately not reproducible.

ML4Audio (https://nips.cc/Conferences/2017/Schedule?showEvent=8790) aims to promote progress, systematization, understanding, and convergence in the application of machine learning to audio signal processing. Specifically, we are interested in work that demonstrates novel applications of machine learning techniques to audio data, as well as methodological considerations in merging machine learning with audio signal processing. We seek contributions in, but not limited to, the following topics:
- audio information retrieval using machine learning;
- audio synthesis with given contextual or musical constraints using machine learning;
- audio source separation using machine learning;
- audio transformations (e.g., sound morphing, style transfer) using machine learning;
- unsupervised learning, online learning, one-shot learning, reinforcement learning, and incremental learning for audio;
- applications/optimization of generative adversarial networks for audio;
- cognitively inspired machine learning models of sound cognition;
- mathematical foundations of machine learning for audio signal processing.

ML4Audio will accept five kinds of submissions:
1. novel unpublished work, including work-in-progress;
2. recent work that has already been published or is under review (please clearly cite the primary publication);
3. review-style papers;
4. position papers;
5. system demonstrations.

Submissions: Extended abstracts as a PDF in NIPS paper format, 2-4 pages excluding references. Submissions do not need to be anonymised, and may be accepted either as talks or as posters. Submission link: https://easychair.org/conferences/?conf=ml4audio

Publication: We are organising a special journal issue of selected papers from the workshop; in addition, all work presented at the workshop will be published online.

Important Dates:
Submission Deadline: October 20, 2017
Acceptance Notification: October 31, 2017
Camera Ready Submissions: November 30, 2017
Workshop: Dec 8, 2017

(Note that the main conference is sold out, but we have workshop tickets reserved for presenters of accepted papers.)

This workshop especially targets researchers, developers, and musicians in academia and industry working in music information retrieval (MIR), audio processing, hearing instruments, speech processing, musical HCI, musicology, music technology, music entertainment, and composition.

Invited Speakers:
Karen Livescu (Toyota Technological Institute at Chicago)
Sander Dieleman (Google DeepMind)
Douglas Eck (Google Magenta)
Marco Marchini (Spotify)
N.N. (Pandora)

Panel Discussion:
Sepp Hochreiter (Johannes Kepler University Linz)
The invited speakers
Others to be confirmed

ML4Audio Organisation Committee:
- Hendrik Purwins, Aalborg University Copenhagen, Denmark (hpu@create.aau.dk)
- Bob L. Sturm, Queen Mary University of London, UK (b.sturm@qmul.ac.uk)
- Mark Plumbley, University of Surrey, UK (m.plumbley@surrey.ac.uk)

Program Committee:
Abeer Alwan (University of California, Los Angeles)
Jon Barker (University of Sheffield)
Sebastian Böck (Johannes Kepler University Linz)
Mads Græsbøll Christensen (Aalborg University)
Máximo Cobos (Universitat de València)
Sander Dieleman (Google DeepMind)
Monika Dörfler (University of Vienna)
Shlomo Dubnov (UC San Diego)
Philippe Esling (IRCAM)
Cédric Févotte (IRIT)
Emilia Gómez (Universitat Pompeu Fabra)
Emanuël Habets (International Audio Labs Erlangen)
Jan Larsen (Technical University of Denmark)
Marco Marchini (Spotify)
Ricard Marxer (University of Toulon)
Rafael Ramirez (Universitat Pompeu Fabra)
Gaël Richard (TELECOM ParisTech)
Fatemeh Saki (UT Dallas)
Jan Schlüter (Austrian Research Institute for Artificial Intelligence)
Joan Serrà (Telefonica)
Malcolm Slaney (Google)
Emmanuel Vincent (INRIA Nancy)
Gerhard Widmer (Austrian Research Institute for Artificial Intelligence)
Tao Zhang (Starkey Hearing Technologies)
Others to be decided

Schedule:
08:00 AM Welcome and Introduction
Hendrik Purwins
08:30 AM Learning and transforming sound for interactive musical applications
Marco Marchini
09:00 AM N.N. (Pandora)
09:30 AM Acoustic word embeddings for speech search
Karen Livescu
10:00 AM Poster Spotlights
12:30 PM Lunch Break
01:30 PM Polyphonic piano transcription using deep neural networks
Douglas Eck
02:00 PM Deep learning for music recommendation and generation
Sander Dieleman
03:00 PM Coffee break and poster session
04:00 PM Paper Talks
05:45 PM Machine learning and audio signal processing: State of the art and future perspectives
Hendrik Purwins, Sepp Hochreiter, Marco Marchini