Deep learning for music recommendation and generation
Sander Dieleman

Fri Dec 08 02:00 PM -- 02:30 PM (PST)
Event URL: https://deepmind.com/blog/wavenet-generative-model-raw-audio/

The advent of deep learning has made it possible to extract high-level information from perceptual signals without manually and explicitly specifying how to obtain it; instead, this can be learned from examples. This creates opportunities for automated content analysis of musical audio signals. In this talk, I will discuss how deep learning techniques can be used for audio-based music recommendation. I will also discuss my ongoing work on music generation in the raw waveform domain with WaveNet.
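To make the audio-based recommendation idea concrete, here is a minimal sketch of one common formulation: a small convolutional network that regresses from a mel-spectrogram to latent item factors obtained from collaborative filtering. It assumes a PyTorch setup; the model name, layer sizes, factor dimensionality, and the random stand-in data are illustrative assumptions, not the system described in the talk.

```python
# Minimal sketch: map a mel-spectrogram to latent item factors for
# content-based music recommendation. Architecture and sizes are
# illustrative assumptions only.
import torch
import torch.nn as nn

class AudioToFactors(nn.Module):  # hypothetical model name
    def __init__(self, n_mels: int = 128, n_factors: int = 40):
        super().__init__()
        # Convolve along the time axis; mel bands act as input channels.
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over time -> clip-level feature
        )
        self.head = nn.Linear(256, n_factors)   # predict latent factors

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, n_mels, n_frames)
        h = self.conv(spec).squeeze(-1)          # (batch, 256)
        return self.head(h)                      # (batch, n_factors)

# Toy training step: regress predicted factors onto factors produced by
# collaborative filtering (random tensors stand in for real data).
model = AudioToFactors()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
spectrograms = torch.randn(8, 128, 512)          # 8 clips, 128 mel bands, 512 frames
cf_factors = torch.randn(8, 40)                  # target latent factors per clip
loss = nn.functional.mse_loss(model(spectrograms), cf_factors)
loss.backward()
optimizer.step()
```

At recommendation time, such predicted factors can be compared against user factors from the collaborative-filtering model, which is what lets purely audio-derived representations stand in for usage data on new or unpopular tracks.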

Sander Dieleman is a Research Scientist at DeepMind in London, UK, where he has worked on the development of AlphaGo and WaveNet. He was previously a PhD student at Ghent University, where he conducted research on feature learning and deep learning techniques for learning hierarchical representations of musical audio signals. During his PhD he also developed the Theano-based deep learning library Lasagne, won a solo gold medal in Kaggle's "Galaxy Zoo" competition, and won a team gold medal in the first National Data Science Bowl. In the summer of 2014, he interned at Spotify in New York, where he worked on implementing audio-based music recommendation using deep learning at an industrial scale.

Author Information

Sander Dieleman (DeepMind)
