
A Spectral Energy Distance for Parallel Speech Synthesis
Alexey Gritsenko · Tim Salimans · Rianne van den Berg · Jasper Snoek · Nal Kalchbrenner

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #984

Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems. A downside of such autoregressive models is that they require executing tens of thousands of sequential operations per second of generated audio, making them ill-suited for deployment on specialized deep learning hardware. Here, we propose a new learning method that allows us to train highly parallel models of speech, without requiring access to an analytical likelihood function. Our approach is based on a generalized energy distance between the distributions of the generated and real audio. This spectral energy distance is a proper scoring rule with respect to the distribution over magnitude-spectrograms of the generated waveform audio and offers statistical consistency guarantees. The distance can be calculated from minibatches without bias, and does not involve adversarial learning, yielding a stable and consistent method for training implicit generative models. Empirically, we achieve state-of-the-art generation quality among implicit generative models, as judged by the recently proposed cFDSD metric. When combining our method with adversarial techniques, we also improve upon the recently proposed GAN-TTS model in terms of Mean Opinion Score as judged by trained human evaluators.
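To make the abstract's central idea concrete, the following is a minimal NumPy sketch of a generalized energy distance computed over magnitude spectrograms. It is an illustrative assumption, not the paper's exact loss: the paper uses multi-scale spectrograms with additional log-magnitude terms, and the function names (`magnitude_spectrogram`, `spectral_energy_distance`) are hypothetical. The estimator uses two independent generator samples per real example; the real-vs-real term of the energy distance is constant with respect to the generator and is dropped.

```python
import numpy as np

def magnitude_spectrogram(wave, frame_len=256, hop=128):
    # Magnitude STFT via Hann-windowed FFT frames (single scale, for
    # illustration only; the paper combines several frame lengths).
    n_frames = 1 + (len(wave) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([wave[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def spectrogram_distance(a, b):
    # L2 distance between the magnitude spectrograms of two waveforms.
    return np.linalg.norm(magnitude_spectrogram(a) - magnitude_spectrogram(b))

def spectral_energy_distance(real, fake1, fake2):
    # Unbiased minibatch estimator of the generalized energy distance
    #   2 * E[d(x, y)] - E[d(x, x')]   (the E[d(y, y')] term is a constant
    # in the generator and is omitted). fake1 and fake2 are two independent
    # generator samples for the same conditioning input; the repulsive term
    # -d(fake1, fake2) is what makes this a proper scoring rule rather than
    # a plain spectrogram regression loss.
    attract = (spectrogram_distance(fake1, real)
               + spectrogram_distance(fake2, real))
    repel = spectrogram_distance(fake1, fake2)
    return attract - repel
```

Because each term is a plain distance between minibatch samples, the loss can be minimized directly with stochastic gradient descent and no discriminator, which is the source of the training stability claimed in the abstract.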

Author Information

Alexey Gritsenko (Google Research)
Tim Salimans (Google Brain Amsterdam)
Rianne van den Berg (Google Brain)
Jasper Snoek (Google Research, Brain team)

Jasper Snoek is a research scientist at Google Brain. His research has touched a variety of topics at the intersection of Bayesian methods and deep learning. He completed his PhD in machine learning at the University of Toronto. He subsequently held postdoctoral fellowships at the University of Toronto, under Geoffrey Hinton and Ruslan Salakhutdinov, and at the Harvard Center for Research on Computation and Society, under Ryan Adams. Jasper co-founded a Bayesian optimization focused startup, Whetlab, which was acquired by Twitter. He has served as an Area Chair for NeurIPS, ICML, AISTATS and ICLR, and organized a variety of workshops at ICML and NeurIPS.

Nal Kalchbrenner (Google Brain)
