Poster

ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation

Chenyang Le · Yao Qian · Long Zhou · Shujie Liu · Yanmin Qian · Michael Zeng · Xuedong Huang

Great Hall & Hall B1+B2 (level 1) #331
Tue 12 Dec 3:15 p.m. PST — 5:15 p.m. PST

Abstract:

Joint speech-language training is challenging due to the large demand for training data and GPU resources, as well as the modality gap between speech and language. We present ComSL, a speech-language model built atop a composite architecture of public pre-trained speech-only and language-only models and optimized data-efficiently for spoken language tasks. In particular, we propose to incorporate cross-modality learning into transfer learning and conduct both simultaneously for downstream tasks in a multi-task learning manner. Our approach has demonstrated effectiveness on end-to-end speech-to-text translation, achieving a new state-of-the-art average BLEU score of 31.5 on multilingual speech-to-English-text translation across 21 languages, as measured on the public CoVoST2 evaluation set.
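The joint objective described in the abstract — cross-modality learning conducted alongside transfer learning in a multi-task fashion — can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's actual formulation: the mean-pooled MSE alignment term, the function names, and the loss weights are all placeholders for whatever ComSL actually uses.

```python
import numpy as np

def cross_modality_loss(speech_repr, text_repr):
    """Hypothetical cross-modality alignment term: mean-squared distance
    between mean-pooled speech and text representations, encouraging the
    two modalities to share an embedding space (an assumption; the paper
    may use a different alignment objective)."""
    s = speech_repr.mean(axis=0)  # pool over speech frames
    t = text_repr.mean(axis=0)    # pool over text tokens
    return float(((s - t) ** 2).mean())

def multitask_loss(st_loss, asr_loss, xmodal_loss, weights=(1.0, 0.5, 0.5)):
    """Weighted joint objective over speech translation (ST), speech
    recognition (ASR), and cross-modality alignment. The task mix and
    the weights are illustrative, not taken from the paper."""
    w_st, w_asr, w_x = weights
    return w_st * st_loss + w_asr * asr_loss + w_x * xmodal_loss
```

In this sketch all task losses are optimized simultaneously on downstream data, so the pre-trained speech and language components are adapted (transfer learning) while the alignment term closes the modality gap, mirroring the abstract's description at a high level.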
