

Poster in Workshop: Second Workshop on Efficient Natural Language and Speech Processing (ENLSP-II)

Efficient Speech Translation with Pre-trained Models

Zhaolin Li · Jan Niehues

Keywords: [ Efficient Graphs for NLP ] [ ENLSP-Main ]


Abstract:

When building state-of-the-art speech translation models, the need for large computational resources is a significant obstacle due to the large training data and complex models. The availability of pre-trained models is a promising opportunity to build strong speech translation systems efficiently. In a first step, we investigate efficient strategies to build cascaded and end-to-end speech translation systems based on pre-trained models. With these strategies, we can train and apply the models on a single GPU. While the end-to-end models show translation performance superior to cascaded ones, their application is limited by the need for additional end-to-end training data. In a second step, we propose an additional similarity loss that encourages the model to generate similar hidden representations for speech and its transcript. Using this technique, we increase data efficiency and improve translation quality by 6 BLEU points in scenarios with limited end-to-end training data.
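To illustrate the cascaded setup built from pre-trained models, the following is a minimal sketch assuming the Hugging Face Transformers library; the model names and the audio file path are illustrative assumptions, not the authors' exact configuration.

```python
# Cascaded speech translation from two pre-trained models:
# an ASR model followed by an MT model (illustrative sketch).
from transformers import pipeline

# Pre-trained ASR model: speech -> source-language transcript.
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")

# Pre-trained MT model: source-language text -> target-language text.
mt = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

transcript = asr("example.wav")["text"]              # hypothetical audio file
translation = mt(transcript)[0]["translation_text"]  # cascade output
print(translation)
```

In practice, both pre-trained components fit on a single GPU, which is the efficiency argument made in the abstract; the end-to-end variant instead fine-tunes a speech encoder coupled to a text decoder.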
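The similarity loss could look like the sketch below, assuming PyTorch; the encoder names, the mean-pooling over time, and the weighting factor are assumptions for illustration and not necessarily the authors' exact formulation.

```python
# Sketch of an auxiliary similarity loss that pulls the pooled speech
# representation toward the pooled representation of its transcript.
import torch
import torch.nn.functional as F


def similarity_loss(speech_states: torch.Tensor,
                    text_states: torch.Tensor) -> torch.Tensor:
    """Distance between pooled speech and transcript encoder states.

    speech_states: (batch, speech_len, hidden) from the speech encoder.
    text_states:   (batch, text_len, hidden) from the text encoder.
    The two sequences have different lengths, so both are mean-pooled
    over time before comparison (one simple alignment choice).
    """
    speech_vec = speech_states.mean(dim=1)  # (batch, hidden)
    text_vec = text_states.mean(dim=1)      # (batch, hidden)
    return F.mse_loss(speech_vec, text_vec)


# Usage: add the similarity term to the usual translation loss.
batch, hidden = 4, 256
speech_states = torch.randn(batch, 300, hidden, requires_grad=True)
text_states = torch.randn(batch, 40, hidden)
translation_loss = torch.tensor(2.3)  # placeholder cross-entropy value
alpha = 1.0                           # assumed weighting factor
total_loss = translation_loss + alpha * similarity_loss(speech_states,
                                                        text_states)
total_loss.backward()
```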
