

Poster in Workshop: Instruction Tuning and Instruction Following

Training Speech Recognition Models to Follow Instructions

Cheng-I Jeff Lai · Zhiyun Lu · Liangliang Cao · Ruoming Pang

Keywords: [ Speech Recognition ] [ Speech Foundation Model ] [ Large Language Model ] [ Instruction-Following ]


Abstract:

Conventional end-to-end Automatic Speech Recognition (ASR) models primarily focus on exact transcription tasks and lack the flexibility needed for nuanced user interactions. In this paper, we train a speech recognition model to follow a diverse set of free-form text instructions covering a multitude of speech recognition tasks -- ranging from simple transcript manipulation to summarization. We emphasize that even without pre-trained LLMs or speech modules, a Listen-Attend-Spell model trained from scratch on LibriSpeech understands and executes instructions with high fidelity. These preliminary findings highlight the potential of instruction-following training to advance speech foundation models.
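The abstract does not spell out implementation details, but as a rough illustration, the sketch below shows one way an instruction-conditioned Listen-Attend-Spell model could be structured in PyTorch: the decoder attends over a joint memory formed from the embedded text instruction and the encoded audio. All module names, layer sizes, and the shared tokenizer are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn


class InstructionConditionedLAS(nn.Module):
    """Minimal Listen-Attend-Spell-style model whose decoder attends over
    both the encoded audio and an embedded free-form text instruction.
    Hyperparameters and the shared embedding table are illustrative
    assumptions, not the paper's configuration."""

    def __init__(self, n_mels=80, vocab_size=1000, d_model=256):
        super().__init__()
        # Listener: BiLSTM encoder over log-mel audio features.
        self.listener = nn.LSTM(n_mels, d_model // 2, num_layers=2,
                                batch_first=True, bidirectional=True)
        # Instruction and transcript tokens share one embedding table here
        # purely for brevity.
        self.embed = nn.Embedding(vocab_size, d_model)
        # Speller: LSTM decoder fed the previous token and an attention context.
        self.speller = nn.LSTMCell(2 * d_model, d_model)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, audio_feats, instruction_ids, target_ids):
        # audio_feats: (B, T_audio, n_mels); instruction_ids, target_ids: (B, T)
        enc_audio, _ = self.listener(audio_feats)           # (B, T_a, d_model)
        enc_instr = self.embed(instruction_ids)             # (B, T_i, d_model)
        # Joint attention memory: instruction tokens followed by audio frames.
        memory = torch.cat([enc_instr, enc_audio], dim=1)

        batch, d_model = audio_feats.size(0), enc_audio.size(-1)
        h = torch.zeros(batch, d_model, device=audio_feats.device)
        c = torch.zeros(batch, d_model, device=audio_feats.device)
        logits = []
        for t in range(target_ids.size(1)):
            prev = self.embed(target_ids[:, t])             # teacher forcing
            # Dot-product attention of the decoder state over the joint memory.
            scores = torch.bmm(memory, h.unsqueeze(-1)).squeeze(-1)
            weights = torch.softmax(scores, dim=-1).unsqueeze(1)
            context = torch.bmm(weights, memory).squeeze(1)  # (B, d_model)
            h, c = self.speller(torch.cat([prev, context], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                    # (B, T_tgt, vocab)
```

In such a setup, changing only the instruction text (e.g. "transcribe verbatim" vs. "summarize the utterance") changes the decoder's conditioning while the audio encoder and training objective stay the same; this mirrors the paper's premise that instruction following can be learned from scratch without a pre-trained LLM.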
