CMU’s IWSLT 2024 Simultaneous Speech Translation System

Xi Xu, Siqi Ouyang, Brian Yan, Patrick Fernandes, William Chen, Lei Li, Graham Neubig, Shinji Watanabe


Abstract
This paper describes CMU’s submission to the IWSLT 2024 Simultaneous Speech Translation (SST) task for translating English speech into German text in a streaming manner. Our end-to-end speech-to-text (ST) system integrates a WavLM speech encoder, a modality adapter, and the Llama2-7B-Base model as the decoder. We employ a two-stage training approach: first, we align the representations of speech and text; then we perform full fine-tuning. Both stages are trained on MuST-C v2 data with cross-entropy loss. We adapt our offline ST model for SST using a simple fixed hold-n policy. Experiments show that our model obtains an offline BLEU score of 31.1 and a BLEU score of 29.5 under 2 seconds of latency on the MuST-C v2 tst-COMMON set.
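
To make the fixed hold-n policy concrete, below is a minimal Python sketch of the commit rule it implies: at each step the offline model re-decodes a translation for the speech received so far, and every token of that hypothesis except the last n is emitted, since the held-back tail may still change once more speech arrives. The function name, token representation, and example sentence are illustrative assumptions, not details taken from the paper.

```python
def hold_n(committed, hypothesis, n):
    """Fixed hold-n commit rule (illustrative sketch).

    committed  : tokens already emitted to the user in earlier steps
    hypothesis : the model's current full translation of the speech prefix
    n          : number of trailing tokens to hold back at each step

    Assumes the new hypothesis extends the already-committed prefix
    (no retraction of displayed tokens).
    """
    # Treat all but the last n tokens as stable, but never commit fewer
    # tokens than have already been shown.
    stable_len = max(len(hypothesis) - n, len(committed))
    return hypothesis[:stable_len]


# Example: with n = 2, only the first two tokens are committed this step;
# the last two are held until the next chunk of speech is decoded.
print(hold_n([], ["wir", "haben", "ein", "Modell"], n=2))
# ['wir', 'haben']
```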
Anthology ID:
2024.iwslt-1.20
Original:
2024.iwslt-1.20v1
Version 2:
2024.iwslt-1.20v2
Volume:
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand (in-person and online)
Editors:
Elizabeth Salesky, Marcello Federico, Marine Carpuat
Venue:
IWSLT
Publisher:
Association for Computational Linguistics
Pages:
154–159
URL:
https://aclanthology.org/2024.iwslt-1.20
DOI:
10.18653/v1/2024.iwslt-1.20
Cite (ACL):
Xi Xu, Siqi Ouyang, Brian Yan, Patrick Fernandes, William Chen, Lei Li, Graham Neubig, and Shinji Watanabe. 2024. CMU’s IWSLT 2024 Simultaneous Speech Translation System. In Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024), pages 154–159, Bangkok, Thailand (in-person and online). Association for Computational Linguistics.
Cite (Informal):
CMU’s IWSLT 2024 Simultaneous Speech Translation System (Xu et al., IWSLT 2024)
PDF:
https://aclanthology.org/2024.iwslt-1.20.pdf