Leveraging Synthetic Audio Data for End-to-End Low-Resource Speech Translation

Yasmin Moslem


Abstract
This paper describes our system submission to the International Conference on Spoken Language Translation (IWSLT 2024) for Irish-to-English speech translation. We built end-to-end systems based on Whisper and employed several data augmentation techniques, such as speech back-translation and noise augmentation. We investigate the effect of using synthetic audio data and discuss several methods for enriching signal diversity.
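To illustrate one of the augmentation techniques named in the abstract, the sketch below adds white Gaussian noise to a waveform at a randomly sampled signal-to-noise ratio (SNR). This is a minimal, hypothetical example of noise augmentation: the function name, SNR range, and 16 kHz sampling rate are assumptions for illustration and are not taken from the paper's actual pipeline.

```python
# Minimal sketch of noise augmentation: mix white Gaussian noise into a
# waveform at a given SNR. All names and values here are illustrative
# assumptions, not the paper's published code.
import numpy as np

def add_noise(waveform: np.ndarray, snr_db: float) -> np.ndarray:
    """Return `waveform` with white Gaussian noise added at `snr_db` dB SNR."""
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise

# Example usage: augment a placeholder 1-second, 16 kHz clip at an SNR
# drawn uniformly from 10-30 dB (an assumed range).
audio = np.random.randn(16000).astype(np.float32)
augmented = add_noise(audio, snr_db=np.random.uniform(10.0, 30.0))
```

Applying such perturbations to synthetic speech (e.g., audio produced for speech back-translation) is one way to enrich signal diversity, since purely synthetic audio tends to be acoustically cleaner and less varied than natural recordings.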
Anthology ID: 2024.iwslt-1.31
Volume: Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
Month: August
Year: 2024
Address: Bangkok, Thailand (in-person and online)
Editors: Elizabeth Salesky, Marcello Federico, Marine Carpuat
Venue: IWSLT
Publisher: Association for Computational Linguistics
Pages: 265–273
URL: https://aclanthology.org/2024.iwslt-1.31
PDF: https://aclanthology.org/2024.iwslt-1.31.pdf

Cite (ACL): Yasmin Moslem. 2024. Leveraging Synthetic Audio Data for End-to-End Low-Resource Speech Translation. In Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024), pages 265–273, Bangkok, Thailand (in-person and online). Association for Computational Linguistics.
Cite (Informal): Leveraging Synthetic Audio Data for End-to-End Low-Resource Speech Translation (Moslem, IWSLT 2024)