Generating Synthetic Speech from SpokenVocab for Speech Translation

Jinming Zhao, Gholamreza Haffari, Ehsan Shareghi


Abstract
Training end-to-end speech translation (ST) systems requires sufficiently large-scale data, which is unavailable for most language pairs and domains. One practical solution to the data scarcity issue is to convert text-based machine translation (MT) data to ST data via text-to-speech (TTS) systems. Yet, using TTS systems can be tedious and slow. In this work, we propose SpokenVocab, a simple, scalable and effective data augmentation technique to convert MT data to ST data on-the-fly. The idea is to retrieve and stitch audio snippets, corresponding to words in an MT sentence, from a spoken vocabulary bank. Our experiments on multiple language pairs show that stitched speech helps to improve translation quality by an average of 1.83 BLEU score, while performing equally well as TTS-generated speech in improving translation quality. We also showcase how SpokenVocab can be applied in code-switching ST, for which no TTS systems often exist.
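The core idea described in the abstract, retrieving per-word audio snippets from a spoken vocabulary bank and stitching them into a synthetic utterance, can be sketched as follows. This is a minimal illustration only, not the authors' implementation; the function name, the silence fallback for out-of-vocabulary words, and the toy waveforms are all assumptions.

```python
import numpy as np

def stitch_speech(sentence, spoken_vocab, sr=16000):
    """Concatenate per-word audio snippets into one synthetic utterance.

    sentence: an MT source sentence (whitespace-tokenized here for simplicity)
    spoken_vocab: dict mapping each word to a 1-D waveform (np.float32 array)
    sr: sample rate, used only to size the silence fallback
    """
    snippets = []
    for word in sentence.lower().split():
        if word in spoken_vocab:
            snippets.append(spoken_vocab[word])
        else:
            # Hypothetical fallback: 0.1 s of silence for words missing
            # from the spoken vocabulary bank.
            snippets.append(np.zeros(int(0.1 * sr), dtype=np.float32))
    return np.concatenate(snippets)

# Toy spoken vocabulary bank; real entries would be recorded or
# TTS-generated waveforms, one per vocabulary word.
vocab = {
    "hello": np.ones(8000, dtype=np.float32),
    "world": np.ones(12000, dtype=np.float32),
}
wave = stitch_speech("hello world", vocab)
```

Because stitching is just retrieval plus concatenation, it can run on-the-fly during training, avoiding a slow offline TTS pass over the whole MT corpus.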
Anthology ID:
2023.findings-eacl.147
Volume:
Findings of the Association for Computational Linguistics: EACL 2023
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1975–1981
URL:
https://aclanthology.org/2023.findings-eacl.147
DOI:
10.18653/v1/2023.findings-eacl.147
Cite (ACL):
Jinming Zhao, Gholamreza Haffari, and Ehsan Shareghi. 2023. Generating Synthetic Speech from SpokenVocab for Speech Translation. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1975–1981, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Generating Synthetic Speech from SpokenVocab for Speech Translation (Zhao et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-eacl.147.pdf
Video:
https://aclanthology.org/2023.findings-eacl.147.mp4