Does Simultaneous Speech Translation need Simultaneous Models?

Sara Papi, Marco Gaido, Matteo Negri, Marco Turchi


Abstract
In simultaneous speech translation (SimulST), finding the best trade-off between high output quality and low latency is a challenging task. To meet the latency constraints posed by different application scenarios, multiple dedicated SimulST models are usually trained and maintained, incurring high computational costs. In this paper, also motivated by the growing attention to sustainable AI, we investigate whether a single model trained offline can serve both offline and simultaneous applications under different latency regimes without additional training or adaptation. Experiments on en→{de, es} show that, aside from facilitating the adoption of well-established offline architectures and training strategies without affecting latency, offline training achieves similar or better quality than the standard SimulST training protocol, while also being competitive with the state of the art.
Anthology ID:
2022.findings-emnlp.11
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
141–153
URL:
https://aclanthology.org/2022.findings-emnlp.11
DOI:
10.18653/v1/2022.findings-emnlp.11
Cite (ACL):
Sara Papi, Marco Gaido, Matteo Negri, and Marco Turchi. 2022. Does Simultaneous Speech Translation need Simultaneous Models? In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 141–153, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Does Simultaneous Speech Translation need Simultaneous Models? (Papi et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.11.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.11.mp4