End-to-End Simultaneous Speech Translation with Differentiable Segmentation

Shaolei Zhang, Yang Feng


Abstract
End-to-end simultaneous speech translation (SimulST) produces translation while still receiving streaming speech input (a.k.a. streaming speech translation), and therefore must segment the incoming speech and translate based on what has been received so far. However, segmenting the speech at unfavorable moments can break acoustic integrity and degrade the performance of the translation model. Learning to segment the speech at moments that help the translation model produce high-quality translation is therefore the key to SimulST. Existing SimulST methods, whether using fixed-length segmentation or an external segmentation model, separate segmentation from the underlying translation model; this gap yields segmentation outcomes that are not necessarily beneficial to the translation process. In this paper, we propose Differentiable Segmentation (DiSeg) for SimulST, which learns segmentation directly from the underlying translation model. DiSeg makes hard segmentation differentiable through the proposed expectation training, enabling it to be jointly trained with the translation model and thereby learn translation-beneficial segmentation. Experimental results demonstrate that DiSeg achieves state-of-the-art performance and exhibits superior segmentation capability.
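The core idea of relaxing hard segmentation into an expectation can be illustrated with a minimal sketch. This is not the paper's actual formulation (DiSeg's expectation training operates inside the translation model); the functions `soft_segment_count` and `expected_segment_id` below are hypothetical names for a toy version in which each speech frame carries a Bernoulli boundary probability, so segment statistics become smooth functions of those probabilities and gradients can flow from the translation loss into segmentation:

```python
import numpy as np

def soft_segment_count(probs):
    """Expected number of segment boundaries, treating each frame's
    boundary decision as an independent Bernoulli variable with
    probability probs[t]. Differentiable in probs."""
    return float(np.sum(probs))

def expected_segment_id(probs):
    """Expected segment index of each frame: the cumulative sum of
    boundary probabilities up to and including that frame. With hard
    0/1 probabilities this recovers ordinary hard segmentation."""
    return np.cumsum(probs)

# Hard boundaries after frames 1 and 3: expectation matches hard segmentation.
hard = np.array([0.0, 1.0, 0.0, 1.0])
print(soft_segment_count(hard))   # 2.0
print(expected_segment_id(hard))  # [0. 1. 1. 2.]

# Soft boundaries: the same quantities are smooth, hence trainable.
soft = np.array([0.1, 0.8, 0.2, 0.7])
print(soft_segment_count(soft))
```

Because both quantities are sums of the boundary probabilities, a loss on them (e.g. pulling the expected segment count toward a target) backpropagates into the probabilities, which is the sense in which segmentation becomes jointly trainable with the translation model.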
Anthology ID:
2023.findings-acl.485
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7659–7680
URL:
https://aclanthology.org/2023.findings-acl.485
DOI:
10.18653/v1/2023.findings-acl.485
Cite (ACL):
Shaolei Zhang and Yang Feng. 2023. End-to-End Simultaneous Speech Translation with Differentiable Segmentation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7659–7680, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
End-to-End Simultaneous Speech Translation with Differentiable Segmentation (Zhang & Feng, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.485.pdf