AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation

Xingshan Zeng, Liangyou Li, Qun Liu


Abstract
To alleviate the data scarcity problem in end-to-end speech translation (ST), pre-training on data for speech recognition and machine translation is considered an important technique. However, the modality gap between speech and text prevents the ST model from efficiently inheriting knowledge from the pre-trained models. In this work, we propose AdaTranS for end-to-end ST. It adapts the speech features with a new shrinking mechanism that mitigates the length mismatch between speech and text features by predicting word boundaries. Experiments on the MUST-C dataset demonstrate that AdaTranS achieves better performance than other shrinking-based methods, with higher inference speed and lower memory usage. Further experiments also show that AdaTranS can be equipped with additional alignment losses to further improve performance.
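The core idea described in the abstract, shrinking the speech feature sequence at predicted word boundaries so its length is closer to that of the text sequence, can be illustrated with a small sketch. This is not the authors' implementation; the function name, the thresholding rule, and the use of mean-pooling over each segment are assumptions made for illustration only.

```python
import numpy as np

def shrink_by_boundaries(frames, boundary_probs, threshold=0.5):
    """Pool consecutive speech frames into one vector per predicted segment.

    frames: (T, d) array of speech encoder outputs.
    boundary_probs: (T,) per-frame probability that a word boundary ends here.
    A frame whose probability >= threshold closes the current segment.
    (Thresholding and mean-pooling are illustrative choices, not the
    paper's exact mechanism.)
    """
    segments, current = [], []
    for t in range(len(frames)):
        current.append(frames[t])
        if boundary_probs[t] >= threshold:
            segments.append(np.mean(current, axis=0))
            current = []
    if current:  # trailing frames with no predicted boundary form a final segment
        segments.append(np.mean(current, axis=0))
    return np.stack(segments)

# Toy example: 6 frames with predicted boundaries after frames 2 and 4,
# so the sequence length shrinks from 6 to 3.
frames = np.arange(24, dtype=float).reshape(6, 4)
probs = np.array([0.1, 0.2, 0.9, 0.1, 0.8, 0.3])
shrunk = shrink_by_boundaries(frames, probs)
print(shrunk.shape)  # (3, 4)
```

Shrinking along these lines reduces both the sequence-length gap between the speech encoder output and the text the decoder expects, and the cost of attention over the speech sequence, which is consistent with the speed and memory gains the abstract reports.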
Anthology ID:
2023.findings-emnlp.154
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2353–2361
URL:
https://aclanthology.org/2023.findings-emnlp.154
DOI:
10.18653/v1/2023.findings-emnlp.154
Cite (ACL):
Xingshan Zeng, Liangyou Li, and Qun Liu. 2023. AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2353–2361, Singapore. Association for Computational Linguistics.
Cite (Informal):
AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation (Zeng et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.154.pdf