Rethinking and Improving Multi-task Learning for End-to-end Speech Translation

Yuhao Zhang, Chen Xu, Bei Li, Hao Chen, Tong Xiao, Chunliang Zhang, Jingbo Zhu


Abstract
Significant improvements in end-to-end speech translation (ST) have been achieved through the application of multi-task learning. However, how consistent the auxiliary tasks are with the ST task, and how much this approach truly helps, have not been thoroughly studied. In this paper, we investigate the consistency between different tasks, considering different times and modules. We find that the textual encoder primarily facilitates cross-modal conversion, but the presence of noise in speech impedes the consistency between text and speech representations. Furthermore, we propose an improved multi-task learning (IMTL) approach for the ST task, which bridges the modal gap by mitigating the differences in length and representation. We conduct experiments on the MuST-C dataset. The results demonstrate that our method attains state-of-the-art results. Moreover, when additional data is used, we achieve a new SOTA result on the MuST-C English-to-Spanish task with only 20.8% of the training time required by the current SOTA method.
Anthology ID:
2023.emnlp-main.663
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10753–10765
URL:
https://aclanthology.org/2023.emnlp-main.663
DOI:
10.18653/v1/2023.emnlp-main.663
Cite (ACL):
Yuhao Zhang, Chen Xu, Bei Li, Hao Chen, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2023. Rethinking and Improving Multi-task Learning for End-to-end Speech Translation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10753–10765, Singapore. Association for Computational Linguistics.
Cite (Informal):
Rethinking and Improving Multi-task Learning for End-to-end Speech Translation (Zhang et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.663.pdf
Video:
https://aclanthology.org/2023.emnlp-main.663.mp4