CTC-based Non-autoregressive Textless Speech-to-Speech Translation

Qingkai Fang, Zhengrui Ma, Yan Zhou, Min Zhang, Yang Feng


Abstract
Direct speech-to-speech translation (S2ST) has achieved impressive translation quality, but it often faces the challenge of slow decoding due to the considerable length of speech sequences. Recently, some research has turned to non-autoregressive (NAR) models to expedite decoding, yet the translation quality typically lags behind autoregressive (AR) models significantly. In this paper, we investigate the performance of CTC-based NAR models in S2ST, as these models have shown impressive results in machine translation. Experimental results demonstrate that by combining pretraining, knowledge distillation, and advanced NAR training techniques such as glancing training and non-monotonic latent alignments, CTC-based NAR models achieve translation quality comparable to the AR model, while preserving up to 26.81× decoding speedup.
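The central technique named in the abstract is training a non-autoregressive model with a CTC objective over discrete target speech units. Purely as an illustration of that objective (not the authors' implementation), the sketch below shows how a CTC loss over unit targets could be computed with PyTorch's built-in torch.nn.CTCLoss; the unit vocabulary size, sequence lengths, and the stand-in encoder output are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code): CTC loss over discrete target
# speech units for a non-autoregressive decoder, via torch.nn.CTCLoss.
# Vocabulary size, lengths, and the "encoder" tensor are hypothetical.
import torch
import torch.nn as nn

NUM_UNITS = 1000          # hypothetical size of the discrete unit vocabulary
BLANK_ID = NUM_UNITS      # reserve the last index for the CTC blank symbol

# Stand-in decoder output: in practice this would come from a speech encoder
# plus an upsampled NAR decoder producing one distribution per output frame.
batch, frames, hidden = 4, 200, 256
decoder_out = torch.randn(batch, frames, hidden)
proj = nn.Linear(hidden, NUM_UNITS + 1)          # +1 for the blank symbol
log_probs = proj(decoder_out).log_softmax(-1)    # (batch, T, vocab)

# CTCLoss expects log-probabilities shaped (T, batch, vocab).
log_probs = log_probs.transpose(0, 1)

# Hypothetical padded target unit sequences and their true lengths.
target_units = torch.randint(0, NUM_UNITS, (batch, 120))
input_lengths = torch.full((batch,), frames, dtype=torch.long)
target_lengths = torch.randint(80, 121, (batch,), dtype=torch.long)

ctc = nn.CTCLoss(blank=BLANK_ID, zero_infinity=True)
loss = ctc(log_probs, target_units, input_lengths, target_lengths)
print(loss.item())
```

Because CTC marginalizes over all monotonic alignments between output frames and target units, decoding can emit all positions in parallel, which is the source of the speedup reported above.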
Anthology ID: 2024.findings-acl.543
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 9155–9161
URL: https://aclanthology.org/2024.findings-acl.543
DOI: 10.18653/v1/2024.findings-acl.543
Cite (ACL):
Qingkai Fang, Zhengrui Ma, Yan Zhou, Min Zhang, and Yang Feng. 2024. CTC-based Non-autoregressive Textless Speech-to-Speech Translation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9155–9161, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
CTC-based Non-autoregressive Textless Speech-to-Speech Translation (Fang et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.543.pdf