JHU IWSLT 2022 Dialect Speech Translation System Description

Jinyi Yang, Amir Hussein, Matthew Wiesner, Sanjeev Khudanpur


Abstract
This paper details the Johns Hopkins speech translation (ST) system used in the IWSLT 2022 dialect speech translation task. Our system uses a cascade of automatic speech recognition (ASR) and machine translation (MT). We use a Conformer model for ASR and a Transformer model for MT. Surprisingly, we found that while using additional ASR training data resulted in only a negligible change in performance as measured by BLEU or word error rate (WER), aggressive text normalization improved BLEU more significantly. We also describe an approach, similar to back-translation, for improving performance using synthetic dialectal source text produced from source sentences in mismatched dialects.
Anthology ID:
2022.iwslt-1.29
Volume:
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
Month:
May
Year:
2022
Address:
Dublin, Ireland (in-person and online)
Venues:
ACL | IWSLT
Publisher:
Association for Computational Linguistics
Pages:
319–326
URL:
https://aclanthology.org/2022.iwslt-1.29
DOI:
10.18653/v1/2022.iwslt-1.29
Cite (ACL):
Jinyi Yang, Amir Hussein, Matthew Wiesner, and Sanjeev Khudanpur. 2022. JHU IWSLT 2022 Dialect Speech Translation System Description. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 319–326, Dublin, Ireland (in-person and online). Association for Computational Linguistics.
Cite (Informal):
JHU IWSLT 2022 Dialect Speech Translation System Description (Yang et al., IWSLT 2022)
PDF:
https://aclanthology.org/2022.iwslt-1.29.pdf