Adapting Multilingual Models for Code-Mixed Translation

Aditya Vavre, Abhirut Gupta, Sunita Sarawagi


Abstract
The scarcity of gold standard code-mixed to pure language parallel data makes it difficult to train translation models reliably. Prior work has addressed the paucity of parallel data with data augmentation techniques. Such methods rely heavily on external resources, making systems difficult to train and scale effectively for multiple languages. We present a simple yet highly effective two-stage back-translation based training scheme for adapting multilingual models to the task of code-mixed translation, which eliminates dependence on external resources. We show a substantial improvement in translation quality (measured through BLEU), beating existing prior work by up to +3.8 BLEU on code-mixed Hi→En, Mr→En, and Bn→En tasks. On the LinCE Machine Translation leaderboard, we achieve the highest score for code-mixed Es→En, beating the existing best baseline by +6.5 BLEU, and our own stronger baseline by +1.1 BLEU.
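The two-stage scheme described above can be sketched in outline: stage one back-translates monolingual pure-language sentences into synthetic code-mixed sources, and stage two adapts a pretrained multilingual model on those synthetic pairs (plus any gold data). Everything below is a hypothetical illustration with placeholder functions, not the paper's actual models or training code:

```python
# Minimal sketch of two-stage back-translation for code-mixed translation.
# All names here (backward_translate, the "<cm>" tag, the toy model dict)
# are illustrative placeholders, not the paper's implementation.

def backward_translate(sentence: str) -> str:
    # Hypothetical stand-in for a pure-language -> code-mixed model.
    # A real system would run an actual backward translation model here;
    # we just tag the sentence to mark it as a synthetic source.
    return f"<cm> {sentence}"

def build_synthetic_parallel(monolingual_targets: list[str]) -> list[tuple[str, str]]:
    """Stage 1: create (synthetic code-mixed source, pure target) pairs
    by back-translating monolingual target-side sentences."""
    return [(backward_translate(t), t) for t in monolingual_targets]

def adapt_two_stage(pretrained: dict, synthetic_pairs, gold_pairs) -> dict:
    """Stage 2: adapt the pretrained multilingual model on the synthetic
    pairs, then optionally fine-tune on any small gold code-mixed set.
    Training is elided; we only record what the model was exposed to."""
    model = dict(pretrained)  # copy of the (toy) model state
    model["adapted_on_synthetic"] = len(synthetic_pairs)
    model["finetuned_on_gold"] = len(gold_pairs)
    return model

# Toy usage: two monolingual English sentences, no gold data.
mono = ["I will reach the station by five.", "The movie was great."]
pairs = build_synthetic_parallel(mono)
model = adapt_two_stage({"name": "multilingual-base"}, pairs, gold_pairs=[])
```

The key design point, as the abstract emphasizes, is that nothing in this loop requires external resources such as dictionaries or transliteration tools: the synthetic source side is generated entirely by a model trained within the pipeline.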
Anthology ID:
2022.findings-emnlp.528
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7133–7141
URL:
https://aclanthology.org/2022.findings-emnlp.528
DOI:
10.18653/v1/2022.findings-emnlp.528
Cite (ACL):
Aditya Vavre, Abhirut Gupta, and Sunita Sarawagi. 2022. Adapting Multilingual Models for Code-Mixed Translation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 7133–7141, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Adapting Multilingual Models for Code-Mixed Translation (Vavre et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.528.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.528.mp4