Machine Translation Robustness to Natural Asemantic Variation

Jacob Bremerman, Xiang Ren, Jonathan May


Abstract
Current Machine Translation (MT) models still struggle with more challenging input, such as noisy data and tail-end words and phrases. Several works have addressed this robustness issue by identifying specific categories of noise and variation, then tuning models to perform better on them. An important yet under-studied category involves minor variations in nuance (non-typos) that preserve meaning with respect to the target language. We introduce and formalize this category as Natural Asemantic Variation (NAV) and investigate it in the context of MT robustness. We find that existing MT models fail when presented with NAV data, but we demonstrate strategies to improve performance on NAV by fine-tuning them with human-generated variations. We also show that NAV robustness can be transferred across languages and find that synthetic perturbations can achieve some but not all of the benefits of organic NAV data.
Anthology ID:
2022.emnlp-main.230
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3517–3532
URL:
https://aclanthology.org/2022.emnlp-main.230
DOI:
10.18653/v1/2022.emnlp-main.230
Cite (ACL):
Jacob Bremerman, Xiang Ren, and Jonathan May. 2022. Machine Translation Robustness to Natural Asemantic Variation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3517–3532, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Machine Translation Robustness to Natural Asemantic Variation (Bremerman et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.230.pdf