Evaluating Robustness to Input Perturbations for Neural Machine Translation

Xing Niu, Prashant Mathur, Georgiana Dinu, Yaser Al-Onaizan


Abstract
Neural Machine Translation (NMT) models are sensitive to small perturbations in the input. Robustness to such perturbations is typically measured using translation quality metrics such as BLEU on the noisy input. This paper proposes additional metrics which measure the relative degradation and changes in translation when small perturbations are added to the input. We focus on a class of models employing subword regularization to address robustness and perform extensive evaluations of these models using the robustness measures proposed. Results show that our proposed metrics reveal a clear trend of improved robustness to perturbations when subword regularization methods are used.
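To make the abstract's notion of "relative degradation" concrete, below is a minimal, hedged sketch of one way such a measure could be computed: the ratio of corpus BLEU on translations of perturbed input to corpus BLEU on translations of the original input. This is an illustration only, not the paper's exact metric definitions; the function name robustness_ratio and the toy data are assumptions introduced here, and sacrebleu is assumed to be installed.

# Illustrative sketch (not the paper's exact formulation): quantify relative
# degradation as the ratio of translation quality on perturbed input to
# quality on the original input, using corpus-level BLEU from sacrebleu.
import sacrebleu

def robustness_ratio(hyps_clean, hyps_noisy, references):
    # hyps_clean / hyps_noisy: system translations of the original and the
    # perturbed source sentences; references: one reference per segment.
    # Values near 1.0 indicate little degradation under perturbation.
    bleu_clean = sacrebleu.corpus_bleu(hyps_clean, [references]).score
    bleu_noisy = sacrebleu.corpus_bleu(hyps_noisy, [references]).score
    return bleu_noisy / bleu_clean if bleu_clean > 0 else 0.0

# Toy usage example (hypothetical data):
refs = ["the cat sat on the mat"]
clean = ["the cat sat on the mat"]
noisy = ["the cat sat on a mat"]
print(robustness_ratio(clean, noisy, refs))

A complementary view, also described in the abstract, is to measure how much the translation itself changes when the input is perturbed (e.g., by comparing the clean-input and noisy-input translations directly), independently of the reference.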
Anthology ID:
2020.acl-main.755
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8538–8544
URL:
https://aclanthology.org/2020.acl-main.755
DOI:
10.18653/v1/2020.acl-main.755
Cite (ACL):
Xing Niu, Prashant Mathur, Georgiana Dinu, and Yaser Al-Onaizan. 2020. Evaluating Robustness to Input Perturbations for Neural Machine Translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8538–8544, Online. Association for Computational Linguistics.
Cite (Informal):
Evaluating Robustness to Input Perturbations for Neural Machine Translation (Niu et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.755.pdf
Video:
http://slideslive.com/38929225
Data
MTNT