Addressing the Vulnerability of NMT in Input Perturbations
Weiwen Xu | Ai Ti Aw | Yang Ding | Kui Wu | Shafiq Joty
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers
Neural Machine Translation (NMT) has achieved significant breakthroughs in performance but is known to be vulnerable to input perturbations. As real input noise is difficult to predict during training, robustness is a major concern for system deployment. In this paper, we improve the robustness of NMT models by reducing the effect of noisy words through a Context-Enhanced Reconstruction (CER) approach. CER trains the model to resist noise in two steps: (1) a perturbation step that breaks the naturalness of the input sequence with made-up words; (2) a reconstruction step that defends against noise propagation by generating better and more robust contextual representations. Experimental results on Chinese-English (ZH-EN) and French-English (FR-EN) translation tasks demonstrate robustness improvements on both news and social media text. Further fine-tuning experiments on social media text show that our approach converges at a higher point and provides better adaptation.
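The abstract only names the two CER steps, so the following is a minimal, hypothetical sketch of that idea rather than the paper's actual method: a `perturb` function replaces tokens with made-up words, and a toy `TinyEncoder` plus an MSE reconstruction loss (both assumptions, not from the paper) stand in for the NMT encoder and the paper's reconstruction objective.

```python
import random
import torch
import torch.nn as nn

# Hypothetical vocabulary; <MADEUP_k> tokens stand in for the "made-up
# words" the abstract says are injected during the perturbation step.
VOCAB = ["<pad>", "the", "cat", "sat", "on", "mat"] + [f"<MADEUP_{k}>" for k in range(3)]
STOI = {w: i for i, w in enumerate(VOCAB)}
MADEUP_IDS = [STOI[f"<MADEUP_{k}>"] for k in range(3)]

def perturb(token_ids, rate=0.15):
    """Perturbation step: replace a fraction of tokens with made-up words."""
    noisy = list(token_ids)
    for i in range(len(noisy)):
        if random.random() < rate:
            noisy[i] = random.choice(MADEUP_IDS)
    return noisy

class TinyEncoder(nn.Module):
    """Stand-in contextual encoder (the real model is a full NMT encoder)."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return h  # one contextual representation per token

encoder = TinyEncoder(len(VOCAB))
clean = torch.tensor([[STOI[w] for w in ["the", "cat", "sat", "on", "mat"]]])
noisy = torch.tensor([perturb(clean[0].tolist())]).long()

# Reconstruction step (sketch): pull the contextual representations of the
# perturbed sequence toward those of the clean sequence, so noisy words do
# not propagate through the encoder. MSE is an assumption here; the paper's
# actual reconstruction objective may differ.
rec_loss = nn.functional.mse_loss(encoder(noisy), encoder(clean).detach())
rec_loss.backward()  # combined with the usual NMT translation loss in training
```

In an actual training loop this reconstruction loss would be added to the standard translation objective, so the encoder learns representations that are stable under the injected noise.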
Lexical Chain Based Cohesion Models for Document-Level Statistical Machine Translation
Deyi Xiong | Yang Ding | Min Zhang | Chew Lim Tan
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing