Neural Machine Translation of Text from Non-Native Speakers

Antonios Anastasopoulos, Alison Lui, Toan Q. Nguyen, David Chiang


Abstract
Neural Machine Translation (NMT) systems are known to degrade when confronted with noisy data, especially when the system is trained only on clean data. In this paper, we show that augmenting training data with sentences containing artificially-introduced grammatical errors can make the system more robust to such errors. In combination with an automatic grammar error correction system, we can recover 1.0 BLEU out of 2.4 BLEU lost due to grammatical errors. We also present a set of Spanish translations of the JFLEG grammar error correction corpus, which allows for testing NMT robustness to real grammatical errors.
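The augmentation described in the abstract can be illustrated with a minimal sketch: corrupting clean source sentences with error types characteristic of non-native writing (here, article deletion and preposition confusion) before adding the noised copies to the training data. The error types, probabilities, and helper names below are illustrative assumptions, not the paper's exact configuration.

```python
import random

# Hypothetical noise-injection sketch: simulate grammatical errors that
# non-native speakers commonly make, so an NMT system can be trained on
# (noisy source, clean target) pairs alongside clean data.
ARTICLES = {"a", "an", "the"}
PREPOSITIONS = ["in", "on", "at", "for", "to", "of"]

def corrupt(sentence, p_drop_article=0.5, p_swap_prep=0.5, seed=None):
    """Return a noised copy of a whitespace-tokenized English sentence."""
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        low = tok.lower()
        if low in ARTICLES and rng.random() < p_drop_article:
            continue  # dropped article: "I saw the dog" -> "I saw dog"
        if low in PREPOSITIONS and rng.random() < p_swap_prep:
            # preposition confusion: "good at English" -> "good in English"
            out.append(rng.choice([p for p in PREPOSITIONS if p != low]))
            continue
        out.append(tok)
    return " ".join(out)

clean = "the cat sat on the mat"
noisy = corrupt(clean, seed=0)
```

In practice each clean training pair would be duplicated with a corrupted source side, keeping the target translation unchanged, so the model learns to translate through the noise.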
Anthology ID:
N19-1311
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3070–3080
URL:
https://aclanthology.org/N19-1311
DOI:
10.18653/v1/N19-1311
Cite (ACL):
Antonios Anastasopoulos, Alison Lui, Toan Q. Nguyen, and David Chiang. 2019. Neural Machine Translation of Text from Non-Native Speakers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3070–3080, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Neural Machine Translation of Text from Non-Native Speakers (Anastasopoulos et al., NAACL 2019)
PDF:
https://aclanthology.org/N19-1311.pdf
Software:
 N19-1311.Software.zip
Video:
 https://vimeo.com/361713720
Code
 antonis/nmt-grammar-noise