Automatic Grammatical Error Correction for Sequence-to-sequence Text Generation: An Empirical Study

Tao Ge, Xingxing Zhang, Furu Wei, Ming Zhou


Abstract
Sequence-to-sequence (seq2seq) models have achieved tremendous success in text generation tasks. However, there is no guarantee that they can always generate sentences without grammatical errors. In this paper, we present a preliminary empirical study on whether and how much automatic grammatical error correction can help improve seq2seq text generation. We conduct experiments across various seq2seq text generation tasks, including machine translation, formality style transfer, sentence compression, and sentence simplification. Experiments show that a state-of-the-art grammatical error correction system can improve the grammaticality of generated text and can bring task-oriented improvements in tasks where the target sentences are in a formal style.
Anthology ID:
P19-1609
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6059–6064
URL:
https://aclanthology.org/P19-1609
DOI:
10.18653/v1/P19-1609
Cite (ACL):
Tao Ge, Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. Automatic Grammatical Error Correction for Sequence-to-sequence Text Generation: An Empirical Study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6059–6064, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Automatic Grammatical Error Correction for Sequence-to-sequence Text Generation: An Empirical Study (Ge et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1609.pdf
Supplementary:
P19-1609.Supplementary.pdf
Data:
GYAFC, Sentence Compression