Grammatical Error Correction with Neural Reinforcement Learning

Keisuke Sakaguchi, Matt Post, Benjamin Van Durme


Abstract
We propose a neural encoder-decoder model with reinforcement learning (NRL) for grammatical error correction (GEC). Unlike conventional maximum likelihood estimation (MLE), the model directly optimizes an objective based on a sentence-level, task-specific evaluation metric, avoiding the exposure bias issue in MLE. We demonstrate that NRL outperforms MLE in both human and automated evaluation, achieving state-of-the-art results on a fluency-oriented GEC corpus.
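The core idea of optimizing a sentence-level reward rather than token-level likelihood can be illustrated with a minimal REINFORCE-style policy-gradient sketch. This is a hypothetical toy, not the authors' code: the candidate corrections, their reward scores (standing in for a sentence-level metric such as GLEU), and the learning rate are all assumptions for illustration.

```python
import math
import random

random.seed(0)

# Toy setup (assumed): three candidate corrections for one source sentence,
# each with a stand-in sentence-level reward playing the role of the
# task metric the paper optimizes.
candidates = ["he go to school", "he goes to school", "he going to school"]
rewards    = [0.2,               1.0,                 0.4]

# Policy parameters: one logit per candidate (a stand-in for the
# encoder-decoder's output distribution).
logits = [0.0, 0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

lr = 0.5
for step in range(500):
    probs = softmax(logits)
    i = sample(probs)                                  # sample a correction
    baseline = sum(p * r for p, r in zip(probs, rewards))  # variance-reducing baseline
    advantage = rewards[i] - baseline
    # REINFORCE: gradient of log pi(i) w.r.t. logits is (one-hot_i - probs)
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * advantage * grad

final_probs = softmax(logits)
best = max(range(len(candidates)), key=lambda j: final_probs[j])
print(candidates[best])
```

After training, the policy concentrates its probability mass on the highest-reward correction, whereas MLE would only ever push probability toward gold tokens regardless of the sentence-level metric.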
Anthology ID:
I17-2062
Volume:
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2017
Address:
Taipei, Taiwan
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
366–372
URL:
https://aclanthology.org/I17-2062
PDF:
https://aclanthology.org/I17-2062.pdf
Data
FCE | JFLEG