Grammatical Error Correction for Sentence-level Assessment in Language Learning

Anisia Katinskaia, Roman Yangarber


Abstract
The paper presents experiments on using a Grammatical Error Correction (GEC) model to assess the correctness of answers that language learners give to grammar exercises. We explore whether a GEC model can be applied in the language-learning context for a language with complex morphology. We empirically test the hypothesis that a GEC model corrects only errors and leaves correct answers unchanged. The assessment of learner answers is tested in a real but constrained language-learning setup: the learners answer only fill-in-the-blank and multiple-choice exercises. For this purpose, we use ReLCo, a publicly available, manually annotated learner dataset for Russian (Katinskaia et al., 2022). In this experiment, we fine-tune a large-scale T5 language model for the GEC task and estimate its performance on the RULEC-GEC dataset (Rozovskaya and Roth, 2019) to compare it with top-performing models. We also release an updated version of the RULEC-GEC test set, manually checked by native speakers. Our analysis shows that the GEC model performs reasonably well in detecting erroneous answers to grammar exercises and can potentially be used in a real learning setup for the error types on which it performs best. Under the aforementioned hypothesis, however, it struggles to assess answers that human annotators tagged as alternative-correct. This is largely due to the still low recall in correcting errors, and to the fact that the GEC model may modify even correct words: it may generate plausible alternatives, which are hard to evaluate against the gold-standard reference.
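The assessment described in the abstract rests on a simple decision rule: a learner answer is judged correct if the GEC model returns the input sentence unchanged, and erroneous otherwise. The sketch below illustrates that rule with a generic Hugging Face seq2seq interface; the checkpoint path and decoding parameters are hypothetical placeholders, not the paper's actual fine-tuned T5 model or settings.

# Minimal sketch of the "correct iff unchanged" assessment rule.
# The checkpoint path is a placeholder; the paper fine-tunes its own
# T5 model for Russian GEC.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "path/to/t5-gec-russian"  # hypothetical fine-tuned GEC checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def is_answer_correct(sentence: str) -> bool:
    """Judge a learner sentence as correct if the GEC model leaves it unchanged."""
    inputs = tokenizer(sentence, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    corrected = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return corrected.strip() == sentence.strip()

# Example: a fill-in-the-blank sentence with the learner's answer inserted.
# print(is_answer_correct("Он читает книгу каждый вечер."))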
Anthology ID:
2023.bea-1.41
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
488–502
URL:
https://aclanthology.org/2023.bea-1.41
DOI:
10.18653/v1/2023.bea-1.41
Cite (ACL):
Anisia Katinskaia and Roman Yangarber. 2023. Grammatical Error Correction for Sentence-level Assessment in Language Learning. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 488–502, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Grammatical Error Correction for Sentence-level Assessment in Language Learning (Katinskaia & Yangarber, BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.41.pdf
Video:
https://aclanthology.org/2023.bea-1.41.mp4