ProQE: Proficiency-wise Quality Estimation dataset for Grammatical Error Correction

Yujin Takahashi, Masahiro Kaneko, Masato Mita, Mamoru Komachi


Abstract
This study investigates how supervised quality estimation (QE) models for grammatical error correction (GEC) are affected by the proficiency of the learners who produced the data. QE models for GEC in prior work have achieved high correlations with manual evaluations. However, these results may not hold in real-world settings, because the data used in prior work are biased toward learners with relatively high proficiency levels. To address this issue, we created a QE dataset that covers multiple proficiency levels and explored the necessity of proficiency-wise evaluation for QE of GEC. Our experiments demonstrate that the proficiency of the evaluation dataset affects the performance of QE models, and that proficiency-wise evaluation helps create more robust models.
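To illustrate the proficiency-wise evaluation described in the abstract, the following is a minimal sketch (not the authors' code) that correlates a QE model's scores with human judgments separately for each proficiency level, alongside a pooled correlation. The proficiency labels, the example scores, and the choice of Pearson correlation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: proficiency-wise evaluation of a QE model by correlating its
# scores with human judgments per proficiency level (illustrative data only).
from scipy.stats import pearsonr

# Hypothetical data: (proficiency level, QE model score, human score)
examples = [
    ("beginner",     0.42, 0.30), ("beginner",     0.55, 0.60),
    ("beginner",     0.20, 0.25), ("intermediate", 0.70, 0.65),
    ("intermediate", 0.35, 0.40), ("intermediate", 0.80, 0.75),
    ("advanced",     0.90, 0.88), ("advanced",     0.60, 0.70),
    ("advanced",     0.75, 0.72),
]

# Per-level correlation: how well the QE model tracks human judgments
# within each proficiency bucket.
for level in sorted({lv for lv, _, _ in examples}):
    model = [m for lv, m, _ in examples if lv == level]
    human = [h for lv, _, h in examples if lv == level]
    r, _ = pearsonr(model, human)
    print(f"{level:>12}: Pearson r = {r:.3f}  (n = {len(model)})")

# Pooled correlation over all levels, for comparison.
pooled_r, _ = pearsonr([m for _, m, _ in examples],
                       [h for _, _, h in examples])
print(f"{'all levels':>12}: Pearson r = {pooled_r:.3f}")
```

A gap between the per-level and pooled correlations is the kind of signal the paper's proficiency-wise evaluation is meant to surface: a model can look strong on aggregate while underperforming on a particular proficiency level.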
Anthology ID:
2022.lrec-1.644
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
5994–6000
URL:
https://aclanthology.org/2022.lrec-1.644
Cite (ACL):
Yujin Takahashi, Masahiro Kaneko, Masato Mita, and Mamoru Komachi. 2022. ProQE: Proficiency-wise Quality Estimation dataset for Grammatical Error Correction. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 5994–6000, Marseille, France. European Language Resources Association.
Cite (Informal):
ProQE: Proficiency-wise Quality Estimation dataset for Grammatical Error Correction (Takahashi et al., LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.644.pdf