Comparative Study of Models Trained on Synthetic Data for Ukrainian Grammatical Error Correction

Maksym Bondarenko, Artem Yushko, Andrii Shportko, Andrii Fedorych


Abstract
The task of Grammatical Error Correction (GEC) has been extensively studied for the English language. However, its application to low-resource languages, such as Ukrainian, remains an open challenge. In this paper, we develop sequence tagging and neural machine translation models for the Ukrainian language, as well as a set of algorithmic correction rules to augment those systems. We also develop synthetic data generation techniques for Ukrainian that produce high-quality, human-like errors. Finally, we determine the best combination of synthetically generated data for augmenting the existing UA-GEC corpus and achieve state-of-the-art results, with an F0.5 score of 0.663 on the newly established UA-GEC benchmark. The code and trained models will be made publicly available on GitHub and HuggingFace.
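For readers unfamiliar with the metric, F0.5 is the weighted F-measure commonly used in GEC evaluation; it combines precision P and recall R while weighting precision more heavily than recall, since proposing a wrong correction is usually considered worse than missing one. A minimal statement of the standard definition:

\[
F_{0.5} = \frac{(1 + 0.5^2)\, P \cdot R}{0.5^2 \cdot P + R}
\]

For instance, P = 0.75 and R = 0.50 give F0.5 = (1.25 × 0.375) / (0.1875 + 0.5) ≈ 0.68.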
Anthology ID: 2023.unlp-1.13
Volume: Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP)
Month: May
Year: 2023
Address: Dubrovnik, Croatia
Editor: Mariana Romanyshyn
Venue: UNLP
Publisher: Association for Computational Linguistics
Pages: 103–113
URL: https://aclanthology.org/2023.unlp-1.13
DOI: 10.18653/v1/2023.unlp-1.13
Cite (ACL): Maksym Bondarenko, Artem Yushko, Andrii Shportko, and Andrii Fedorych. 2023. Comparative Study of Models Trained on Synthetic Data for Ukrainian Grammatical Error Correction. In Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP), pages 103–113, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal): Comparative Study of Models Trained on Synthetic Data for Ukrainian Grammatical Error Correction (Bondarenko et al., UNLP 2023)
PDF: https://aclanthology.org/2023.unlp-1.13.pdf
Video: https://aclanthology.org/2023.unlp-1.13.mp4