Evaluating Grammatical Well-Formedness in Large Language Models: A Comparative Study with Human Judgments

Zhuang Qiu, Xufeng Duan, Zhenguang Cai


Abstract
Research in artificial intelligence has witnessed a surge of large language models (LLMs) that demonstrate improved performance on various natural language processing tasks. This has sparked significant discussion about the extent to which LLMs emulate human linguistic cognition and usage. This study investigates the representation of grammatical well-formedness in LLMs, a critical aspect of linguistic knowledge. In three preregistered experiments, we collected grammaticality judgments for over 2,400 English sentences with varying structures from ChatGPT and Vicuna and compared them with human judgment data. The results reveal substantial alignment between LLM and human assessments of grammatical correctness, although the LLMs tend to be more conservative than humans in judging sentences as either grammatical or ungrammatical.
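The abstract does not specify how the judgments were elicited from the models. As a minimal sketch, the snippet below illustrates one plausible setup: sending each sentence with a yes/no prompt through the OpenAI chat API. The prompt wording, model name, and example sentences are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch of eliciting a binary grammaticality judgment
# from a chat LLM; the paper's actual prompt and settings are not
# given in this abstract.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Is the following English sentence grammatically well-formed? "
    "Answer with exactly one word, 'yes' or 'no'.\n\nSentence: {sentence}"
)

def judge_grammaticality(sentence: str, model: str = "gpt-3.5-turbo") -> str:
    """Return the model's one-word grammaticality judgment for a sentence."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # low-variance decoding for a judgment task
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
    )
    return response.choices[0].message.content.strip().lower()

# Example pair: an island-violating sentence and a grammatical control.
for s in ["What did you read the book that discussed?",
          "What did you read the book about?"]:
    print(s, "->", judge_grammaticality(s))
```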
Anthology ID:
2024.cmcl-1.16
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Yohei Oseki
Venues:
CMCL | WS
Publisher:
Association for Computational Linguistics
Pages:
189–198
URL:
https://aclanthology.org/2024.cmcl-1.16
DOI:
10.18653/v1/2024.cmcl-1.16
Cite (ACL):
Zhuang Qiu, Xufeng Duan, and Zhenguang Cai. 2024. Evaluating Grammatical Well-Formedness in Large Language Models: A Comparative Study with Human Judgments. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 189–198, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Evaluating Grammatical Well-Formedness in Large Language Models: A Comparative Study with Human Judgments (Qiu et al., CMCL-WS 2024)
PDF:
https://aclanthology.org/2024.cmcl-1.16.pdf