Evaluating Lexical Aspect with Large Language Models

Bolei Ma


Abstract
In this study, we explore the proficiency of large language models (LLMs) in understanding two key lexical aspects: duration (durative/stative) and telicity (telic/atelic). Through experiments on datasets annotated with sentences, target verbs, and verb positions, we prompt the LLMs to identify aspectual features of verbs in sentences. Our findings reveal that certain LLMs, particularly the closed-source ones, can capture information on duration and telicity, albeit with some performance variation and weaker results than the baseline. By employing prompts at three levels (sentence-only, sentence with verb, and sentence with verb and its position), we demonstrate that integrating verb information generally enhances performance in aspectual feature recognition, though it introduces instability. We call on future research to investigate methods for optimizing LLMs for aspectual feature comprehension.
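The three prompting levels described in the abstract can be sketched as a single prompt builder that progressively adds verb information. This is a minimal illustration, not the paper's actual templates: the prompt wording, label phrasing, and function name are assumptions.

```python
def build_prompt(sentence, verb=None, position=None, feature="telicity"):
    """Build a classification prompt at one of the three levels:
    sentence-only, sentence + verb, or sentence + verb + position.
    Hypothetical wording; the paper's exact templates may differ."""
    labels = {"telicity": "telic or atelic", "duration": "durative or stative"}
    prompt = f'Sentence: "{sentence}"\n'
    if verb is not None:
        # Level 2: add the target verb.
        prompt += f'Target verb: "{verb}"\n'
        if position is not None:
            # Level 3: additionally give the verb's token index.
            prompt += f"Verb position (token index): {position}\n"
    prompt += f"Is the verb {labels[feature]}? Answer with one word."
    return prompt

# Level 3 example: sentence with verb and its position.
print(build_prompt("She built a house.", verb="built", position=1))
```

The resulting string would then be sent to an LLM, and the one-word answer compared against the gold aspectual label.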
Anthology ID:
2024.cmcl-1.11
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Yohei Oseki
Venues:
CMCL | WS
Publisher:
Association for Computational Linguistics
Pages:
123–131
URL:
https://aclanthology.org/2024.cmcl-1.11
Cite (ACL):
Bolei Ma. 2024. Evaluating Lexical Aspect with Large Language Models. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 123–131, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Evaluating Lexical Aspect with Large Language Models (Ma, CMCL-WS 2024)
PDF:
https://aclanthology.org/2024.cmcl-1.11.pdf