Thinking about how to extract: Energizing LLMs’ emergence capabilities for document-level event argument extraction

Kai Shuang, Zhouji Zhouji, Wang Qiwei, Jinyu Guo


Abstract
Two key challenges remain for the document-level event argument extraction (D-EAE) task: key feature forgetting and cross-event argument confusion. The emergence capability of large language models (LLMs) holds promise for solving these two challenges. In this paper, we propose a document-level event argument extraction method based on guided summarization and reasoning (EAESR), which leverages the emergence capabilities of LLMs to highlight key event information and to clarify the explicit and implicit associations among multiple events. Specifically, we generate document summarization information that shortens the event context while preserving the key event features. In addition, we generate inter-event reasoning information, which helps EAESR make sense of the correlations between events and reduces its dependence on the event context, which is particularly helpful for the few-shot D-EAE task. We then obtain named entity information so that EAESR can learn argument boundary features and improve the sensitivity of its argument boundary recognition. Finally, we fuse the above features with sentence features so that EAESR has summarizing and reasoning capabilities simultaneously. Extensive experiments on WIKIEVENTS and RAMS show that EAESR achieves a new state of the art, outperforming baseline models by 1.3% and 1.6% F1, respectively, and by an average of 11% F1 in few-shot settings.
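The abstract describes fusing four feature streams (sentence features plus LLM-generated summarization, inter-event reasoning, and named entity information). The following is a minimal sketch of such a fusion step, assuming PyTorch-style token features of a fixed hidden size; the FeatureFusion module, its dimensions, and the concatenate-and-project design are hypothetical illustrations of the idea, not the paper's actual architecture.

```python
# Illustrative sketch (not the authors' released code): fuse sentence features
# with summary, inter-event reasoning, and named-entity features.
import torch
import torch.nn as nn


class FeatureFusion(nn.Module):
    """Concatenate the four feature streams and project back to hidden size."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.proj = nn.Linear(4 * hidden_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, sent, summary, reasoning, ner):
        # Each tensor: (batch, seq_len, hidden_size); the summary, reasoning,
        # and NER streams are assumed to be aligned to the sentence tokens.
        fused = torch.cat([sent, summary, reasoning, ner], dim=-1)
        return self.act(self.proj(fused))


if __name__ == "__main__":
    batch, seq_len, hidden = 2, 128, 768
    streams = [torch.randn(batch, seq_len, hidden) for _ in range(4)]
    out = FeatureFusion(hidden)(*streams)
    print(out.shape)  # torch.Size([2, 128, 768])
```

The fused representation would then feed an argument-span classifier; how the paper actually aligns and combines these streams is described in the full text.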
Anthology ID:
2024.findings-acl.328
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5520–5532
URL:
https://aclanthology.org/2024.findings-acl.328
DOI:
10.18653/v1/2024.findings-acl.328
Cite (ACL):
Kai Shuang, Zhouji Zhouji, Wang Qiwei, and Jinyu Guo. 2024. Thinking about how to extract: Energizing LLMs’ emergence capabilities for document-level event argument extraction. In Findings of the Association for Computational Linguistics: ACL 2024, pages 5520–5532, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Thinking about how to extract: Energizing LLMs’ emergence capabilities for document-level event argument extraction (Shuang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.328.pdf