Large Language Model-Based Event Relation Extraction with Rationales

Zhilei Hu, Zixuan Li, Xiaolong Jin, Long Bai, Jiafeng Guo, Xueqi Cheng


Abstract
Event Relation Extraction (ERE) aims to extract various types of relations between different events within texts. Although Large Language Models (LLMs) have demonstrated impressive capabilities in many natural language processing tasks, existing LLM-based ERE methods still face three key challenges: (1) Time Inefficiency: the existing pairwise method of combining events and determining their relations is time-consuming for LLMs. (2) Low Coverage: when dealing with numerous events in a document, the limited generation length of fine-tuned LLMs restricts the coverage of their extraction results. (3) Lack of Rationale: essential rationales behind the results, which could enhance the reasoning ability of the model, are overlooked. To address these challenges, we propose LLMERE, an LLM-based approach with rationales for the ERE task. LLMERE transforms ERE into a question-and-answer task that may have multiple answers. By extracting all events related to a specified event at once, LLMERE reduces time complexity from O(n²) to O(n) compared to the pairwise method. LLMERE then enhances the coverage of extraction results through a partitioning strategy that highlights only a portion of the events in the document at a time. In addition to the extracted results, LLMERE is also required to generate the corresponding rationales behind them, in terms of event coreference information or transitive chains of event relations. Experimental results on three widely used datasets show that LLMERE achieves significant improvements over baseline methods.
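The complexity reduction described above can be illustrated with a minimal sketch (not the authors' code; prompt wording and function names are hypothetical): the pairwise scheme issues one LLM query per event pair, while the per-event scheme issues one query per event, asking for all related events at once.

```python
# Illustrative sketch of the two query-construction schemes discussed in
# the abstract. Prompt templates here are hypothetical placeholders.
from itertools import combinations


def pairwise_queries(events):
    """Pairwise scheme: one query per event pair -> O(n^2) queries."""
    return [
        f"What is the relation between '{a}' and '{b}'?"
        for a, b in combinations(events, 2)
    ]


def per_event_queries(events):
    """Per-event scheme: one query per event -> O(n) queries."""
    return [
        f"List all events related to '{e}' and their relation types."
        for e in events
    ]


events = ["earthquake", "evacuation", "rescue", "aid delivery"]
print(len(pairwise_queries(events)))   # 6 queries for 4 events
print(len(per_event_queries(events)))  # 4 queries for 4 events
```

For a document with n events, the pairwise scheme requires n(n-1)/2 queries, whereas the per-event scheme requires only n, which is why the paper reports a drop from O(n²) to O(n).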
Anthology ID:
2025.coling-main.500
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
7484–7496
URL:
https://aclanthology.org/2025.coling-main.500/
Cite (ACL):
Zhilei Hu, Zixuan Li, Xiaolong Jin, Long Bai, Jiafeng Guo, and Xueqi Cheng. 2025. Large Language Model-Based Event Relation Extraction with Rationales. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7484–7496, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Large Language Model-Based Event Relation Extraction with Rationales (Hu et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.500.pdf