Revisiting Event Argument Extraction: Can EAE Models Learn Better When Being Aware of Event Co-occurrences?

Yuxin He, Jingyue Hu, Buzhou Tang


Abstract
Event co-occurrences have been proven effective for event extraction (EE) in previous studies, but have not been considered for event argument extraction (EAE) recently. In this paper, we try to fill this gap between EE research and EAE research, by highlighting the question “Can EAE models learn better when being aware of event co-occurrences?”. To answer this question, we reformulate EAE as a problem of table generation and extend a SOTA prompt-based EAE model into a non-autoregressive generation framework, called TabEAE, which is able to extract the arguments of multiple events in parallel. Under this framework, we experiment with 3 different training-inference schemes on 4 datasets (ACE05, RAMS, WikiEvents and MLEE) and discover that by training the model to extract all events in parallel, it can better distinguish the semantic boundary of each event, and its ability to extract a single event is substantially improved. Experimental results show that our method achieves new state-of-the-art performance on the 4 datasets. Our code is available at https://github.com/Stardust-hyx/TabEAE.
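To make the table-generation view concrete, below is a minimal Python sketch (not the authors' implementation; all names such as EventRow and build_table are hypothetical) of the table format the abstract describes: each co-occurring event in a passage is one row, each argument role is one cell, so all rows can in principle be filled in parallel rather than one event at a time.

```python
# Conceptual sketch of the "table" view of EAE described in the abstract.
# Each row corresponds to one event (trigger + type); columns hold its argument roles.
# This is an illustrative data structure, not the TabEAE model or its API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EventRow:
    trigger: str                                     # trigger span in the text
    event_type: str                                  # e.g. "Conflict.Attack"
    arguments: Dict[str, List[str]] = field(default_factory=dict)  # role -> argument spans


def build_table(events: List[EventRow]) -> List[Dict[str, str]]:
    """Flatten per-event rows into a role-keyed table for inspection."""
    table = []
    for ev in events:
        row = {"trigger": ev.trigger, "type": ev.event_type}
        for role, spans in ev.arguments.items():
            row[role] = "; ".join(spans)
        table.append(row)
    return table


if __name__ == "__main__":
    # Two co-occurring events from the same passage, represented jointly.
    events = [
        EventRow("fired", "Conflict.Attack",
                 {"Attacker": ["the soldiers"], "Target": ["the convoy"]}),
        EventRow("wounded", "Life.Injure",
                 {"Victim": ["three civilians"], "Instrument": ["shrapnel"]}),
    ]
    for row in build_table(events):
        print(row)
```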
Anthology ID:
2023.acl-long.701
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
12542–12556
URL:
https://aclanthology.org/2023.acl-long.701
DOI:
10.18653/v1/2023.acl-long.701
Cite (ACL):
Yuxin He, Jingyue Hu, and Buzhou Tang. 2023. Revisiting Event Argument Extraction: Can EAE Models Learn Better When Being Aware of Event Co-occurrences?. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12542–12556, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Revisiting Event Argument Extraction: Can EAE Models Learn Better When Being Aware of Event Co-occurrences? (He et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.701.pdf
Video:
https://aclanthology.org/2023.acl-long.701.mp4