Dingyi Zeng
Also published as: DingYi Zeng
2024
Beyond Single-Event Extraction: Towards Efficient Document-Level Multi-Event Argument Extraction
Wanlong Liu | Li Zhou | DingYi Zeng | Yichen Xiao | Shaohuan Cheng | Chen Zhang | Grandee Lee | Malu Zhang | Wenyu Chen
Findings of the Association for Computational Linguistics: ACL 2024
Recent mainstream event argument extraction methods process each event in isolation, resulting in inefficient inference and ignoring the correlations among multiple events. To address these limitations, we propose DEEIA (Dependency-guided Encoding and Event-specific Information Aggregation), a multi-event argument extraction model capable of extracting arguments from all events within a document simultaneously. DEEIA employs a multi-event prompt mechanism comprising a DE module and an EIA module. The DE module is designed to improve the correlation between prompts and their corresponding event contexts, while the EIA module provides event-specific information to improve contextual understanding. Extensive experiments show that our method achieves new state-of-the-art performance on four public datasets (RAMS, WikiEvents, MLEE, and ACE05) while significantly reducing inference time compared to the baselines. Further analyses demonstrate the effectiveness of the proposed modules.
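As a rough illustration of the single-pass idea described in the abstract, the sketch below encodes a document together with prompts for all of its events in one forward pass instead of one pass per event. The encoder name, prompt templates, and variable names are illustrative assumptions, not details from the paper, and the DE and EIA modules themselves are not shown.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative sketch only: encoder and prompt templates are assumptions,
# not the DEEIA configuration from the paper.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

document = "The company acquired the startup after the CEO resigned."
event_prompts = [
    "Event 1 (acquisition): <buyer> acquired <target>.",
    "Event 2 (resignation): <person> resigned from <organization>.",
]

# Concatenate the prompts of all events behind the document so that the
# arguments of every event can be decoded from a single forward pass.
text = document + " " + tokenizer.sep_token + " " + " ".join(event_prompts)
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, d)

# Argument spans for all events would then be predicted from `hidden`
# in one shot, rather than re-encoding the document once per event.
print(hidden.shape)
```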
2023
Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance
Wanlong Liu | Shaohuan Cheng | Dingyi Zeng | Qu Hong
Findings of the Association for Computational Linguistics: ACL 2023
Document-level event argument extraction poses new challenges of long inputs and cross-sentence inference compared to its sentence-level counterpart. However, most prior work focuses on capturing the relations between candidate arguments and the event trigger in each event, ignoring two crucial points: (a) non-argument contextual clue information and (b) the relevance among argument roles. In this paper, we propose SCPRG (Span-trigger-based Contextual Pooling and latent Role Guidance), a model containing two novel and effective modules that address these issues. The Span-Trigger-based Contextual Pooling (STCP) module adaptively selects and aggregates the information of non-argument clue words based on the context attention weights of specific argument-trigger pairs from the pre-trained model. The Role-based Latent Information Guidance (RLIG) module constructs latent role representations, lets them interact through role-interactive encoding to capture semantic relevance, and merges them into candidate arguments. Both STCP and RLIG introduce no more than 1% new parameters compared with the base model and can be easily applied to other event extraction models, making them compact and transplantable. Experiments on two public datasets show that SCPRG outperforms previous state-of-the-art methods, with improvements of 1.13 F1 on RAMS and 2.64 F1 on WikiEvents. Further analyses illustrate the interpretability of our model.
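The following minimal sketch illustrates the kind of attention-guided pooling the STCP description suggests: context tokens are weighted by how strongly both the candidate argument span and the trigger attend to them. The function name, the averaging over span tokens, and the multiplicative combination of the two attention distributions are assumptions made for illustration, not the authors' implementation.

```python
import torch

def span_trigger_contextual_pooling(hidden, attn, arg_idx, trig_idx):
    """Hypothetical sketch of span-trigger-based contextual pooling.

    hidden:   (seq_len, d) token embeddings from the pre-trained encoder
    attn:     (seq_len, seq_len) attention weights averaged over heads/layers
    arg_idx:  token positions of the candidate argument span
    trig_idx: token positions of the event trigger
    Returns a pooled context vector emphasising tokens attended to by
    both the argument span and the trigger.
    """
    # Average attention that the argument span and the trigger pay to each token
    a_arg = attn[arg_idx].mean(dim=0)    # (seq_len,)
    a_trig = attn[trig_idx].mean(dim=0)  # (seq_len,)

    # Combine the two distributions multiplicatively and renormalise
    weights = a_arg * a_trig
    weights = weights / (weights.sum() + 1e-12)

    # Weighted sum over token embeddings yields the clue-word context vector
    return weights @ hidden              # (d,)

# Toy usage with random tensors (seq_len=10, hidden size=16)
hidden = torch.randn(10, 16)
attn = torch.softmax(torch.randn(10, 10), dim=-1)
ctx = span_trigger_contextual_pooling(hidden, attn, arg_idx=[3, 4], trig_idx=[7])
print(ctx.shape)  # torch.Size([16])
```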
Co-authors
- Wanlong Liu 2
- Shaohuan Cheng 2
- Qu Hong 1
- Li Zhou 1
- Yichen Xiao 1