A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck

Jie Zhou, Qi Zhang, Qin Chen, Qi Zhang, Liang He, Xuanjing Huang


Abstract
Event argument extraction (EAE) aims to extract arguments with given roles from texts, and has been widely studied in natural language processing. Most previous works achieve good performance on specific EAE datasets with dedicated neural architectures. However, these architectures are usually difficult to adapt to new datasets/scenarios with various annotation schemas or formats. Furthermore, they rely on large-scale labeled data for training, which is often unavailable due to the high labeling cost. In this paper, we propose a multi-format transfer learning model with variational information bottleneck, which makes use of the information, especially the common knowledge, in existing datasets for EAE in new datasets. Specifically, we introduce a shared-specific prompt framework to learn both format-shared and format-specific knowledge from datasets with different formats. To further absorb the common knowledge for EAE and eliminate irrelevant noise, we integrate a variational information bottleneck into our architecture to refine the shared representation. We conduct extensive experiments on three benchmark datasets, and obtain new state-of-the-art performance on EAE.
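The paper's exact objective is not reproduced on this page, but the variational information bottleneck it builds on has two standard ingredients: a reparameterized sample from the encoder's Gaussian posterior, and a KL penalty that compresses that posterior toward a standard-normal prior. A minimal, framework-free sketch of these two pieces (function names and the diagonal-Gaussian assumption are illustrative, not taken from the paper):

```python
import math
import random

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions.

    This is the compression term of the VIB objective: it penalizes how much
    the latent representation z deviates from the uninformative prior.
    """
    return 0.5 * sum(
        m * m + math.exp(lv) - 1.0 - lv
        for m, lv in zip(mu, log_var)
    )

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    The reparameterization trick keeps the sampling step differentiable
    with respect to the encoder outputs mu and log_var.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# An encoder output that already matches the prior pays zero KL cost:
assert kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]) == 0.0
```

In training, this KL term would be added (weighted by a small coefficient) to the task loss, so the shared representation keeps only the information useful for argument extraction.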
Anthology ID:
2022.coling-1.173
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
1990–2000
URL:
https://aclanthology.org/2022.coling-1.173
Cite (ACL):
Jie Zhou, Qi Zhang, Qin Chen, Qi Zhang, Liang He, and Xuanjing Huang. 2022. A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1990–2000, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck (Zhou et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.173.pdf