Zikang Zhang
2025
A Survey of Generative Information Extraction
Zikang Zhang | Wangjie You | Tianci Wu | Xinrui Wang | Juntao Li | Min Zhang
Proceedings of the 31st International Conference on Computational Linguistics
Generative information extraction (Generative IE) aims to generate structured text sequences from unstructured text using a generative framework. Scaling model size changes adaptation and generalization behavior and also drives fundamental shifts in the techniques and approaches used in this domain. In this survey, we first review generative information extraction (IE) methods based on pre-trained language models (PLMs) and large language models (LLMs), focusing on their adaptation and generalization capabilities, and discuss how these methods relate to these two capabilities. Furthermore, to balance task performance with the substantial computational demands of LLMs, we emphasize the importance of model collaboration. Finally, given the advanced capabilities of LLMs, we explore methods for integrating diverse IE tasks into unified models.
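As a minimal illustrative sketch of the generative IE formulation described in this abstract (not the survey's own method or any specific model's interface): unstructured text goes in, a linearized structured sequence comes out, and that sequence is parsed back into records. The prompt template, the "(entity | type)" output format, and the helper names below are hypothetical assumptions for illustration.

```python
# Hypothetical sketch of generative IE: extraction framed as text generation.
# The prompt template, output format, and parsing logic are illustrative
# assumptions, not the survey's (or any particular system's) actual interface.

def build_prompt(text: str) -> str:
    """Wrap the input text in an instruction asking for (entity | type) pairs."""
    return (
        "Extract all named entities from the text below.\n"
        "Output one '(entity | type)' pair per line.\n\n"
        f"Text: {text}\nEntities:"
    )

def parse_generation(generated: str) -> list:
    """Parse the model's linearized output back into (entity, type) tuples."""
    records = []
    for line in generated.strip().splitlines():
        line = line.strip().strip("()")
        if "|" in line:
            entity, etype = (part.strip() for part in line.split("|", 1))
            records.append((entity, etype))
    return records

if __name__ == "__main__":
    text = "Barack Obama visited Paris in 2015."
    print(build_prompt(text))
    # Stand-in for a PLM/LLM generation (hypothetical output string):
    fake_generation = "(Barack Obama | PERSON)\n(Paris | LOCATION)\n(2015 | DATE)"
    print(parse_generation(fake_generation))
```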
2023
Efficient Continue Training of Temporal Language Model with Structural Information
Zhaochen Su | Juntao Li | Zikang Zhang | Zihan Zhou | Min Zhang
Findings of the Association for Computational Linguistics: EMNLP 2023
Current language models are mainly trained on snapshots of data gathered at a particular time, which limits their capability to generalize over time and to model language change. To model the time variable, existing works have explored temporal language models (e.g., TempoBERT) by directly incorporating the timestamp into the training process. While effective to some extent, these methods are limited by the superficial temporal information carried by timestamps, which fails to capture the inherent changes of linguistic components. In this paper, we empirically confirm that the performance of pre-trained language models (PLMs) is closely associated with syntactically changed tokens. Based on this observation, we propose a simple yet effective method named Syntax-Guided Temporal Language Model (SG-TLM), which learns inherent language changes by capturing the intrinsic relationship between the time prefix and tokens with salient syntactic change. Experiments on two datasets and three tasks demonstrate that our model outperforms existing temporal language models in both memorization and generalization capabilities. Extensive results further confirm the effectiveness of our approach across different model frameworks, including both encoder-only and decoder-only models (e.g., LLaMA). Our code is available at https://github.com/zhaochen0110/TempoLM.
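For context, the time-prefix conditioning idea that this abstract builds on (as in TempoBERT-style temporal language models) can be sketched as below. The "<year_YYYY>" token format and the helper names are illustrative assumptions, not the exact SG-TLM or TempoBERT implementation; the sketch also omits SG-TLM's syntax-guided focus on tokens with salient syntactic change.

```python
# Hypothetical sketch of time-prefix conditioning for temporal language modeling:
# each training sentence is prepended with a special token encoding its timestamp,
# so the language model can condition on time. The "<year_YYYY>" prefix format is
# an illustrative assumption, not the exact SG-TLM or TempoBERT implementation.

def add_time_prefix(sentence: str, year: int) -> str:
    """Prepend a special time token so the LM conditions on the document's year."""
    return f"<year_{year}> {sentence}"

def prepare_corpus(documents: list) -> list:
    """Attach time prefixes to (sentence, year) pairs before LM (pre-)training."""
    return [add_time_prefix(sentence, year) for sentence, year in documents]

if __name__ == "__main__":
    corpus = [
        ("The cabinet approved the new policy.", 2013),
        ("The cabinet approved the new policy.", 2020),
    ]
    for example in prepare_corpus(corpus):
        print(example)
```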