IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models

Chenguang Wang, Xiao Liu, Dawn Song


Abstract
We introduce a new open information extraction (OIE) benchmark for pre-trained language models (LMs). Recent studies have demonstrated that pre-trained LMs, such as BERT and GPT, may store linguistic and relational knowledge. In particular, LMs are able to answer “fill-in-the-blank” questions when given a pre-defined relation category. Instead of focusing on pre-defined relations, we create an OIE benchmark that aims to fully examine the open relational information present in pre-trained LMs. We accomplish this by turning pre-trained LMs into zero-shot OIE systems. Surprisingly, pre-trained LMs obtain competitive performance on both standard OIE datasets (CaRB and Re-OIE2016) and two new large-scale factual OIE datasets (TAC KBP-OIE and Wikidata-OIE) that we establish via distant supervision. For instance, the zero-shot pre-trained LMs surpass the F1 scores of state-of-the-art supervised OIE methods on our factual OIE datasets without using any training data.
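For reference, the “fill-in-the-blank” probing that the abstract alludes to can be reproduced with any masked language model. The snippet below is a minimal sketch, not the paper's code: it assumes the Hugging Face transformers library, an illustrative BERT checkpoint, and a hypothetical place-of-birth prompt.

# Minimal sketch of LAMA-style "fill-in-the-blank" probing (illustrative only;
# the model name and prompt are assumptions, not the paper's setup).
from transformers import pipeline

# Load a masked-language-model head over a pre-trained BERT checkpoint.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Probe a pre-defined relation (place of birth) with a cloze-style prompt.
for prediction in fill_mask("Dante was born in [MASK]."):
    print(f'{prediction["token_str"]:>12}  {prediction["score"]:.3f}')

The top-ranked predictions for the masked token give a rough indication of how much relational knowledge the pre-trained LM captures without any fine-tuning; the paper's benchmark extends this idea from pre-defined relations to open relation extraction.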
Anthology ID:
2022.emnlp-main.576
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8417–8437
URL:
https://aclanthology.org/2022.emnlp-main.576
DOI:
10.18653/v1/2022.emnlp-main.576
Cite (ACL):
Chenguang Wang, Xiao Liu, and Dawn Song. 2022. IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8417–8437, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models (Wang et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.576.pdf