Improving Open Information Extraction via Iterative Rank-Aware Learning

Zhengbao Jiang, Pengcheng Yin, Graham Neubig


Abstract
Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method. Code and data are available at https://github.com/jzbjyb/oie_rank.
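The calibration idea in the abstract — adding a binary classification loss so that extraction scores become globally comparable across sentences — can be sketched as below. This is a minimal illustration, not the authors' implementation (see the linked repository for that); the function name and the use of a plain sigmoid over per-extraction scores are assumptions for the example.

```python
import math

def binary_calibration_loss(scores, labels):
    """Binary cross-entropy over sigmoid-squashed extraction scores.

    scores: raw confidence scores, one per extracted assertion
            (possibly from different sentences).
    labels: 1 if the extraction is correct, 0 otherwise.

    Pushing correct extractions toward probability 1 and incorrect
    ones toward 0 makes the scores comparable across sentences,
    which is the calibration property the paper targets.
    """
    total = 0.0
    for s, y in zip(scores, labels):
        p = 1.0 / (1.0 + math.exp(-s))           # sigmoid
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(scores)
```

In the iterative learning process the abstract describes, the model's own extractions would be labeled (e.g., by matching against gold assertions) and fed back as positive/negative training samples for this loss, alongside the usual extraction-likelihood objective.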
Anthology ID:
P19-1523
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5295–5300
URL:
https://aclanthology.org/P19-1523
DOI:
10.18653/v1/P19-1523
Cite (ACL):
Zhengbao Jiang, Pengcheng Yin, and Graham Neubig. 2019. Improving Open Information Extraction via Iterative Rank-Aware Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5295–5300, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Improving Open Information Extraction via Iterative Rank-Aware Learning (Jiang et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1523.pdf
Code:
jzbjyb/oie_rank