AnnIE: An Annotation Platform for Constructing Complete Open Information Extraction Benchmark

Niklas Friedrich, Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, Goran Glavaš


Abstract
Open Information Extraction (OIE) is the task of extracting facts from sentences in the form of relations and their corresponding arguments in a schema-free manner. The intrinsic performance of OIE systems is difficult to measure due to the incompleteness of existing OIE benchmarks: ground-truth extractions do not group all acceptable surface realizations of the same fact that can be extracted from a sentence. To measure the performance of OIE systems more realistically, it is necessary to manually annotate complete facts (i.e., clusters of all acceptable surface realizations of the same fact) from input sentences. We propose AnnIE: an interactive annotation platform that facilitates such challenging annotation tasks and supports the creation of complete fact-oriented OIE evaluation benchmarks. AnnIE is modular and flexible in order to support different use-case scenarios (i.e., benchmarks covering different types of facts) and different languages. We use AnnIE to build two complete OIE benchmarks: one with verb-mediated facts and another with facts encompassing named entities. We evaluate several OIE systems on our complete benchmarks created with AnnIE. We publicly release AnnIE (and all gold datasets generated with it) under a non-restrictive license.
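The evaluation idea described in the abstract can be sketched as follows. This is an illustrative example, not AnnIE's actual code: the sentence, the gold cluster, and the helper function are hypothetical, and only show how grouping all acceptable surface realizations of one fact makes matching more forgiving than comparing against a single gold triple.

```python
# Hypothetical sketch of fact-cluster (complete-fact) evaluation for OIE.
# A gold cluster groups ALL acceptable surface realizations of one fact,
# so a system extraction counts as correct if it matches ANY of them.

sentence = "Michael Jordan, who played for the Chicago Bulls, was born in Brooklyn."

# Illustrative gold cluster: each triple is an acceptable realization
# of the same underlying fact.
gold_cluster = {
    ("Michael Jordan", "played for", "the Chicago Bulls"),
    ("Michael Jordan", "played for", "Chicago Bulls"),
    ("Jordan", "played for", "the Chicago Bulls"),
}

def matches_cluster(extraction, cluster):
    """True if the extraction equals any acceptable realization in the cluster."""
    return extraction in cluster

# A benchmark with only one gold triple would reject this valid extraction;
# the complete cluster accepts it.
system_output = ("Michael Jordan", "played for", "Chicago Bulls")
print(matches_cluster(system_output, gold_cluster))
```

In a complete benchmark, precision and recall are then computed over clusters rather than over individual gold triples, so a system is not penalized for choosing one acceptable surface form over another.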
Anthology ID:
2022.acl-demo.5
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
44–60
URL:
https://aclanthology.org/2022.acl-demo.5
DOI:
10.18653/v1/2022.acl-demo.5
Cite (ACL):
Niklas Friedrich, Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, and Goran Glavaš. 2022. AnnIE: An Annotation Platform for Constructing Complete Open Information Extraction Benchmark. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 44–60, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
AnnIE: An Annotation Platform for Constructing Complete Open Information Extraction Benchmark (Friedrich et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-demo.5.pdf
Code:
nfriedri/annie-annotation-platform