CONTOR: Benchmarking Strategies for Completing Ontologies with Plausible Missing Rules

Na Li, Thomas Bailleux, Zied Bouraoui, Steven Schockaert


Abstract
We consider the problem of finding plausible rules that are missing from a given ontology. A number of strategies for this problem have already been considered in the literature. Little is known about the relative performance of these strategies, however, as they have thus far been evaluated on different ontologies. Moreover, existing evaluations have focused on distinguishing held-out ontology rules from randomly corrupted ones, which often makes the task unrealistically easy and leads to the presence of incorrectly labelled negative examples. To address these concerns, we introduce a benchmark with manually annotated hard negatives and use this benchmark to evaluate ontology completion models. In addition to previously proposed models, we test the effectiveness of several approaches that have not yet been considered for this task, including LLMs and simple but effective hybrid strategies.
Anthology ID: 2024.findings-emnlp.488
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8316–8334
URL: https://aclanthology.org/2024.findings-emnlp.488
Cite (ACL): Na Li, Thomas Bailleux, Zied Bouraoui, and Steven Schockaert. 2024. CONTOR: Benchmarking Strategies for Completing Ontologies with Plausible Missing Rules. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8316–8334, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): CONTOR: Benchmarking Strategies for Completing Ontologies with Plausible Missing Rules (Li et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.488.pdf