Large Language Models for Lexical Resource Enhancement: Multiple Hypernymy Resolution in WordNet

Dimitar Hristov


Abstract
Large language models (LLMs) have materially changed natural language processing (NLP). While LLMs have shifted attention away from traditional semantics-based resources, structured linguistic databases such as WordNet remain essential for precise knowledge retrieval, decision making, and LLM development. WordNet organizes concepts through synonym sets (synsets) and semantic links, but it suffers from inconsistencies, including redundant or erroneous relations. This paper investigates an approach that uses LLMs to aid the refinement of structured language resources, specifically WordNet, by automating multiple hypernymy resolution, leveraging the LLMs' semantic knowledge to produce tools that support and evaluate manual resource improvement.
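
To make the phenomenon concrete, the sketch below enumerates WordNet noun synsets that have more than one direct hypernym, using NLTK's WordNet interface. It only illustrates the multiple-hypernymy cases the paper targets; it is not the paper's LLM-based resolution method, and the function name is chosen here for illustration.

# Illustrative sketch (not the paper's method): list WordNet noun synsets
# with more than one direct hypernym, i.e. the multiple-hypernymy cases
# whose resolution the paper aims to automate with LLMs.
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def multiple_hypernym_synsets(pos='n'):
    """Yield (synset, hypernyms) pairs where the synset has >1 direct hypernym."""
    for synset in wn.all_synsets(pos):
        hypernyms = synset.hypernyms()
        if len(hypernyms) > 1:
            yield synset, hypernyms

if __name__ == '__main__':
    # Print a handful of multiple-hypernymy cases for inspection.
    for i, (synset, hypernyms) in enumerate(multiple_hypernym_synsets()):
        print(synset.name(), '->', [h.name() for h in hypernyms])
        if i >= 4:
            break
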
Anthology ID:
2025.ranlp-stud.3
Volume:
Proceedings of the 9th Student Research Workshop associated with the International Conference Recent Advances in Natural Language Processing
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Boris Velichkov, Ivelina Nikolova-Koleva, Milena Slavcheva
Venues:
RANLP | WS
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
20–26
URL:
https://aclanthology.org/2025.ranlp-stud.3/
Cite (ACL):
Dimitar Hristov. 2025. Large Language Models for Lexical Resource Enhancement: Multiple Hypernymy Resolution in WordNet. In Proceedings of the 9th Student Research Workshop associated with the International Conference Recent Advances in Natural Language Processing, pages 20–26, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Large Language Models for Lexical Resource Enhancement: Multiple Hypernymy Resolution in WordNet (Hristov, RANLP 2025)
PDF:
https://aclanthology.org/2025.ranlp-stud.3.pdf