LTRS: Improving Word Sense Disambiguation via Learning to Rank Senses

Hansi Wang, Yue Wang, Qiliang Liang, Yang Liu


Abstract
Word Sense Disambiguation (WSD) is a fundamental task critical for accurate semantic understanding. Conventional training strategies usually consider only the predefined senses of target words and learn each of them from relatively few instances, neglecting the influence of similar senses. To address these problems, we propose Learning to Rank Senses (LTRS), a method that helps a model learn to represent and disambiguate senses from a broadened range of instances by ranking an expanded list of sense definitions. With LTRS, our model achieves a state-of-the-art F1 score of 79.6% on Chinese WSD and remains robust in low-resource settings. It also trains efficiently, converging faster than previous methods. This provides a new technical approach to WSD and may also apply to the task in other languages.
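The abstract describes disambiguation as ranking a list of sense definitions against a context. A minimal sketch of this general idea — not the paper's actual implementation — is a listwise softmax ranking over context/definition similarity scores; all function names and vectors below are illustrative assumptions:

```python
import math

def rank_senses(context_vec, sense_vecs):
    """Score each candidate sense-definition embedding against the
    context embedding and rank senses by softmax-normalized score.
    (Illustrative sketch; not the LTRS paper's API.)"""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    scores = [dot(context_vec, s) for s in sense_vecs]
    # Numerically stable softmax over the similarity scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Indices of senses, best-matching first.
    ranking = sorted(range(len(sense_vecs)), key=lambda i: -probs[i])
    return ranking, probs

def listwise_rank_loss(probs, gold_index):
    """Listwise ranking loss: negative log-probability of the gold
    sense, which training would minimize so the gold definition is
    ranked above competing (similar) senses."""
    return -math.log(probs[gold_index])
```

For example, with a context vector aligned to the first sense definition, the ranking places that sense first, and the loss decreases as its softmax probability grows. Expanding the candidate list with similar senses from other words, as the abstract suggests, would simply lengthen `sense_vecs`.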
Anthology ID:
2025.coling-main.132
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1934–1942
URL:
https://aclanthology.org/2025.coling-main.132/
Cite (ACL):
Hansi Wang, Yue Wang, Qiliang Liang, and Yang Liu. 2025. LTRS: Improving Word Sense Disambiguation via Learning to Rank Senses. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1934–1942, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
LTRS: Improving Word Sense Disambiguation via Learning to Rank Senses (Wang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.132.pdf