Ranking Entities along Conceptual Space Dimensions with LLMs: An Analysis of Fine-Tuning Strategies

Nitesh Kumar, Usashi Chatterjee, Steven Schockaert


Abstract
Conceptual spaces represent entities in terms of their primitive semantic features. Such representations are highly valuable but they are notoriously difficult to learn, especially when it comes to modelling perceptual and subjective features. Distilling conceptual spaces from Large Language Models (LLMs) has recently emerged as a promising strategy, but existing work has been limited to probing pre-trained LLMs using relatively simple zero-shot strategies. We focus in particular on the task of ranking entities according to a given conceptual space dimension. Unfortunately, we cannot directly fine-tune LLMs on this task, because ground truth rankings for conceptual space dimensions are rare. We therefore use more readily available features as training data and analyse whether the ranking capabilities of the resulting models transfer to perceptual and subjective features. We find that this is indeed the case, to some extent, but having at least some perceptual and subjective features in the training data seems essential for achieving the best results.
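To make the setup concrete, below is a minimal sketch (not the authors' code) of the kind of training data the abstract describes: taking a feature for which ground-truth values are readily available and converting it into pairwise comparison examples that an LLM could be fine-tuned on. The prompt template, the "sweet" dimension, the entities, and their scores are all invented for illustration.

from itertools import combinations

# Hypothetical feature with known per-entity values; such "readily
# available" features stand in for scarce perceptual/subjective ones.
training_feature = {
    "dimension": "sweet",
    "values": {            # invented illustrative scores
        "lemon": 0.2,
        "apple": 0.6,
        "honey": 0.95,
    },
}

# Illustrative prompt template for a pairwise comparison example.
PROMPT = (
    "Question: which exhibits the property '{dimension}' to a greater "
    "degree, {a} or {b}?\nAnswer: {answer}"
)

def pairwise_examples(feature):
    """Derive one prompt/completion string per unordered entity pair,
    labelling the entity with the higher ground-truth value as the answer."""
    dim, values = feature["dimension"], feature["values"]
    examples = []
    for a, b in combinations(values, 2):
        answer = a if values[a] > values[b] else b
        examples.append(PROMPT.format(dimension=dim, a=a, b=b, answer=answer))
    return examples

if __name__ == "__main__":
    for ex in pairwise_examples(training_feature):
        print(ex, end="\n\n")

A pointwise variant, where the model is trained to produce a score for a single entity rather than to compare two, is an equally plausible formulation of the same idea; the paper's question is whether ranking ability learned from such features transfers to perceptual and subjective dimensions.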
Anthology ID:
2024.findings-acl.474
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7974–7989
URL:
https://aclanthology.org/2024.findings-acl.474
Cite (ACL):
Nitesh Kumar, Usashi Chatterjee, and Steven Schockaert. 2024. Ranking Entities along Conceptual Space Dimensions with LLMs: An Analysis of Fine-Tuning Strategies. In Findings of the Association for Computational Linguistics ACL 2024, pages 7974–7989, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Ranking Entities along Conceptual Space Dimensions with LLMs: An Analysis of Fine-Tuning Strategies (Kumar et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.474.pdf