Polysemy Interpretation and Transformer Language Models: A Case of Korean Adverbial Postposition -(u)lo

Seongmin Mun, Gyu-Ho Shin


Abstract
This study examines how Transformer language models utilise lexico-phrasal information to interpret the polysemy of the Korean adverbial postposition -(u)lo. We analysed the attention weights of a pre-trained Korean BERT model and its fine-tuned counterpart. Results show a general reduction in attention weights after fine-tuning, alongside changes in which lexico-phrasal information is used, depending on the specific function of -(u)lo. These findings suggest that, while fine-tuning broadly affects a model’s syntactic sensitivity, it may also alter its capacity to leverage lexico-phrasal features according to the function of the target word.
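The paper does not reproduce its analysis code on this page, but attention weights of the kind described above can be read directly from a Transformer encoder. The sketch below is a minimal illustration using Hugging Face transformers; the checkpoint name ("klue/bert-base") and the example sentence are assumptions for demonstration, not the authors' actual model or data.

```python
# Minimal sketch: extracting attention weights from a Korean BERT encoder.
# Assumes a Hugging Face checkpoint such as "klue/bert-base"; the paper's
# exact pre-trained and fine-tuned models are not specified here.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "klue/bert-base"  # hypothetical checkpoint; substitute the model actually used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)
model.eval()

sentence = "학교로 갔다"  # illustrative sentence containing -(u)lo ("-로", directional use)
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attentions = torch.stack(outputs.attentions)      # (layers, batch, heads, seq, seq)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Average over heads in the last layer and report how much attention
# each token receives from all positions in the sentence.
avg_attention = attentions[-1, 0].mean(dim=0)     # (seq, seq)
for i, tok in enumerate(tokens):
    print(tok, avg_attention[:, i].sum().item())
```

Comparing such per-token attention profiles between the pre-trained and fine-tuned checkpoints is one straightforward way to operationalise the kind of before/after comparison the abstract reports.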
Anthology ID: 2025.coling-main.105
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 1555–1561
URL: https://aclanthology.org/2025.coling-main.105/
Cite (ACL): Seongmin Mun and Gyu-Ho Shin. 2025. Polysemy Interpretation and Transformer Language Models: A Case of Korean Adverbial Postposition -(u)lo. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1555–1561, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Polysemy Interpretation and Transformer Language Models: A Case of Korean Adverbial Postposition -(u)lo (Mun & Shin, COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.105.pdf