How Do Transformer-Architecture Models Address Polysemy of Korean Adverbial Postpositions?

Seongmin Mun, Guillaume Desagulier


Abstract
Postpositions, which are characterized by multiple form-function associations and are thus polysemous, pose a challenge to the automatic identification of their usage. Several studies have used contextualized word-embedding models to reveal the functions of Korean postpositions. Despite the superior classification performance reported in previous studies, it remains unclear exactly how these models resolve the polysemy of Korean postpositions. To add interpretability, we devised a classification model employing two transformer-architecture models—BERT and GPT-2—and introduced a computational simulation that interactively demonstrates how these transformer-architecture models simulate human interpretation of word-level polysemy involving the Korean adverbial postpositions -ey, -eyse, and -(u)lo. Results reveal that (i) the BERT model outperforms the GPT-2 model in classifying the intended function of postpositions, (ii) there is an inverse relationship between classification accuracy and the number of functions that each postposition manifests, (iii) model performance is affected by the corpus size of each function, (iv) the models’ performance gradually improves as training epochs proceed, and (v) the models are affected by the scarcity of input and/or semantic closeness between items.
Anthology ID:
2022.deelio-1.2
Volume:
Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
Month:
May
Year:
2022
Address:
Dublin, Ireland and Online
Venue:
DeeLIO
Publisher:
Association for Computational Linguistics
Pages:
11–21
URL:
https://aclanthology.org/2022.deelio-1.2
DOI:
10.18653/v1/2022.deelio-1.2
Cite (ACL):
Seongmin Mun and Guillaume Desagulier. 2022. How Do Transformer-Architecture Models Address Polysemy of Korean Adverbial Postpositions?. In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 11–21, Dublin, Ireland and Online. Association for Computational Linguistics.
Cite (Informal):
How Do Transformer-Architecture Models Address Polysemy of Korean Adverbial Postpositions? (Mun & Desagulier, DeeLIO 2022)
PDF:
https://aclanthology.org/2022.deelio-1.2.pdf
Software:
 2022.deelio-1.2.software.zip
Video:
 https://aclanthology.org/2022.deelio-1.2.mp4