KOMBO: Korean Character Representations Based on the Combination Rules of Subcharacters

SungHo Kim, Juhyeong Park, Yeachan Kim, SangKeun Lee


Abstract
The Korean writing system, Hangeul, has a unique character representation that rigidly follows the invention principles recorded in Hunminjeongeum. However, existing pre-trained language models (PLMs) for Korean have overlooked these principles. In this paper, we introduce a novel framework for Korean PLMs called KOMBO, which is the first to incorporate the invention principles of Hangeul into character representations. Our proposed method, KOMBO, shows strong performance across diverse NLP tasks. In particular, it outperforms the state-of-the-art Korean PLM by an average of 2.11% on five Korean natural language understanding tasks. Furthermore, extensive experiments demonstrate that our proposed method is well suited to capturing the linguistic features of the Korean language. Consequently, we shed light on the superiority of subcharacters over the typical subword-based approach for Korean PLMs. Our code is available at: https://github.com/SungHo3268/KOMBO.
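For readers unfamiliar with the subcharacters the abstract refers to, the sketch below illustrates the standard Unicode combination rule for Hangeul, under which every precomposed syllable is an arithmetic combination of an initial consonant, a medial vowel, and an optional final consonant (jamo). This is background on the combination rules the paper builds on, not the authors' implementation; the function name to_jamo is our own.

import unicodedata

# A minimal sketch of the Hangeul combination rule:
#   syllable = 0xAC00 + (initial * 21 + medial) * 28 + final
S_BASE, L_BASE, V_BASE, T_BASE = 0xAC00, 0x1100, 0x1161, 0x11A7
V_COUNT, T_COUNT = 21, 28  # 21 medial vowels, 27 finals + "no final"

def to_jamo(char: str) -> list[str]:
    """Decompose one precomposed Hangeul syllable into its jamo subcharacters."""
    index = ord(char) - S_BASE
    if not 0 <= index < 19 * V_COUNT * T_COUNT:   # 11,172 modern syllables
        return [char]                             # not a Hangeul syllable; pass through
    initial, rest = divmod(index, V_COUNT * T_COUNT)
    medial, final = divmod(rest, T_COUNT)
    jamo = [chr(L_BASE + initial), chr(V_BASE + medial)]
    if final:                                     # final == 0 means no final consonant
        jamo.append(chr(T_BASE + final))
    return jamo

print(to_jamo("한"))                                            # ['ᄒ', 'ᅡ', 'ᆫ']
print(unicodedata.normalize("NFC", "".join(to_jamo("한"))))     # '한' (recombined)

The round trip through NFC normalization shows that the decomposition is lossless, which is what makes subcharacter-level modeling of Korean text feasible in the first place.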
Anthology ID:
2024.findings-acl.302
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5102–5119
URL:
https://aclanthology.org/2024.findings-acl.302
Cite (ACL):
SungHo Kim, Juhyeong Park, Yeachan Kim, and SangKeun Lee. 2024. KOMBO: Korean Character Representations Based on the Combination Rules of Subcharacters. In Findings of the Association for Computational Linguistics ACL 2024, pages 5102–5119, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
KOMBO: Korean Character Representations Based on the Combination Rules of Subcharacters (Kim et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.302.pdf