Xihan Wu
2023
K-UniMorph: Korean Universal Morphology and its Feature Schema
Eunkyul Jo | Kim Kyuwon | Xihan Wu | KyungTae Lim | Jungyeul Park | Chulwoo Park
Findings of the Association for Computational Linguistics: ACL 2023
In this work, we present a new Universal Morphology dataset for Korean. The Korean language has previously been underrepresented in work on morphological paradigms, which spans hundreds of diverse world languages. We therefore propose Universal Morphology paradigms for Korean that preserve its distinct characteristics. For our K-UniMorph dataset, we outline each grammatical criterion for the verbal endings in detail, clarify how inflected forms are extracted, and demonstrate how we generate the morphological schemata. The dataset adopts the morphological feature schema of CITATION and CITATION for Korean, and we extract inflected verb forms from the Sejong morphologically analyzed corpus, one of the largest annotated corpora for Korean. During data creation, our methodology also includes verifying the correctness of the conversion from the Sejong corpus. Furthermore, we carry out the inflection task using three different Korean word forms: letters, syllables, and morphemes. Finally, we discuss and describe future perspectives on Korean morphological paradigms and the dataset.
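The three Korean word forms named in the abstract correspond to three granularities of the same string. As a rough illustration (a minimal sketch, not the authors' code; the example word and its morpheme segmentation are assumptions), the snippet below decomposes one verb form into syllables, letters (jamo), and morphemes using only the Python standard library:

```python
# Minimal sketch of the three Korean word-form granularities; the example
# word and its morpheme segmentation are illustrative assumptions.
import unicodedata

word = "갔다"  # "went" (hypothetical example, not necessarily from K-UniMorph)

# Syllable level: Hangul text is stored as precomposed syllable blocks.
syllables = list(word)  # ['갔', '다']

# Letter (jamo) level: NFD normalization decomposes each syllable block
# into its constituent jamo letters.
letters = list(unicodedata.normalize("NFD", word))  # ['ᄀ', 'ᅡ', 'ᆻ', 'ᄃ', 'ᅡ']

# Morpheme level: requires a morphological analysis; hand-segmented here as
# stem 가- "go" + past-tense marker -았- + declarative ending -다.
morphemes = ["가", "았", "다"]

print(syllables)
print(letters)
print(morphemes)
```

The letter and morpheme views differ in both length and segmentation, which is why the inflection task can be run separately on each representation.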
2022
Impact of Sequence Length and Copying on Clause-Level Inflection
Badr Jaidi | Utkarsh Saboo | Xihan Wu | Garrett Nicolai | Miikka Silfverberg
Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)
We present the University of British Columbia’s submission to the MRL shared task on multilingual clause-level morphology. Our submission extends word-level inflectional models to the clause level in two ways: first, by evaluating the role that BPE plays in the learning of inflectional morphology, and second, by evaluating the importance of a copy bias obtained through data hallucination. Experiments demonstrate a strong preference for language-tuned BPE and a copy bias over a vanilla transformer. The methods are complementary for the inflection and analysis tasks: combined models see error reductions of 38% for inflection and 15.6% for analysis. However, this synergy does not hold for reinflection, which performs best under a BPE-only setting. A deeper analysis of the errors generated by our models illustrates that the copy bias may be too strong: the combined model produces predictions more similar to those of the copy-influenced system, despite the success of the BPE model.
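To make the BPE component concrete, here is a minimal sketch of a single byte-pair-encoding merge step on a toy corpus (generic BPE, not the shared-task pipeline; the toy words are illustrative assumptions):

```python
# Minimal generic BPE sketch: find the most frequent adjacent symbol pair
# and merge it everywhere. The toy corpus is an illustrative assumption.
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus, weighted by frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

corpus = {tuple("walked"): 3, tuple("talked"): 2, tuple("walks"): 1}
best = most_frequent_pair(corpus)  # e.g. ('a', 'l') in this toy corpus
print(best)
print(merge_pair(corpus, best))
```

In this setting, "language-tuned" BPE presumably refers to selecting the number of merges (and hence the subword vocabulary) per language rather than sharing one setting across languages; the sketch shows only the core merge operation.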