An Ensembled Encoder-Decoder System for Interlinear Glossed Text

Edith Coates


Abstract
This paper presents my submission to Track 1 of the 2023 SIGMORPHON shared task on interlinear glossed text (IGT). There is a wide range of techniques for building and training IGT models (see Moeller and Hulden, 2018; McMillan-Major, 2020; Zhao et al., 2020). I describe my ensembled sequence-to-sequence approach, present experiments, and report my submission’s test-set accuracy. I also discuss future directions for research on low-resource token classification methods for IGT.
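The abstract mentions ensembling multiple sequence-to-sequence models. One common way to combine such systems, shown here purely as an illustrative sketch and not as the paper's actual method, is per-token majority voting over each model's predicted gloss line:

```python
from collections import Counter

def ensemble_glosses(predictions):
    """Majority-vote ensembling over per-token gloss predictions.

    `predictions` is a list of candidate gloss lines (one per model),
    each a list of gloss tokens for the same input sentence. Assumes
    all candidates have equal length; ties break toward the token
    seen first, via Counter.most_common ordering.
    """
    ensembled = []
    for position_tokens in zip(*predictions):
        token, _count = Counter(position_tokens).most_common(1)[0]
        ensembled.append(token)
    return ensembled

# Hypothetical example: three models gloss a two-token sentence.
models_out = [
    ["DOG-NOM", "run-3SG"],
    ["DOG-NOM", "run-3PL"],
    ["DOG-ACC", "run-3SG"],
]
print(ensemble_glosses(models_out))  # → ['DOG-NOM', 'run-3SG']
```

The token names and the voting scheme here are assumptions for illustration; the paper itself (pages 217–221) describes the actual ensembling used.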
Anthology ID:
2023.sigmorphon-1.23
Volume:
Proceedings of the 20th SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Garrett Nicolai, Eleanor Chodroff, Frederic Mailhot, Çağrı Çöltekin
Venue:
SIGMORPHON
SIG:
SIGMORPHON
Publisher:
Association for Computational Linguistics
Pages:
217–221
URL:
https://aclanthology.org/2023.sigmorphon-1.23
DOI:
10.18653/v1/2023.sigmorphon-1.23
Cite (ACL):
Edith Coates. 2023. An Ensembled Encoder-Decoder System for Interlinear Glossed Text. In Proceedings of the 20th SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 217–221, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
An Ensembled Encoder-Decoder System for Interlinear Glossed Text (Coates, SIGMORPHON 2023)
PDF:
https://aclanthology.org/2023.sigmorphon-1.23.pdf