2022
Improving Low-resource RRG Parsing with Cross-lingual Self-training
Kilian Evang | Laura Kallmeyer | Jakub Waszczuk | Kilu von Prince | Tatiana Bladier | Simon Petitjean
Proceedings of the 29th International Conference on Computational Linguistics
This paper considers the task of parsing low-resource languages in a scenario where parallel English data as well as a limited seed of annotated sentences in the target language are available, as for example in bootstrapping parallel treebanks. We focus on constituency parsing using Role and Reference Grammar (RRG), a theory that has so far been understudied in computational linguistics but that is widely used in typological research, in particular in the context of low-resource languages. Starting from an existing RRG parser, we propose two strategies for low-resource parsing. First, we extend the parsing model into a cross-lingual parser, exploiting the parallel data in the high-resource language and unsupervised word alignments by providing internal states of the source-language parser to the target-language parser. Second, we adopt self-training, thereby iteratively expanding the training data, starting from the seed, by including the most confident new parses in each round. Both in simulated scenarios and with a real low-resource language (Daakaka), we find substantial and complementary improvements from both self-training and cross-lingual parsing. Moreover, using gloss embeddings in addition to token embeddings in the target language further improves results. Finally, starting from what we have for Daakaka, we also consider parsing a related language (Dalkalaen) where glosses and English translations are available but no annotated trees at all, i.e., a no-resource scenario with respect to syntactic annotations. We start with a cross-lingual parser trained on Daakaka with glosses and use self-training to adapt it to Dalkalaen. The results are surprisingly good.
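As a rough sketch of the self-training loop described in the abstract: the training set grows from the seed by adding only the most confident new parses in each round. The Parser interface, per-parse confidence attribute, threshold, and round count below are all hypothetical stand-ins for the paper's actual parser and confidence measure.

    # Minimal sketch of confidence-based self-training. The Parser interface,
    # confidence score, threshold, and round count are hypothetical.
    def self_train(parser, seed_trees, unlabeled_sents, rounds=5, threshold=0.9):
        train_set = list(seed_trees)
        remaining = list(unlabeled_sents)
        for _ in range(rounds):
            parser.train(train_set)
            scored = [(parser.parse(s), s) for s in remaining]
            # Keep only the most confident new parses for the next round.
            confident = [(t, s) for t, s in scored if t.confidence >= threshold]
            if not confident:
                break  # no new parse cleared the threshold
            train_set.extend(t for t, _ in confident)
            kept = {id(s) for _, s in confident}
            remaining = [s for s in remaining if id(s) not in kept]
        return parser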
2020
An Empirical Evaluation of Annotation Practices in Corpora from Language Documentation
Kilu von Prince | Sebastian Nordhoff
Proceedings of the Twelfth Language Resources and Evaluation Conference
For most of the world’s languages, no primary data are available, even as many languages are disappearing. Over the last two decades, however, language documentation projects have produced substantial amounts of primary data from a wide variety of endangered languages. The exploration of these resources is still in its early days. One of the factors that makes them hard to use is a relative lack of standardized annotation conventions. In this paper, we describe common practices in existing corpora in order to facilitate their future processing. After a brief introduction of the main formats used for annotation files, we focus on commonly used tiers in the widespread ELAN and Toolbox formats. Minimally, corpora from language documentation contain a transcription tier and an aligned translation tier, which means they constitute parallel corpora. Additional common annotations include named references, morpheme separation, morpheme-by-morpheme glosses, part-of-speech tags and notes.
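To make the tier structure concrete, here is a minimal sketch of reading a Toolbox-style record in Python. The marker names (\ref, \tx, \mb, \ge, \ps, \ft) follow common Toolbox conventions; the record content is placeholder text, not data from any of the corpora surveyed, and real records additionally align the morpheme tiers by whitespace columns, which this sketch ignores.

    # Sketch: read one Toolbox-style record into a dict of tiers.
    # Marker names follow common Toolbox conventions; content is placeholder.
    RECORD = r"""\ref text001.003
    \tx word1 word2-word3
    \mb word1 word2 -word3
    \ge gloss1 gloss2 -SFX
    \ps n v -sfx
    \ft Free translation of the sentence."""

    def parse_record(record):
        # Each line starts with a backslash marker followed by tier content.
        tiers = {}
        for line in record.splitlines():
            marker, _, content = line.strip().partition(" ")
            tiers[marker.lstrip("\\")] = content
        return tiers

    print(parse_record(RECORD)["ft"])  # -> Free translation of the sentence.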
2019
Tagging modality in Oceanic languages of Melanesia
Annika Tjuka | Lena Weißmann | Kilu von Prince
Proceedings of the 13th Linguistic Annotation Workshop
Primary data from small, low-resource languages of Oceania have only recently become available through language documentation. In our study, we explore corpus data of five Oceanic languages of Melanesia which are known to be mood-prominent (in the sense of Bhat, 1999). In order to find out more about tense, aspect, modality, and polarity, we tagged these categories in a subset of our corpora. For the category of modality, we developed a novel tag set (MelaTAMP, 2017), which categorizes clauses into factual, possible, and counterfactual. Based on an analysis of the inter-annotator consistency, we argue that our tag set for the modal domain is efficient for our subject languages and might be useful for other languages and purposes.
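The consistency analysis mentioned in the abstract calls for an agreement statistic; as one common choice (not necessarily the measure used in the paper), here is a Cohen's kappa sketch over the three-way tag set, with invented example annotations.

    # Sketch: Cohen's kappa over the three-way MelaTAMP modality tags.
    # The example annotations are invented; kappa is one standard agreement
    # measure, not necessarily the one used in the paper.
    from collections import Counter

    TAGS = ("factual", "possible", "counterfactual")

    def cohens_kappa(ann1, ann2):
        n = len(ann1)
        observed = sum(a == b for a, b in zip(ann1, ann2)) / n
        c1, c2 = Counter(ann1), Counter(ann2)
        expected = sum(c1[t] * c2[t] for t in TAGS) / (n * n)
        return (observed - expected) / (1 - expected)

    a = ["factual", "possible", "factual", "counterfactual"]
    b = ["factual", "possible", "possible", "counterfactual"]
    print(round(cohens_kappa(a, b), 3))  # -> 0.636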
2018
Using Universal Dependencies in cross-linguistic complexity research
Aleksandrs Berdicevskis | Çağrı Çöltekin | Katharina Ehret | Kilu von Prince | Daniel Ross | Bill Thompson | Chunxiao Yan | Vera Demberg | Gary Lupyan | Taraka Rama | Christian Bentz
Proceedings of the Second Workshop on Universal Dependencies (UDW 2018)
We evaluate corpus-based measures of linguistic complexity obtained using Universal Dependencies (UD) treebanks. We propose a method of estimating the robustness of the complexity values obtained using a given measure and a given treebank. The results indicate that measures of syntactic complexity might be on average less robust than those of morphological complexity. We also estimate the validity of complexity measures by comparing the results for very similar languages and checking for unexpected differences. We show that some of the differences that arise can be diminished by using parallel treebanks and, more importantly from a practical point of view, by harmonizing language-specific solutions in the UD annotation.
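As an illustration of what a corpus-based complexity measure over UD treebanks can look like, here is a sketch computing the Shannon entropy of morphological feature bundles (the FEATS column) from a CoNLL-U file. This is one illustrative measure, not necessarily among those evaluated in the paper, and the file name in the usage comment is only an example.

    # Sketch: a simple corpus-based morphological complexity measure from a
    # UD treebank in CoNLL-U format: the Shannon entropy of FEATS bundles.
    # Illustrative only; not necessarily a measure evaluated in the paper.
    import math
    from collections import Counter

    def feats_entropy(conllu_path):
        counts = Counter()
        with open(conllu_path, encoding="utf-8") as f:
            for line in f:
                if line.startswith("#") or not line.strip():
                    continue
                cols = line.rstrip("\n").split("\t")
                if "-" in cols[0] or "." in cols[0]:
                    continue  # skip multiword-token and empty-node lines
                counts[cols[5]] += 1  # FEATS column
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Example (file name is illustrative):
    # print(feats_entropy("en_ewt-ud-train.conllu"))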