C. M. Downey

Also published as: C.m. Downey


2024

Targeted Multilingual Adaptation for Low-resource Language Families
C. M. Downey | Terra Blevins | Dhwani Serai | Dwija Parikh | Shane Steinert-Threlkeld
Findings of the Association for Computational Linguistics: EMNLP 2024

Massively multilingual models are known to have limited utility in any one language, and to perform particularly poorly on low-resource languages. By contrast, targeted multilinguality has been shown to benefit low-resource languages. To test this approach more rigorously, we systematically study best practices for adapting a pre-trained model to a language family. Focusing on the Uralic family as a test case, we adapt XLM-R under various configurations to model 15 languages; we then evaluate the performance of each experimental setting on two downstream tasks and 11 evaluation languages. Our adapted models significantly outperform mono- and multilingual baselines. A regression analysis reveals that adapted vocabulary size is relatively unimportant for low-resource languages, and that low-resource languages can be aggressively up-sampled during training at little detriment to performance in high-resource languages. These results introduce new best practices for performing language adaptation in a targeted setting.
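The up-sampling the abstract refers to is typically realized by smoothing the per-language data distribution when sampling training batches. The following is a minimal sketch of exponentially smoothed sampling in that spirit; the alpha value, language codes, and corpus sizes are illustrative assumptions, not figures taken from the paper.

# Sketch of exponentially smoothed language sampling, a common way to up-sample
# low-resource languages when adapting a multilingual model. All numbers below
# are hypothetical.
from typing import Dict

def sampling_probs(corpus_sizes: Dict[str, int], alpha: float = 0.3) -> Dict[str, float]:
    """Return per-language sampling probabilities proportional to (n_l / N) ** alpha.

    Smaller alpha up-samples low-resource languages more aggressively;
    alpha = 1.0 recovers sampling proportional to raw corpus size.
    """
    total = sum(corpus_sizes.values())
    weights = {lang: (n / total) ** alpha for lang, n in corpus_sizes.items()}
    norm = sum(weights.values())
    return {lang: w / norm for lang, w in weights.items()}

if __name__ == "__main__":
    # Hypothetical token counts for a Finnish / Estonian / Northern Sami mix.
    sizes = {"fin": 100_000_000, "est": 20_000_000, "sme": 500_000}
    for lang, p in sampling_probs(sizes, alpha=0.3).items():
        print(f"{lang}: {p:.3f}")

With a low alpha, a language like Northern Sami is drawn far more often than its raw share of the corpus would suggest, which is the behavior the abstract reports as largely harmless to high-resource performance.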

2023

Learning to translate by learning to communicate
C.m. Downey | Xuhui Zhou | Zeyu Liu | Shane Steinert-Threlkeld
Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)

Embedding Structure Matters: Comparing Methods to Adapt Multilingual Vocabularies to New Languages
C.m. Downey | Terra Blevins | Nora Goldfine | Shane Steinert-Threlkeld
Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)

2022

A Masked Segmental Language Model for Unsupervised Natural Language Segmentation
C.m. Downey | Fei Xia | Gina-Anne Levow | Shane Steinert-Threlkeld
Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology

We introduce a Masked Segmental Language Model (MSLM) for joint language modeling and unsupervised segmentation. While near-perfect supervised methods have been developed for segmenting human-like linguistic units in resource-rich languages such as Chinese, many of the world’s languages are both morphologically complex, and have no large dataset of “gold” segmentations for supervised training. Segmental Language Models offer a unique approach by conducting unsupervised segmentation as the byproduct of a neural language modeling objective. However, current SLMs are limited in their scalability due to their recurrent architecture. We propose a new type of SLM for use in both unsupervised and lightly supervised segmentation tasks. The MSLM is built on a span-masking transformer architecture, harnessing a masked bidirectional modeling context and attention, as well as adding the potential for model scalability. In a series of experiments, our model outperforms the segmentation quality of recurrent SLMs on Chinese, and performs similarly to the recurrent model on English.
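The span-masking objective the abstract describes can be pictured with a small corruption routine: contiguous spans of the input are replaced by a mask token, and the model predicts them from bidirectional context. Below is a generic span-masking sketch for illustration only; the mask id, masking rate, and maximum span length are assumptions, not the MSLM's actual configuration.

# Sketch of random contiguous span masking over a token-id sequence.
import random
from typing import List, Tuple

MASK_ID = 0  # hypothetical mask token id

def mask_spans(tokens: List[int], mask_rate: float = 0.15,
               max_span: int = 5, seed: int = 0) -> Tuple[List[int], List[Tuple[int, int]]]:
    """Replace random non-overlapping spans with MASK_ID until roughly
    mask_rate of the tokens are masked. Returns the corrupted sequence and
    the (start, end) indices of the masked spans, which a segmental model
    could treat as candidate segments to score."""
    rng = random.Random(seed)
    corrupted = list(tokens)
    spans: List[Tuple[int, int]] = []
    budget = int(mask_rate * len(tokens))
    masked = 0
    attempts = 0
    while masked < budget and attempts < 10 * len(tokens):
        attempts += 1
        length = rng.randint(1, max_span)
        start = rng.randrange(0, max(1, len(tokens) - length + 1))
        end = min(start + length, len(tokens))
        if any(corrupted[i] == MASK_ID for i in range(start, end)):
            continue  # skip proposals that overlap an already-masked span
        for i in range(start, end):
            corrupted[i] = MASK_ID
        spans.append((start, end))
        masked += end - start
    return corrupted, spans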