Language Model Pretraining and Transfer Learning for Very Low Resource Languages

Jyotsana Khatri, Rudra Murthy, Pushpak Bhattacharyya


Abstract
This paper describes our submission to the shared task on Unsupervised MT and Very Low Resource Supervised MT at WMT 2021. We submitted systems for two language pairs: German ↔ Upper Sorbian (de ↔ hsb) and German ↔ Lower Sorbian (de ↔ dsb). For de ↔ hsb, we pretrain our system with the MASS (Masked Sequence to Sequence) objective and then finetune it using iterative back-translation. A final finetuning step uses the provided parallel data with a translation objective. For de ↔ dsb, no parallel data is provided in the task; we therefore initialize the de ↔ dsb model from the final de ↔ hsb model and train it further with iterative back-translation, reusing the vocabulary of the de ↔ hsb model.
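The MASS pretraining mentioned in the abstract masks a contiguous span of a monolingual sentence and trains the decoder to reconstruct that span. The sketch below illustrates only the span-masking step of that objective in plain Python; the function name, mask ratio, and whitespace tokenization are illustrative assumptions, as the actual submission operates on subword units within a full sequence-to-sequence training setup.

```python
import random

MASK = "<mask>"

def mass_mask(tokens, mask_ratio=0.5, seed=None):
    """MASS-style span masking (illustrative sketch).

    Returns (encoder_input, decoder_target):
      - encoder_input: the sentence with a contiguous span replaced by <mask>
      - decoder_target: the masked span the decoder must reconstruct
    """
    rng = random.Random(seed)
    n = len(tokens)
    span_len = max(1, int(n * mask_ratio))
    start = rng.randint(0, n - span_len)

    encoder_input = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    decoder_target = tokens[start:start + span_len]
    return encoder_input, decoder_target

if __name__ == "__main__":
    # Toy German sentence, whitespace-tokenized for illustration only.
    sent = "wir trainieren das Modell mit einsprachigen Daten vor".split()
    enc_in, dec_tgt = mass_mask(sent, mask_ratio=0.5, seed=0)
    print("encoder input :", " ".join(enc_in))
    print("decoder target:", " ".join(dec_tgt))
```

After this pretraining stage, iterative back-translation alternates between translating monolingual data with the current model and retraining on the resulting synthetic parallel pairs, before the final supervised finetuning on the provided de ↔ hsb parallel data.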
Anthology ID:
2021.wmt-1.106
Volume:
Proceedings of the Sixth Conference on Machine Translation
Month:
November
Year:
2021
Address:
Online
Editors:
Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, André Martins, Makoto Morishita, Christof Monz
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
995–998
URL:
https://aclanthology.org/2021.wmt-1.106
Cite (ACL):
Jyotsana Khatri, Rudra Murthy, and Pushpak Bhattacharyya. 2021. Language Model Pretraining and Transfer Learning for Very Low Resource Languages. In Proceedings of the Sixth Conference on Machine Translation, pages 995–998, Online. Association for Computational Linguistics.
Cite (Informal):
Language Model Pretraining and Transfer Learning for Very Low Resource Languages (Khatri et al., WMT 2021)
PDF:
https://aclanthology.org/2021.wmt-1.106.pdf