Limitations and Challenges of Unsupervised Cross-lingual Pre-training

Martín Quesada Zaragoza, Francisco Casacuberta


Abstract
Cross-lingual alignment methods for monolingual language representations have received notable attention in recent years. However, their use in machine translation pre-training remains scarce. This work examines several factors that influence cross-lingual pre-training, both for the cross-lingual mappings themselves and for their integration into supervised neural models. The results show that unsupervised cross-lingual methods are effective at inducing alignment even between distant languages, and that they benefit noticeably from subword information. However, we find that their effectiveness as pre-training models for machine translation is severely limited because their cross-lingual signal is easily distorted by the principal network during training. Moreover, the learned bilingual projection is too restrictive for that network to learn properly when the embedding weights are frozen.
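
For readers unfamiliar with the setup the abstract refers to, the sketch below illustrates the standard pipeline under study: monolingual embeddings are aligned with an orthogonal (Procrustes) mapping and then used to initialise an embedding layer that is kept frozen during NMT training. It is not the authors' code; the vocabulary size, dimension, and seed dictionary are toy assumptions for illustration only.

import numpy as np
import torch
import torch.nn as nn

# Illustrative sizes; real experiments use far larger vocabularies.
vocab_size, dim = 1000, 300
src_emb = np.random.randn(vocab_size, dim).astype(np.float32)  # stand-in monolingual source embeddings
tgt_emb = np.random.randn(vocab_size, dim).astype(np.float32)  # stand-in monolingual target embeddings

# Orthogonal Procrustes alignment: W = argmin ||XW - Y||_F with W orthogonal,
# computed from a (here, toy) seed dictionary of word pairs.
dict_idx = np.arange(200)                      # hypothetical seed dictionary indices
X, Y = src_emb[dict_idx], tgt_emb[dict_idx]
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt
aligned_src = src_emb @ W                      # source vectors mapped into the target space

# Initialise the encoder embeddings of an NMT model with the aligned vectors
# and freeze them, the configuration whose limitations the paper discusses.
encoder_embedding = nn.Embedding(vocab_size, dim)
encoder_embedding.weight.data.copy_(torch.from_numpy(aligned_src))
encoder_embedding.weight.requires_grad = False  # frozen: the projection cannot adapt during training
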
Anthology ID:
2022.amta-research.13
Volume:
Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)
Month:
September
Year:
2022
Address:
Orlando, USA
Editors:
Kevin Duh, Francisco Guzmán
Venue:
AMTA
Publisher:
Association for Machine Translation in the Americas
Pages:
175–187
URL:
https://aclanthology.org/2022.amta-research.13
Cite (ACL):
Martín Quesada Zaragoza and Francisco Casacuberta. 2022. Limitations and Challenges of Unsupervised Cross-lingual Pre-training. In Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 175–187, Orlando, USA. Association for Machine Translation in the Americas.
Cite (Informal):
Limitations and Challenges of Unsupervised Cross-lingual Pre-training (Quesada Zaragoza & Casacuberta, AMTA 2022)
PDF:
https://aclanthology.org/2022.amta-research.13.pdf