Preserving Cross-Linguality of Pre-trained Models via Continual Learning

Zihan Liu, Genta Indra Winata, Andrea Madotto, Pascale Fung


Abstract
Recently, fine-tuning pre-trained language models (e.g., multilingual BERT) on downstream cross-lingual tasks has shown promising results. However, the fine-tuning process inevitably changes the parameters of the pre-trained model and weakens its cross-lingual ability, which leads to sub-optimal performance. To alleviate this problem, we leverage continual learning to preserve the original cross-lingual ability of the pre-trained model when we fine-tune it on downstream tasks. Experimental results show that our fine-tuning methods better preserve the cross-lingual ability of the pre-trained model on a sentence retrieval task. Our methods also outperform other fine-tuning baselines on zero-shot cross-lingual part-of-speech tagging and named entity recognition.
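The abstract does not spell out the fine-tuning mechanics. One common way to realize the "preserve the pre-trained parameters" idea is a continual-learning-style regularizer that penalizes drift from the original weights; the minimal PyTorch sketch below illustrates that general approach, not necessarily the authors' exact method. The toy linear model and the lambda value are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for a fine-tuned model (e.g., a task head over a multilingual
# encoder); a toy linear layer keeps the sketch self-contained.
model = nn.Linear(768, 2)

# Snapshot the "pre-trained" parameters once, before fine-tuning begins.
pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}

def drift_penalty(model, reference, lam=0.01):
    """L2 distance between current and reference (pre-trained) parameters.

    Keeping this term small during fine-tuning discourages the model from
    moving far from the weights that encode its cross-lingual ability.
    The weight lam=0.01 is an arbitrary assumption.
    """
    penalty = sum(((p - reference[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return lam * penalty

# During training, the total loss is the task loss plus the drift penalty.
x, y = torch.randn(4, 768), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y) + drift_penalty(model, pretrained)
loss.backward()
```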
Anthology ID: 2021.repl4nlp-1.8
Volume: Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
Month: August
Year: 2021
Address: Online
Editors: Anna Rogers, Iacer Calixto, Ivan Vulić, Naomi Saphra, Nora Kassner, Oana-Maria Camburu, Trapit Bansal, Vered Shwartz
Venue: RepL4NLP
Publisher: Association for Computational Linguistics
Pages: 64–71
URL: https://aclanthology.org/2021.repl4nlp-1.8
DOI: 10.18653/v1/2021.repl4nlp-1.8
Cite (ACL): Zihan Liu, Genta Indra Winata, Andrea Madotto, and Pascale Fung. 2021. Preserving Cross-Linguality of Pre-trained Models via Continual Learning. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 64–71, Online. Association for Computational Linguistics.
Cite (Informal): Preserving Cross-Linguality of Pre-trained Models via Continual Learning (Liu et al., RepL4NLP 2021)
PDF: https://aclanthology.org/2021.repl4nlp-1.8.pdf