%0 Conference Proceedings
%T Bilingual Alignment Pre-Training for Zero-Shot Cross-Lingual Transfer
%A Yang, Ziqing
%A Ma, Wentao
%A Cui, Yiming
%A Ye, Jiani
%A Che, Wanxiang
%A Wang, Shijin
%Y Fisch, Adam
%Y Talmor, Alon
%Y Chen, Danqi
%Y Choi, Eunsol
%Y Seo, Minjoon
%Y Lewis, Patrick
%Y Jia, Robin
%Y Min, Sewon
%S Proceedings of the 3rd Workshop on Machine Reading for Question Answering
%D 2021
%8 November
%I Association for Computational Linguistics
%C Punta Cana, Dominican Republic
%F yang-etal-2021-bilingual
%X Multilingual pre-trained models have achieved remarkable performance on cross-lingual transfer learning. Some multilingual models, such as mBERT, have been pre-trained on unlabeled corpora; therefore, the embeddings of different languages in these models may not be aligned very well. In this paper, we aim to improve zero-shot cross-lingual transfer performance by proposing a pre-training task named Word-Exchange Aligning Model (WEAM), which uses statistical alignment information as prior knowledge to guide cross-lingual word prediction. We evaluate our model on the multilingual machine reading comprehension task MLQA and the natural language inference task XNLI. The results show that WEAM can significantly improve zero-shot performance.
%R 10.18653/v1/2021.mrqa-1.10
%U https://aclanthology.org/2021.mrqa-1.10
%U https://doi.org/10.18653/v1/2021.mrqa-1.10
%P 100-105