%0 Conference Proceedings
%T Larger-Scale Transformers for Multilingual Masked Language Modeling
%A Goyal, Naman
%A Du, Jingfei
%A Ott, Myle
%A Anantharaman, Giri
%A Conneau, Alexis
%Y Rogers, Anna
%Y Calixto, Iacer
%Y Vulić, Ivan
%Y Saphra, Naomi
%Y Kassner, Nora
%Y Camburu, Oana-Maria
%Y Bansal, Trapit
%Y Shwartz, Vered
%S Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F goyal-etal-2021-larger
%X Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models, dubbed XLM-R XL and XLM-R XXL, outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests that larger-capacity models for language understanding may obtain strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.
%R 10.18653/v1/2021.repl4nlp-1.4
%U https://aclanthology.org/2021.repl4nlp-1.4
%U https://doi.org/10.18653/v1/2021.repl4nlp-1.4
%P 29-33