%0 Conference Proceedings
%T bert2BERT: Towards Reusable Pretrained Language Models
%A Chen, Cheng
%A Yin, Yichun
%A Shang, Lifeng
%A Jiang, Xin
%A Qin, Yujia
%A Wang, Fengyu
%A Wang, Zhi
%A Chen, Xiao
%A Liu, Zhiyuan
%A Liu, Qun
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F chen-etal-2022-bert2bert
%X In recent years, researchers tend to pre-train ever-larger language models to explore the upper limit of deep models. However, large language model pre-training costs intensive computational resources, and most of the models are trained from scratch without reusing the existing pre-trained models, which is wasteful. In this paper, we propose bert2BERT, which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model through parameter initialization and significantly improve the pre-training efficiency of the large model. Specifically, we extend the previous function-preserving method proposed in computer vision on the Transformer-based language model, and further improve it by proposing a novel method, advanced knowledge for large model's initialization. In addition, a two-stage learning method is proposed to further accelerate the pre-training. We conduct extensive experiments on representative PLMs (e.g., BERT and GPT) and demonstrate that (1) our method can save a significant amount of training cost compared with baselines including learning from scratch, StackBERT and MSLT; (2) our method is generic and applicable to different types of pre-trained models. In particular, bert2BERT saves about 45% and 47% computational cost of pre-training BERT_BASE and GPT_BASE by reusing the models of almost their half sizes.
%R 10.18653/v1/2022.acl-long.151
%U https://aclanthology.org/2022.acl-long.151
%U https://doi.org/10.18653/v1/2022.acl-long.151
%P 2134-2148
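The abstract refers to a "function-preserving method proposed in computer vision", i.e. Net2Net-style expansion, which bert2BERT's initialization builds on. The sketch below is not the paper's algorithm; it is a minimal, hypothetical NumPy illustration of Net2Net-style function-preserving widening of a single hidden layer (function name `net2wider` and all variable names are assumptions introduced here for illustration): new hidden units copy the incoming weights of existing units, and the outgoing weights are divided by the replication count so the widened network computes exactly the same function as the original.

```python
import numpy as np

def net2wider(W1, b1, W2, new_width, rng=None):
    """Net2Net-style function-preserving widening of one hidden layer.

    W1: (n, d) input->hidden weights, b1: (n,) hidden biases
    W2: (m, n) hidden->output weights
    Returns (W1', b1', W2') with new_width >= n hidden units such that
    W2' @ f(W1' x + b1') == W2 @ f(W1 x + b1) for any elementwise f.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, _ = W1.shape
    assert new_width >= n
    # g maps each new unit to an existing unit: identity for the first n,
    # a randomly chosen existing unit for the extra ones.
    g = np.concatenate([np.arange(n), rng.integers(0, n, new_width - n)])
    # Replicate incoming weights and biases of the mapped units.
    W1_new, b1_new = W1[g], b1[g]
    # Divide outgoing weights by each unit's replication count so the
    # summed contribution of every original unit is unchanged.
    counts = np.bincount(g, minlength=n)[g]
    W2_new = W2[:, g] / counts
    return W1_new, b1_new, W2_new

# Quick check that the function is preserved after widening.
rng = np.random.default_rng(0)
d, n, m = 4, 3, 2
W1, b1, W2 = rng.normal(size=(n, d)), rng.normal(size=n), rng.normal(size=(m, n))
x = rng.normal(size=d)
W1w, b1w, W2w = net2wider(W1, b1, W2, new_width=5, rng=rng)
relu = lambda z: np.maximum(z, 0)
assert np.allclose(W2 @ relu(W1 @ x + b1), W2w @ relu(W1w @ x + b1w))
```

The paper extends this idea to Transformer weight matrices (embeddings, attention, and feed-forward layers) and adds its own "advanced knowledge" initialization and two-stage training on top; see the paper at the URLs above for the actual method.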