BERT or GPT: why not both?

Lucas Georges Gabriel Charpentier, David Samuel


Abstract
We present a simple way to merge masked language modeling with causal language modeling. This hybrid training objective results in a model that combines the strengths of both modeling paradigms within a single transformer stack – GPT-BERT can be transparently used like any standard causal or masked language model. We test the pretraining process that enables this flexible behavior on the BabyLM Challenge 2024. The results show that the hybrid pretraining outperforms masked-only or causal-only models. We openly release the models, training corpora and code.
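The abstract gives no implementation details, but the core idea (one shared transformer optimized for both masked and causal language modeling) can be illustrated with a short sketch. The PyTorch code below is a hypothetical illustration, not the authors' released implementation: the HybridLM and hybrid_step names, the model dimensions, and the mlm_weight mixing coefficient are assumptions introduced here, and the paper's actual way of unifying the two objectives may differ. The sketch simply runs the same weights with a causal attention mask for the GPT-style objective and with bidirectional attention plus masked targets for the BERT-style objective, then mixes the two losses.

    # Hypothetical sketch of a hybrid MLM + CLM objective on one transformer.
    # Not the authors' code; names and hyperparameters are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HybridLM(nn.Module):
        def __init__(self, vocab_size=16384, d_model=256, n_heads=4,
                     n_layers=4, max_len=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.pos = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                               batch_first=True, norm_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.lm_head = nn.Linear(d_model, vocab_size)

        def forward(self, tokens, causal: bool):
            # causal=True  -> GPT-style left-to-right attention
            # causal=False -> BERT-style bidirectional attention
            seq_len = tokens.size(1)
            positions = torch.arange(seq_len, device=tokens.device)
            h = self.embed(tokens) + self.pos(positions)
            attn_mask = None
            if causal:
                # Additive float mask: -inf above the diagonal blocks future tokens.
                attn_mask = torch.triu(
                    torch.full((seq_len, seq_len), float("-inf"),
                               device=tokens.device),
                    diagonal=1)
            h = self.encoder(h, mask=attn_mask)
            return self.lm_head(h)

    def hybrid_step(model, clm_tokens, mlm_tokens, mlm_labels, mlm_weight=0.5):
        """One training step mixing both objectives on the same weights.

        clm_tokens: (B, T) token ids for the causal batch
        mlm_tokens: (B, T) token ids with some positions replaced by [MASK]
        mlm_labels: (B, T) original ids at masked positions, -100 elsewhere
        """
        # Causal objective: predict token t+1 from tokens up to t.
        clm_logits = model(clm_tokens, causal=True)
        clm_loss = F.cross_entropy(clm_logits[:, :-1].flatten(0, 1),
                                   clm_tokens[:, 1:].flatten())

        # Masked objective: recover the original token at masked positions.
        mlm_logits = model(mlm_tokens, causal=False)
        mlm_loss = F.cross_entropy(mlm_logits.flatten(0, 1),
                                   mlm_labels.flatten(), ignore_index=-100)

        return (1 - mlm_weight) * clm_loss + mlm_weight * mlm_loss

In this sketch, mlm_weight=0.5 weights the two objectives equally; that value is a placeholder rather than a figure taken from the paper, whose reported mixing of masked and causal training data should be consulted directly.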
Anthology ID:
2024.conll-babylm.24
Volume:
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
Month:
November
Year:
2024
Address:
Miami, FL, USA
Editors:
Michael Y. Hu, Aaron Mueller, Candace Ross, Adina Williams, Tal Linzen, Chengxu Zhuang, Leshem Choshen, Ryan Cotterell, Alex Warstadt, Ethan Gotlieb Wilcox
Venues:
CoNLL | BabyLM | WS
Publisher:
Association for Computational Linguistics
Pages:
262–283
URL:
https://aclanthology.org/2024.conll-babylm.24/
Cite (ACL):
Lucas Georges Gabriel Charpentier and David Samuel. 2024. BERT or GPT: why not both? In The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning, pages 262–283, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal):
BERT or GPT: why not both? (Charpentier & Samuel, CoNLL-BabyLM 2024)
PDF:
https://aclanthology.org/2024.conll-babylm.24.pdf