Unsupervised Improvement of Factual Knowledge in Language Models

Nafis Sadeq, Byungkyu Kang, Prarit Lamba, Julian McAuley


Abstract
Masked language modeling (MLM) plays a key role in pretraining large language models. But the MLM objective is often dominated by high-frequency words that are sub-optimal for learning factual knowledge. In this work, we propose an approach for influencing MLM pretraining in a way that can improve language model performance on a variety of knowledge-intensive tasks. We force the language model to prioritize informative words in a fully unsupervised way. Experiments demonstrate that the proposed approach can significantly improve the performance of pretrained language models on tasks such as factual recall, question answering, sentiment analysis, and natural language inference in a closed-book setting.
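The abstract does not spell out the weighting scheme, so the following is only an illustrative sketch of the general idea of biasing MLM toward informative words: masking tokens with probability inversely proportional to their corpus frequency. The function name, parameters, and the inverse-frequency heuristic are assumptions for illustration, not the paper's actual method.

```python
import random
from collections import Counter

def informative_masking(tokens, token_freq, mask_token="[MASK]",
                        mask_rate=0.15, alpha=1.0):
    """Mask tokens with probability weighted toward rare (informative) words.

    tokens:     list of token strings for one training sequence
    token_freq: Counter of corpus-level token frequencies
    Returns (masked_tokens, labels); labels is None at unmasked positions.
    Note: this is a hypothetical sketch, not the method from the paper.
    """
    # Inverse-frequency weights: rarer tokens get a larger masking weight.
    weights = [1.0 / (token_freq[t] ** alpha + 1e-8) for t in tokens]
    total = sum(weights)
    # Scale so the expected number of masks is roughly mask_rate * len(tokens).
    probs = [min(1.0, mask_rate * len(tokens) * w / total) for w in weights]

    masked, labels = [], []
    for tok, p in zip(tokens, probs):
        if random.random() < p:
            masked.append(mask_token)
            labels.append(tok)   # MLM target: predict the original token
        else:
            masked.append(tok)
            labels.append(None)  # position ignored in the MLM loss
    return masked, labels

# Example usage with a toy corpus: content words such as "paris" or "rome"
# are rarer than "the" or "is", so they are masked more often.
corpus = "the capital of france is paris the capital of italy is rome".split()
freq = Counter(corpus)
masked, labels = informative_masking(corpus, freq, mask_rate=0.3)
```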
Anthology ID:
2023.eacl-main.215
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2960–2969
URL:
https://aclanthology.org/2023.eacl-main.215
DOI:
10.18653/v1/2023.eacl-main.215
Cite (ACL):
Nafis Sadeq, Byungkyu Kang, Prarit Lamba, and Julian McAuley. 2023. Unsupervised Improvement of Factual Knowledge in Language Models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2960–2969, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Improvement of Factual Knowledge in Language Models (Sadeq et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.215.pdf
Video:
https://aclanthology.org/2023.eacl-main.215.mp4