InforMask: Unsupervised Informative Masking for Language Model Pretraining

Nafis Sadeq, Canwen Xu, Julian McAuley


Abstract
Masked language modeling is widely used for pretraining large language models for natural language understanding (NLU). However, random masking is suboptimal because it allocates an equal masking rate to all tokens. In this paper, we propose InforMask, a new unsupervised masking strategy for training masked language models. InforMask exploits Pointwise Mutual Information (PMI) to select the most informative tokens to mask. We further propose two optimizations for InforMask to improve its efficiency. With a one-off preprocessing step, InforMask outperforms random masking and previously proposed masking strategies on the factual recall benchmark LAMA and the question answering benchmarks SQuAD v1 and v2.
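The core idea, using PMI to decide which tokens are most informative to mask, can be illustrated with a minimal sketch. The following is a hypothetical simplification based only on the abstract, not the authors' released implementation: it scores each token by its summed PMI with the other tokens in the same sentence and masks the highest-scoring ones. The function names (`pmi_scores`, `informative_mask`) and the 15% masking rate are assumptions for illustration.

```python
import math
from collections import Counter
from itertools import combinations


def pmi_scores(sentences):
    """Build a PMI function over token pairs from a corpus of tokenized sentences.

    Hypothetical sketch: counts unigrams and within-sentence co-occurrences,
    then defines PMI(a, b) = log( p(a, b) / (p(a) * p(b)) ).
    """
    unigram = Counter()
    pair = Counter()
    total_tokens = 0
    total_pairs = 0
    for tokens in sentences:
        unigram.update(tokens)
        total_tokens += len(tokens)
        for a, b in combinations(tokens, 2):
            pair[tuple(sorted((a, b)))] += 1
            total_pairs += 1

    def pmi(a, b):
        key = tuple(sorted((a, b)))
        if pair[key] == 0:
            return 0.0
        p_ab = pair[key] / total_pairs
        p_a = unigram[a] / total_tokens
        p_b = unigram[b] / total_tokens
        return math.log(p_ab / (p_a * p_b))

    return pmi


def informative_mask(tokens, pmi, mask_rate=0.15):
    """Score each token by its summed PMI with the other tokens in the
    sentence and mask the highest-scoring (most informative) ones."""
    scores = {
        i: sum(pmi(t, u) for j, u in enumerate(tokens) if j != i)
        for i, t in enumerate(tokens)
    }
    n_mask = max(1, int(len(tokens) * mask_rate))
    to_mask = set(sorted(scores, key=scores.get, reverse=True)[:n_mask])
    return ["[MASK]" if i in to_mask else tok for i, tok in enumerate(tokens)]


# Usage example (toy corpus): informative tokens such as named entities tend
# to receive higher PMI scores and are therefore preferred for masking.
corpus = [["the", "eiffel", "tower", "is", "in", "paris"],
          ["the", "louvre", "is", "in", "paris"]]
pmi = pmi_scores(corpus)
print(informative_mask(["the", "eiffel", "tower", "is", "in", "paris"], pmi))
```

In this reading, the one-off preprocessing step mentioned in the abstract would correspond to computing the PMI statistics once over the pretraining corpus, so masking decisions during training reduce to cheap lookups.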
Anthology ID:
2022.emnlp-main.395
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5866–5878
URL:
https://aclanthology.org/2022.emnlp-main.395
DOI:
10.18653/v1/2022.emnlp-main.395
Cite (ACL):
Nafis Sadeq, Canwen Xu, and Julian McAuley. 2022. InforMask: Unsupervised Informative Masking for Language Model Pretraining. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5866–5878, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
InforMask: Unsupervised Informative Masking for Language Model Pretraining (Sadeq et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.395.pdf