Data Efficient Masked Language Modeling for Vision and Language

Yonatan Bitton, Michael Elhadad, Gabriel Stanovsky, Roy Schwartz


Abstract
Masked language modeling (MLM) is one of the key sub-tasks in vision-language pretraining. In the cross-modal setting, tokens in the sentence are masked at random, and the model predicts the masked tokens given the image and the text. In this paper, we observe several key disadvantages of MLM in this setting. First, as captions tend to be short, in a third of the sentences no token is sampled. Second, the majority of masked tokens are stop-words and punctuation, leading to under-utilization of the image. We investigate a range of alternative masking strategies specific to the cross-modal setting that address these shortcomings, aiming for better fusion of text and image in the learned representation. When pre-training the LXMERT model, our alternative masking strategies consistently improve over the original masking strategy on three downstream tasks, especially in low resource settings. Further, our pre-training approach substantially outperforms the baseline model on a prompt-based probing task designed to elicit image objects. These results and our analysis indicate that our method allows for better utilization of the training data.
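The abstract does not spell out the alternative masking strategies, but the problem it describes (short captions left with no masked token, and masks falling mostly on stop-words and punctuation) suggests the general shape of a fix. The following is a minimal, hypothetical Python sketch of one such strategy: prefer content words when sampling mask positions and guarantee at least one mask per caption. It is an illustration of the idea only, not the paper's actual implementation; the stop-word list and probability are placeholder assumptions.

import random
import string

# Placeholder stop-word list; a real setup would use a proper list (e.g. from NLTK).
STOP_WORDS = {"a", "an", "the", "is", "are", "of", "on", "in", "at", "and", "with"}

def mask_caption(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Return (masked_tokens, labels); labels keep the original token at masked positions."""
    # Prefer content words: skip stop-words and punctuation when picking candidates.
    content_idx = [i for i, t in enumerate(tokens)
                   if t.lower() not in STOP_WORDS and t not in string.punctuation]
    candidates = content_idx or list(range(len(tokens)))  # fall back if only stop-words
    # Sample positions; force at least one so short captions are never left unmasked.
    chosen = [i for i in candidates if random.random() < mask_prob]
    if not chosen:
        chosen = [random.choice(candidates)]
    masked = list(tokens)
    labels = [None] * len(tokens)
    for i in chosen:
        labels[i] = masked[i]
        masked[i] = mask_token
    return masked, labels

# Example: a short caption always yields at least one masked content word.
print(mask_caption(["a", "dog", "is", "chasing", "a", "red", "ball", "."]))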
Anthology ID:
2021.findings-emnlp.259
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3013–3028
URL:
https://aclanthology.org/2021.findings-emnlp.259
DOI:
10.18653/v1/2021.findings-emnlp.259
Cite (ACL):
Yonatan Bitton, Michael Elhadad, Gabriel Stanovsky, and Roy Schwartz. 2021. Data Efficient Masked Language Modeling for Vision and Language. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3013–3028, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Data Efficient Masked Language Modeling for Vision and Language (Bitton et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.259.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.259.mp4
Code:
yonatanbitton/data_efficient_masked_language_modeling_for_vision_and_language
Data:
GQA, Visual Genome