Does Vision-and-Language Pretraining Improve Lexical Grounding?

Tian Yun, Chen Sun, Ellie Pavlick


Abstract
Linguistic representations derived from text alone have been criticized for their lack of grounding, i.e., connecting words to their meanings in the physical world. Vision-and-Language (VL) models, trained jointly on text and image or video data, have been offered as a response to such criticisms. However, while VL pretraining has shown success on multimodal tasks such as visual question answering, it is not yet known how the internal linguistic representations themselves compare to their text-only counterparts. This paper compares the semantic representations learned via VL vs. text-only pretraining for two recent VL models using a suite of analyses (clustering, probing, and performance on a commonsense question answering task) in a language-only setting. We find that the multimodal models fail to significantly outperform the text-only variants, suggesting that future work is required if multimodal pretraining is to be pursued as a means of improving NLP in general.
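As a rough illustration of the probing analysis the abstract describes, the sketch below fits a linear classifier on frozen sentence embeddings and reports held-out accuracy; swapping the encoder lets one compare a text-only model against a VL one on the same task. The model name, mean pooling, and logistic-regression probe are placeholder choices for illustration, not the paper's exact configuration; see the authors' code at tttyuntian/vlm_lexical_grounding for the actual setup.

# Minimal probing sketch (hypothetical configuration, not the paper's).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def embed(sentences, model_name="bert-base-uncased"):
    """Mean-pooled final-layer embeddings from a frozen pretrained encoder."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()
    with torch.no_grad():
        enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")
        out = model(**enc).last_hidden_state          # (batch, seq, hidden)
        mask = enc["attention_mask"].unsqueeze(-1)    # zero out padding tokens
        return ((out * mask).sum(1) / mask.sum(1)).numpy()

def probe_accuracy(sentences, labels, model_name="bert-base-uncased"):
    """Train a linear probe on frozen embeddings; return held-out accuracy."""
    X = embed(sentences, model_name)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

Running probe_accuracy with a text-only checkpoint and again with a VL checkpoint (exported to the same interface) gives the kind of paired comparison the paper reports; the finding is that the VL variant does not reliably win.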
Anthology ID:
2021.findings-emnlp.370
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4357–4366
URL:
https://aclanthology.org/2021.findings-emnlp.370
DOI:
10.18653/v1/2021.findings-emnlp.370
Cite (ACL):
Tian Yun, Chen Sun, and Ellie Pavlick. 2021. Does Vision-and-Language Pretraining Improve Lexical Grounding? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4357–4366, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Does Vision-and-Language Pretraining Improve Lexical Grounding? (Yun et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.370.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.370.mp4
Code:
tttyuntian/vlm_lexical_grounding
Data:
PIQA, WikiHow