How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?

Lovisa Hagström, Richard Johansson


Abstract
Current language models have been criticised for learning language from text alone, without any connection between words and their meaning. Consequently, multimodal training has been proposed as a way to create models with better language understanding by providing the missing connection. We focus on pre-trained multimodal vision-and-language (VL) models, for which some results on their language understanding capabilities already exist. An unresolved issue with evaluating the linguistic skills of these models, however, is that there is no established method for adapting them to text-only input without out-of-distribution uncertainty. To find the best approach, we investigate and compare seven possible methods for adapting three different pre-trained VL models to text-only input. Our evaluations on both GLUE and Visual Property Norms (VPN) show that care should be taken when adapting VL models to zero-shot text-only tasks, while the models are less sensitive to how we adapt them to non-zero-shot tasks. We also find that the adaptation methods perform differently for different models, and that unimodal model counterparts perform on par with the VL models regardless of adaptation, indicating that current VL models do not necessarily gain better language understanding from their multimodal training.
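
As an illustration of the kind of adaptation the paper studies, the sketch below runs a text-only input through a pre-trained VL model by pairing the text with zeroed visual features. This is a minimal sketch under stated assumptions: the Hugging Face VisualBERT checkpoint, the single zeroed image region, and the example sentence are illustrative choices, not necessarily one of the seven methods or models evaluated in the paper.

```python
# Minimal sketch: feed a text-only input to a pre-trained VL model by
# supplying an all-zero visual feature vector in place of image features.
# The checkpoint and the zeroed-features strategy are illustrative assumptions.
import torch
from transformers import BertTokenizer, VisualBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")

text_inputs = tokenizer("A lemon is yellow and sour.", return_tensors="pt")

# One dummy image region whose features are all zeros.
visual_dim = model.config.visual_embedding_dim
visual_embeds = torch.zeros(1, 1, visual_dim)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

outputs = model(
    **text_inputs,
    visual_embeds=visual_embeds,
    visual_attention_mask=visual_attention_mask,
)
print(outputs.last_hidden_state.shape)  # (batch, text tokens + 1 visual token, hidden)
```

Other plausible adaptations (e.g. a blank image, averaged visual features, or fine-tuning without any visual input) would replace the zeroed feature vector above; the accompanying code repository covers the full set of methods compared in the paper.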
Anthology ID:
2022.coling-1.494
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5582–5596
URL:
https://aclanthology.org/2022.coling-1.494
Cite (ACL):
Lovisa Hagström and Richard Johansson. 2022. How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5582–5596, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input? (Hagström & Johansson, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.494.pdf
Code:
lovhag/adapt-pre-trained-vl-models-to-text
Data:
GLUE, MS COCO, QNLI