Improving the Efficiency of Visually Augmented Language Models

Paula Ontalvilla, Aitor Ormazabal, Gorka Azkune


Abstract
Despite the impressive performance of autoregressive Language Models (LMs), it has been shown that, due to reporting bias, LMs lack visual knowledge, i.e., they do not know much about the visual world and its properties. To augment LMs with visual knowledge, existing solutions often rely on explicit images, requiring time-consuming retrieval or image generation systems. This paper shows that explicit images are not necessary to visually augment an LM. Instead, we use visually-grounded text representations obtained from the well-known CLIP multimodal system. For a fair comparison, we modify VALM, a visually-augmented LM which uses image retrieval and representations, to work directly with visually-grounded text representations. We name this new model BLIND-VALM. We show that BLIND-VALM performs on par with VALM on Visual Language Understanding (VLU), Natural Language Understanding (NLU), and Language Modeling tasks, despite being significantly more efficient and simpler. We also show that, by scaling up our model within the compute budget of VALM, either by increasing the model size or the pre-training corpus size, we outperform VALM on all evaluation tasks.
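The core idea described in the abstract is to replace VALM's retrieved image representations with visually-grounded text representations taken directly from CLIP's text encoder. As a minimal sketch of how such representations can be obtained, here is an example using the Hugging Face transformers CLIP API; the checkpoint name is an illustrative assumption, and the downstream integration of these embeddings into the LM's layers is described in the paper itself, not here:

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

# Illustrative checkpoint choice; the paper's exact CLIP variant may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

texts = ["the sky at dusk", "a ripe banana"]
inputs = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    # Visually-grounded text embeddings from CLIP's text encoder;
    # no image retrieval or image generation step is involved.
    text_embeds = model.get_text_features(**inputs)  # shape: (2, 512)

print(text_embeds.shape)
```

Because these embeddings live in CLIP's joint image-text space, they carry visual grounding without any retrieval or generation pipeline, which is the source of the efficiency gain the abstract claims over image-based augmentation.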
Anthology ID:
2025.coling-main.343
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
5115–5122
URL:
https://aclanthology.org/2025.coling-main.343/
Cite (ACL):
Paula Ontalvilla, Aitor Ormazabal, and Gorka Azkune. 2025. Improving the Efficiency of Visually Augmented Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 5115–5122, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Improving the Efficiency of Visually Augmented Language Models (Ontalvilla et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.343.pdf