Visual Grounding Helps Learn Word Meanings in Low-Data Regimes

Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas


Abstract
Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension, and their internal representations are remarkably well-aligned with representations of language in the human brain. But to achieve these results, LMs must be trained in distinctly un-human-like ways: they require orders of magnitude more language data than children receive during development, and they learn without perceptual or social context. Do models trained more naturalistically, with grounded supervision, exhibit more humanlike language learning? We investigate this question in the context of word learning, a key sub-task in language acquisition. We train a diverse set of LM architectures, with and without auxiliary visual supervision, on datasets of varying scales. We then evaluate these models' learning of syntactic categories, lexical relations, semantic features, word similarity, and alignment with human neural representations. We find that visual supervision can indeed improve the efficiency of word learning. However, these improvements are limited: they are present almost exclusively in the low-data regime, and are sometimes canceled out by the inclusion of rich distributional signals from text. The information conveyed by text and images is not redundant: models mainly driven by visual information yield word representations that are qualitatively different from those mainly driven by word co-occurrences. However, our results suggest that current multimodal modeling approaches fail to effectively leverage visual information to build human-like word representations from human-scale data.
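To make one of the evaluations listed in the abstract concrete, the sketch below illustrates a word-similarity evaluation of the kind described: cosine similarities between a model's word embeddings are compared against human similarity ratings via Spearman rank correlation. This is a minimal illustration, not the authors' code; the embedding dictionary, benchmark pairs, and function names are hypothetical assumptions.

```python
# Hypothetical sketch of a word-similarity evaluation (not the paper's implementation).
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarity_score(embeddings, human_pairs):
    """Spearman correlation between model and human similarity judgments.

    embeddings: dict mapping word -> vector (assumed, e.g. from an LM's input layer)
    human_pairs: list of (word1, word2, human_rating) tuples from a similarity benchmark
    """
    model_scores, human_scores = [], []
    for w1, w2, rating in human_pairs:
        if w1 in embeddings and w2 in embeddings:  # skip out-of-vocabulary pairs
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(rating)
    return spearmanr(model_scores, human_scores).correlation

# Toy usage with made-up vectors and ratings.
emb = {"cat": np.array([1.0, 0.2]), "dog": np.array([0.9, 0.3]), "car": np.array([0.1, 1.0])}
pairs = [("cat", "dog", 9.0), ("cat", "car", 2.0), ("dog", "car", 2.5)]
print(word_similarity_score(emb, pairs))
```

In the paper's setting, such a score would be computed for models trained with and without visual supervision at each data scale, so that the effect of grounding on word-learning efficiency can be compared.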
Anthology ID:
2024.naacl-long.71
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1311–1329
URL:
https://aclanthology.org/2024.naacl-long.71
Cite (ACL):
Chengxu Zhuang, Evelina Fedorenko, and Jacob Andreas. 2024. Visual Grounding Helps Learn Word Meanings in Low-Data Regimes. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1311–1329, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Visual Grounding Helps Learn Word Meanings in Low-Data Regimes (Zhuang et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.71.pdf
Copyright:
2024.naacl-long.71.copyright.pdf