Does Vision Accelerate Hierarchical Generalization in Neural Language Learners?

Tatsuki Kuribayashi, Timothy Baldwin


Abstract
Neural language models (LMs) are arguably less data-efficient than humans from a language acquisition perspective. One fundamental question is why this human–LM gap arises. This study explores the advantage of grounded language acquisition, specifically the impact of visual information (which humans can usually rely on, but which LMs largely lack access to during language acquisition) on syntactic generalization in LMs. Our experiments, following the poverty-of-stimulus paradigm under two scenarios (artificial vs. naturalistic images), demonstrate that access to vision data does help with the syntactic generalization of LMs when the alignment between the linguistic and visual components is clear in the input, but not otherwise. This highlights the need for additional biases or signals, such as mutual gaze, to enhance cross-modal alignment and enable efficient syntactic generalization in multimodal LMs.
Anthology ID:
2025.coling-main.127
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1865–1879
URL:
https://aclanthology.org/2025.coling-main.127/
Cite (ACL):
Tatsuki Kuribayashi and Timothy Baldwin. 2025. Does Vision Accelerate Hierarchical Generalization in Neural Language Learners?. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1865–1879, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Does Vision Accelerate Hierarchical Generalization in Neural Language Learners? (Kuribayashi & Baldwin, COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.127.pdf