With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models

Tyler Loakman, Yucheng Li, Chenghua Lin

Abstract
Recently, Large Language Models (LLMs) and Vision Language Models (VLMs) have demonstrated aptitude as potential substitutes for human participants in experiments testing psycholinguistic phenomena. However, an understudied question is to what extent models with access only to the vision and text modalities are able to implicitly understand sound-based phenomena via abstract reasoning from orthography and imagery alone. To investigate this, we analyse the ability of VLMs and LLMs to demonstrate sound symbolism (i.e., to recognise a non-arbitrary link between sounds and concepts) as well as their ability to “hear” via the interplay of the language and vision modules of open- and closed-source multimodal models. We perform multiple experiments, including replicating the classic Kiki-Bouba and Mil-Mal shape and magnitude symbolism tasks and comparing human judgements of linguistic iconicity with those of LLMs. Our results show that VLMs demonstrate varying levels of agreement with human labels, and that VLMs may require more task information than their human counterparts for in silico experimentation. We additionally see, through higher maximum agreement levels, that Magnitude Symbolism is an easier pattern for VLMs to identify than Shape Symbolism, and that an understanding of linguistic iconicity is highly dependent on model size.
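The paper's own stimuli and prompts are not reproduced on this page; purely as a rough illustration of the kind of forced-choice trial the abstract describes, the sketch below poses a Kiki-Bouba question to a vision-capable chat model via the OpenAI Python SDK. The model name, image file, and prompt wording here are all assumptions for illustration, not the authors' materials.

```python
# Minimal sketch of a Kiki-Bouba forced-choice trial posed to a VLM.
# Illustrative only: the model, prompt wording, and stimulus image are
# assumptions, not the materials used in the paper.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode a local image so it can be sent inline to the chat API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# Hypothetical stimulus: one spiky and one rounded shape, side by side.
image_b64 = encode_image("kiki_bouba_stimulus.png")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "One of these shapes is called 'kiki' and the other "
                        "is called 'bouba'. Which name belongs to the shape "
                        "on the left? Answer with one word."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Aggregating such single-trial answers over many pseudoword/shape pairings, and comparing the resulting choices against human judgements, is the general shape of the agreement analyses the abstract reports.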
Anthology ID: 2024.emnlp-main.167
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 2849–2867
URL: https://aclanthology.org/2024.emnlp-main.167
Cite (ACL): Tyler Loakman, Yucheng Li, and Chenghua Lin. 2024. With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2849–2867, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models (Loakman et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.167.pdf