Do LLMs Agree with Humans on Emotional Associations to Nonsense Words?

Yui Miyakawa, Chihaya Matsuhira, Hirotaka Kato, Takatsugu Hirayama, Takahiro Komamizu, Ichiro Ide


Abstract
Understanding how humans perceive nonsense words is helpful for devising product and character names that match their intended characteristics. Previous studies have suggested the usefulness of Large Language Models (LLMs) for estimating such human perception, but they did not focus on its emotional aspects. Hence, this study aims to elucidate how the emotions that nonsense words evoke in humans relate to those estimated by LLMs. Using GPT-4, a representative LLM, we reproduce the procedure of an existing study that analyzed the emotions nonsense words evoke in humans. We find a positive correlation of 0.40 between the emotion intensity scores produced by GPT-4 and those manually annotated by humans. Although this correlation is not very high, it indicates that GPT-4 may agree with humans on emotional associations to nonsense words. Considering that the previous study reported an average correlation of about 0.68 among human annotators, and a correlation of only 0.17 between humans and a regression model trained on annotations for real words, GPT-4's agreement with humans is notably strong.
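As a rough illustration of the agreement measure described above, the Python sketch below computes a Pearson correlation between human-annotated and LLM-estimated emotion intensity scores. The nonsense words and score values are invented placeholders for this example, not data from the paper.

# Illustrative sketch only: the nonsense words and intensity scores below
# are invented placeholders, not data from the paper.
from scipy.stats import pearsonr

# Hypothetical intensity scores for one emotion (e.g., "joy"), as annotated
# by humans and as estimated by an LLM such as GPT-4.
human_scores = {"maluma": 0.71, "takete": 0.35, "bouba": 0.64, "kiki": 0.28}
llm_scores = {"maluma": 0.66, "takete": 0.48, "bouba": 0.58, "kiki": 0.40}

words = sorted(human_scores)
r, p = pearsonr([human_scores[w] for w in words],
                [llm_scores[w] for w in words])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # LLM-human agreement

Over their full set of nonsense words and human annotations, the authors report a correlation of 0.40 computed in this spirit.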
Anthology ID:
2024.cmcl-1.7
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Yohei Oseki
Venues:
CMCL | WS
Publisher:
Association for Computational Linguistics
Pages:
81–85
URL:
https://aclanthology.org/2024.cmcl-1.7
Cite (ACL):
Yui Miyakawa, Chihaya Matsuhira, Hirotaka Kato, Takatsugu Hirayama, Takahiro Komamizu, and Ichiro Ide. 2024. Do LLMs Agree with Humans on Emotional Associations to Nonsense Words? In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 81–85, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Do LLMs Agree with Humans on Emotional Associations to Nonsense Words? (Miyakawa et al., CMCL-WS 2024)
PDF:
https://aclanthology.org/2024.cmcl-1.7.pdf