How Well Do Large Language Models Perform on Faux Pas Tests?

Natalie Shapira, Guy Zwirn, Yoav Goldberg


Abstract
Motivated by the question of the extent to which large language models “understand” social intelligence, we investigate the ability of such models to generate correct responses to questions involving descriptions of faux pas situations. The faux pas test, used in clinical psychology, is known to be more challenging for children than individual tests of theory-of-mind or social intelligence. Our results demonstrate that, while the models seem to sometimes offer correct responses, they in fact struggle with this task, and that many of the seemingly correct responses can be attributed to over-interpretation by the human reader (“the ELIZA effect”). An additional phenomenon observed is the failure of most models to generate a correct response to presupposition questions. Finally, in an experiment in which the models are tasked with generating original faux pas stories, we find that while some models are capable of generating novel faux pas stories, the stories are all explicit, as the models are limited in their abilities to describe situations in an implicit manner.
Anthology ID:
2023.findings-acl.663
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10438–10451
URL:
https://aclanthology.org/2023.findings-acl.663
DOI:
10.18653/v1/2023.findings-acl.663
Cite (ACL):
Natalie Shapira, Guy Zwirn, and Yoav Goldberg. 2023. How Well Do Large Language Models Perform on Faux Pas Tests?. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10438–10451, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
How Well Do Large Language Models Perform on Faux Pas Tests? (Shapira et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.663.pdf