Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems

David Gros, Yu Li, Zhou Yu


Abstract
Dialog systems are often designed or trained to output human-like responses. However, some responses may be impossible for a machine to truthfully say (e.g., “that movie made me cry”). Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human. We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources. Ratings are collected for two hypothetical machine embodiments: a futuristic humanoid robot and a digital assistant. We find that for some data sources commonly used to train dialog systems, 20-30% of utterances are not viewed as possible for a machine. Ratings are only marginally affected by machine embodiment. We explore qualitative and quantitative reasons for these ratings. Finally, we build classifiers, explore how model configuration might affect output permissibility, and discuss implications for building less falsely anthropomorphic dialog systems.
Anthology ID:
2022.emnlp-main.215
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3266–3284
URL:
https://aclanthology.org/2022.emnlp-main.215
DOI:
10.18653/v1/2022.emnlp-main.215
Cite (ACL):
David Gros, Yu Li, and Zhou Yu. 2022. Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3266–3284, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems (Gros et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.215.pdf