Evaluating How Users Game and Display Conversation with Human-Like Agents

Won Ik Cho, Soomin Kim, Eujeong Choi, Younghoon Jeong


Abstract
Recently, with the advent of high-performance generative language models, artificial agents that communicate directly with users have become more human-like. This development allows users to perform a diverse range of trials with the agents, and the responses are sometimes displayed online by users who share or show off their experiences. In this study, we explore dialogues with a social chatbot uploaded to an online community, with the aim of understanding how users game human-like agents and display their conversations. We argue that user postings can be investigated from two aspects, namely conversation topic and purpose of testing, and suggest a categorization scheme for the analysis. We analyze 639 dialogues to develop an annotation protocol for the evaluation and measure agreement to demonstrate its validity. We find that the dialogue content does not necessarily reflect the purpose of testing, and that users come up with creative strategies to game the agent without being penalized.
Anthology ID:
2022.codi-1.3
Volume:
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea and Online
Editors:
Chloé Braud, Christian Hardmeier, Junyi Jessy Li, Sharid Loáiciga, Michael Strube, Amir Zeldes
Venue:
CODI
Publisher:
International Conference on Computational Linguistics
Pages:
19–27
URL:
https://aclanthology.org/2022.codi-1.3
Cite (ACL):
Won Ik Cho, Soomin Kim, Eujeong Choi, and Younghoon Jeong. 2022. Evaluating How Users Game and Display Conversation with Human-Like Agents. In Proceedings of the 3rd Workshop on Computational Approaches to Discourse, pages 19–27, Gyeongju, Republic of Korea and Online. International Conference on Computational Linguistics.
Cite (Informal):
Evaluating How Users Game and Display Conversation with Human-Like Agents (Cho et al., CODI 2022)
PDF:
https://aclanthology.org/2022.codi-1.3.pdf