Approximating Online Human Evaluation of Social Chatbots with Prompting

Ekaterina Svikhnushina, Pearl Pu

Abstract
With conversational models becoming increasingly available to the general public, developing scalable and robust evaluation metrics is crucial to minimizing potential social and psychological risks for users. Existing evaluation metrics aim to automate offline user evaluation and approximate human judgment of pre-curated dialogs. However, they are limited in their ability to capture the subjective perceptions of users who actually interact with the chatbots, and they may not generalize to real-world settings. To address this limitation, we propose an approach to approximate online human evaluation, leveraging large language models (LLMs) from the GPT family. We introduce a new Dialog system Evaluation framework based on Prompting (DEP), which enables a fully automatic evaluation pipeline that replicates live user studies and achieves an impressive correlation with human judgment (up to Pearson r = 0.95 at the system level). The DEP approach involves collecting synthetic chat logs of evaluated bots with an LLM in the other-play setting, where the LLM is carefully conditioned to follow a specific scenario. We further explore different prompting approaches to produce evaluation scores with the same LLM. The best-performing prompts, which contain few-shot demonstrations and instructions, show outstanding performance on the tested dataset and generalize to other dialog corpora.
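The abstract describes a two-stage pipeline: (1) an LLM role-plays the user ("other-play") against the evaluated chatbot to collect a synthetic chat log, and (2) the same LLM scores that log from a prompt containing few-shot demonstrations and instructions. The sketch below is a minimal illustration of that idea only; every name, prompt, scenario, and rating scale in it is a hypothetical assumption, not the authors' actual implementation (see the linked PDF for the real prompts and setup).

```python
# Hypothetical sketch of a DEP-style pipeline. `call_llm`, `bot_reply_fn`,
# the scenario text, and the 1-5 "engaging" scale are all illustrative
# assumptions, not the paper's actual prompts or metrics.

def call_llm(prompt: str) -> str:
    """Placeholder for a completion call to a GPT-family model."""
    raise NotImplementedError  # plug in any LLM client here

# Assumed scenario prompt that conditions the LLM to play the user role.
USER_SCENARIO = (
    "Role-play a user who just failed an exam and feels anxious. "
    "Reply with one short conversational turn.\n\n"
)

def collect_synthetic_log(bot_reply_fn, n_turns: int = 5) -> str:
    """Other-play: the LLM produces user turns; the evaluated bot replies.

    `bot_reply_fn` (hypothetical) maps the transcript so far to the
    evaluated chatbot's next reply.
    """
    transcript = ["Bot: Hi! How are you doing today?"]
    for _ in range(n_turns):
        user_turn = call_llm(USER_SCENARIO + "\n".join(transcript) + "\nUser:")
        transcript.append("User: " + user_turn.strip())
        transcript.append("Bot: " + bot_reply_fn(transcript).strip())
    return "\n".join(transcript)

# Assumed few-shot scoring prompt: one demonstration followed by the log
# to be rated, asking the same LLM to emit a numeric score.
SCORING_PROMPT = (
    "Rate how engaging the chatbot is, from 1 (poor) to 5 (excellent).\n\n"
    "User: I got promoted today!\n"
    "Bot: Congratulations, that is wonderful! How will you celebrate?\n"
    "Score: 5\n\n"
    "{log}\n"
    "Score:"
)

def score_log(log: str) -> float:
    """Prompt the LLM to rate the collected chat log."""
    return float(call_llm(SCORING_PROMPT.format(log=log)).strip())
```

Scores produced this way for each chatbot could then be averaged and correlated with human ratings at the system level, which is how the paper reports its Pearson r.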
Anthology ID:
2023.sigdial-1.25
Volume:
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
Svetlana Stoyanchev, Shafiq Joty, David Schlangen, Ondrej Dusek, Casey Kennington, Malihe Alikhani
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
268–281
URL:
https://aclanthology.org/2023.sigdial-1.25
DOI:
10.18653/v1/2023.sigdial-1.25
Cite (ACL):
Ekaterina Svikhnushina and Pearl Pu. 2023. Approximating Online Human Evaluation of Social Chatbots with Prompting. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 268–281, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
Approximating Online Human Evaluation of Social Chatbots with Prompting (Svikhnushina & Pu, SIGDIAL 2023)
PDF:
https://aclanthology.org/2023.sigdial-1.25.pdf