Language Model Transformers as Evaluators for Open-domain Dialogues

Rostislav Nedelchev, Jens Lehmann, Ricardo Usbeck


Abstract
Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is resource-intensive manual labor rather than automated means. In this work, we investigate whether language models (LM) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with the scores of human evaluators. We also provide some insights into their behavior and inner workings in a conversational context.
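As a rough illustration of the idea described in the abstract, the sketch below scores a dialogue response by the likelihood a pretrained transformer LM assigns to it given the preceding context, using the Hugging Face transformers library. The choice of GPT-2 and the average negative log-likelihood score are assumptions for illustration only, not the evaluation procedure used in the paper.

# Minimal sketch (assumption, not the paper's exact method): score a dialogue
# response by how likely a pretrained GPT-2 finds it given the conversational
# context; a lower loss (higher likelihood) suggests a more natural reply.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def response_score(context: str, response: str) -> float:
    """Average negative log-likelihood of the response tokens given the context."""
    ctx_ids = tokenizer(context, return_tensors="pt")["input_ids"]
    full_ids = tokenizer(context + " " + response, return_tensors="pt")["input_ids"]
    labels = full_ids.clone()
    # Mask the context positions so the loss is computed on the response only.
    labels[:, : ctx_ids.shape[1]] = -100
    with torch.no_grad():
        out = model(full_ids, labels=labels)
    return out.loss.item()  # lower = more plausible under the language model

print(response_score("How was your weekend?", "It was great, I went hiking."))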
Anthology ID:
2020.coling-main.599
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
SIG:
Publisher:
International Committee on Computational Linguistics
Note:
Pages:
6797–6808
Language:
URL:
https://aclanthology.org/2020.coling-main.599
DOI:
10.18653/v1/2020.coling-main.599
Bibkey:
Cite (ACL):
Rostislav Nedelchev, Jens Lehmann, and Ricardo Usbeck. 2020. Language Model Transformers as Evaluators for Open-domain Dialogues. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6797–6808, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Language Model Transformers as Evaluators for Open-domain Dialogues (Nedelchev et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.599.pdf
Code
 smartdataanalytics/transformers_dialogue_evaluators
Data
ConvAI2