Automating Human Evaluation of Dialogue Systems

Sujan Reddy A


Abstract
Automated metrics for evaluating dialogue systems, such as BLEU and METEOR, correlate only weakly with human judgments, so human evaluation is often used to supplement them. However, human evaluation is both time-consuming and expensive. This paper presents an alternative to human evaluation of dialogue systems with respect to three aspects: naturalness, informativeness, and quality. I propose fine-tuning a BERT model with three prediction heads to predict whether the system-generated output is natural, informative, and of high quality. The proposed model achieves an average accuracy of around 77% over these three labels. I also design a baseline approach that uses three separate BERT models to make the predictions. Experimental analysis shows that a single shared model computing the three labels outperforms three separate models.
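The architecture summarized above, a single BERT encoder shared by three prediction heads, could be sketched roughly as follows. This is a minimal illustration assuming binary labels, a standard pretrained checkpoint, and one linear head per aspect; the class and head names, label encoding, and model variant are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class SharedBertEvaluator(nn.Module):
    # Shared BERT encoder with one binary classification head per aspect
    # (naturalness, informativeness, quality). Names are illustrative.
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({
            aspect: nn.Linear(hidden, 2)
            for aspect in ("naturalness", "informativeness", "quality")
        })

    def forward(self, input_ids, attention_mask):
        # The pooled [CLS] representation is shared by all three heads.
        pooled = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        return {aspect: head(pooled) for aspect, head in self.heads.items()}

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = SharedBertEvaluator()
batch = tokenizer(["Sure, the restaurant is open until 10 pm."],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
# Training would sum a cross-entropy loss over the three heads against
# human-annotated labels; the baseline in the paper instead fine-tunes
# three separate BERT models, one per label.

Sharing the encoder lets the three aspects act as mutually regularizing auxiliary tasks, which is consistent with the paper's finding that the shared model outperforms three separate ones.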
Anthology ID:
2022.naacl-srw.29
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
Month:
July
Year:
2022
Address:
Hybrid: Seattle, Washington + Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
229–234
URL:
https://aclanthology.org/2022.naacl-srw.29
DOI:
10.18653/v1/2022.naacl-srw.29
Cite (ACL):
Sujan Reddy A. 2022. Automating Human Evaluation of Dialogue Systems. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 229–234, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.
Cite (Informal):
Automating Human Evaluation of Dialogue Systems (A, NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-srw.29.pdf
Video:
https://aclanthology.org/2022.naacl-srw.29.mp4