Evaluating Dialogue Generation Systems via Response Selection

Shiki Sato, Reina Akama, Hiroki Ouchi, Jun Suzuki, Kentaro Inui


Abstract
Existing automatic evaluation metrics for open-domain dialogue response generation systems correlate poorly with human evaluation. We focus on evaluating response generation systems via response selection. To evaluate systems properly via response selection, we propose a method for constructing response selection test sets with well-chosen false candidates. Specifically, we construct test sets by filtering out two types of false candidates: (i) those unrelated to the ground-truth response and (ii) those acceptable as appropriate responses. Through experiments, we demonstrate that evaluating systems via response selection with test sets built by our method correlates more strongly with human evaluation than widely used automatic evaluation metrics such as BLEU.
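To make the two filtering criteria from the abstract concrete, here is a minimal Python sketch that removes both kinds of false candidates from a retrieved candidate pool. The word-overlap similarity measure, threshold values, and function names are illustrative assumptions only; the paper's actual test-set construction (see the linked code) may use different criteria.

```python
# Illustrative sketch only: the similarity heuristic and thresholds below are
# assumptions, not the authors' exact pipeline.
from typing import List


def jaccard_similarity(a: str, b: str) -> float:
    """Simple word-overlap proxy for relatedness (a stand-in measure)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def select_false_candidates(
    ground_truth: str,
    retrieved: List[str],
    related_threshold: float = 0.2,     # assumed: minimum relatedness to keep
    acceptable_threshold: float = 0.8,  # assumed: above this, candidate may be acceptable
) -> List[str]:
    """Keep candidates that are related to the ground-truth response but not
    so close that they could themselves serve as appropriate responses."""
    kept = []
    for cand in retrieved:
        sim = jaccard_similarity(ground_truth, cand)
        if sim < related_threshold:
            continue  # (i) unrelated to the ground-truth response
        if sim > acceptable_threshold:
            continue  # (ii) likely acceptable as an appropriate response
        kept.append(cand)
    return kept
```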
Anthology ID:
2020.acl-main.55
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
593–599
URL:
https://aclanthology.org/2020.acl-main.55
DOI:
10.18653/v1/2020.acl-main.55
Cite (ACL):
Shiki Sato, Reina Akama, Hiroki Ouchi, Jun Suzuki, and Kentaro Inui. 2020. Evaluating Dialogue Generation Systems via Response Selection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 593–599, Online. Association for Computational Linguistics.
Cite (Informal):
Evaluating Dialogue Generation Systems via Response Selection (Sato et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.55.pdf
Video:
http://slideslive.com/38928930
Code
cl-tohoku/eval-via-selection
Data
DailyDialog, Douban, Douban Conversation Corpus