Probing the Robustness of Trained Metrics for Conversational Dialogue Systems

Jan Deriu, Don Tuggener, Pius Von Däniken, Mark Cieliebak


Abstract
This paper introduces an adversarial method to stress-test trained metrics for the evaluation of conversational dialogue systems. The method leverages Reinforcement Learning to find response strategies that elicit optimal scores from the trained metrics. We apply our method to test recently proposed trained metrics. We find that they are all susceptible to giving high scores to responses generated by rather simple and obviously flawed strategies that our method converges on. For instance, simply copying parts of the conversation context to form a response yields competitive scores or even outperforms responses written by humans.
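As a minimal sketch of the copy-context strategy mentioned in the abstract (this is a hypothetical illustration, not the authors' RL-learned policy; their actual code is in the jderiu/metric-robustness repository):

```python
def copy_context_response(context_turns, n_tokens=10):
    """Naive adversarial 'strategy': echo the last n_tokens of the
    dialogue context as the response.

    A trained metric that rewards topical overlap with the context can
    assign such responses high scores despite their obvious flaws.
    """
    tokens = " ".join(context_turns).split()
    return " ".join(tokens[-n_tokens:])


# Example dialogue context (two prior turns)
context = [
    "Hi, how was your trip to Dublin?",
    "It was great, the conference venue was right by the river.",
]
print(copy_context_response(context))
```

A trained metric would then score this echoed response against the context, which is the kind of degenerate behaviour the paper's stress test surfaces.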
Anthology ID:
2022.acl-short.85
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
750–761
URL:
https://aclanthology.org/2022.acl-short.85
DOI:
10.18653/v1/2022.acl-short.85
Cite (ACL):
Jan Deriu, Don Tuggener, Pius Von Däniken, and Mark Cieliebak. 2022. Probing the Robustness of Trained Metrics for Conversational Dialogue Systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 750–761, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Probing the Robustness of Trained Metrics for Conversational Dialogue Systems (Deriu et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-short.85.pdf
Software:
 2022.acl-short.85.software.zip
Code:
 jderiu/metric-robustness
Data:
DailyDialog