T5Score: Discriminative Fine-tuning of Generative Evaluation Metrics

Yiwei Qin, Weizhe Yuan, Graham Neubig, Pengfei Liu


Abstract
Modern embedding-based metrics for evaluating generated text generally fall into one of two paradigms: discriminative metrics that are trained to directly predict which outputs are of higher quality according to supervised human annotations, and generative metrics that are trained to evaluate text based on the probabilities of a generative model. Both have their advantages; discriminative metrics are able to directly optimize for the problem of distinguishing between good and bad outputs, while generative metrics can be trained using abundant raw text. In this paper, we present a framework that combines the best of both worlds, using both supervised and unsupervised signals from whatever data we have available. We operationalize this idea by training T5Score, a metric that uses these training signals with mT5 as its backbone. We perform an extensive empirical comparison with other existing metrics on 5 datasets, 19 languages, and 280 systems, demonstrating the utility of our method. Experimental results show that T5Score achieves the best performance against existing top-scoring metrics on all datasets at the segment level.
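The sketch below illustrates, under stated assumptions, how the two training signals described in the abstract could be combined on an mT5 backbone: a generative loss (likelihood of a reference given the source) and a discriminative loss (a pairwise ranking objective over human preference judgments). This is not the authors' released implementation; the exact objectives, margin value, and data format are assumptions made for illustration.

```python
# Minimal sketch of combining generative and discriminative training
# signals on an mT5 backbone (illustrative; not the paper's exact losses).
import torch
import torch.nn.functional as F
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")


def sequence_log_prob(source: str, hypothesis: str) -> torch.Tensor:
    """Average token log-probability of `hypothesis` given `source`."""
    enc = tokenizer(source, return_tensors="pt")
    labels = tokenizer(hypothesis, return_tensors="pt").input_ids
    out = model(input_ids=enc.input_ids,
                attention_mask=enc.attention_mask,
                labels=labels)
    log_probs = F.log_softmax(out.logits, dim=-1)
    token_lp = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_lp.mean()


def generative_loss(source: str, reference: str) -> torch.Tensor:
    # Unsupervised signal: ordinary cross-entropy fine-tuning on raw
    # parallel text (maximize likelihood of the reference given the source).
    return -sequence_log_prob(source, reference)


def discriminative_loss(source: str, better: str, worse: str,
                        margin: float = 0.1) -> torch.Tensor:
    # Supervised signal: a hinge ranking loss pushing the score of the
    # human-preferred hypothesis above the worse one by a margin
    # (margin value is an assumption for illustration).
    s_better = sequence_log_prob(source, better)
    s_worse = sequence_log_prob(source, worse)
    return F.relu(margin - (s_better - s_worse))


# One illustrative training step mixing both signals (weight is arbitrary):
# loss = generative_loss(src, ref) + 1.0 * discriminative_loss(src, good_hyp, bad_hyp)
```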
Anthology ID:
2023.findings-emnlp.1014
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15185–15202
URL:
https://aclanthology.org/2023.findings-emnlp.1014
DOI:
10.18653/v1/2023.findings-emnlp.1014
Cite (ACL):
Yiwei Qin, Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2023. T5Score: Discriminative Fine-tuning of Generative Evaluation Metrics. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15185–15202, Singapore. Association for Computational Linguistics.
Cite (Informal):
T5Score: Discriminative Fine-tuning of Generative Evaluation Metrics (Qin et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.1014.pdf