Automatic Evaluation of Generative Models with Instruction Tuning

Shuhaib Mehri, Vered Shwartz


Abstract
Automatic evaluation of natural language generation has long been an elusive goal in NLP. A recent paradigm fine-tunes pre-trained language models to emulate human judgements for a particular task and evaluation criterion. Inspired by the generalization ability of instruction-tuned models, we propose a learned metric based on instruction tuning. To test our approach, we collected HEAP, a dataset of human judgements across various NLG tasks and evaluation criteria. Our findings demonstrate that instruction tuning language models on HEAP yields good performance on many evaluation tasks, though some criteria are harder to learn than others. Further, jointly training on multiple tasks can yield additional performance improvements, which can be beneficial for future tasks with little to no human-annotated data.
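
To illustrate the paradigm the abstract describes, below is a minimal sketch (not the authors' released code) of how a single human-judgement example might be cast as a natural-language instruction and scored with an instruction-tuned seq2seq model via Hugging Face transformers. The prompt template, field names, and checkpoint choice are illustrative assumptions, not the paper's exact setup.

# Minimal sketch: casting an NLG evaluation instance as an instruction.
# The prompt template and example fields are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # any instruction-tuned checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def build_prompt(task, criterion, source, output):
    """Format one (task, criterion, input, output) tuple as an instruction."""
    return (
        f"Task: {task}\n"
        f"Evaluate the following output for {criterion} on a scale of 1 to 5.\n"
        f"Input: {source}\n"
        f"Output: {output}\n"
        f"Rating:"
    )

prompt = build_prompt(
    task="summarization",
    criterion="coherence",
    source="The city council met on Tuesday to discuss the new transit plan.",
    output="The council discussed a transit plan on Tuesday.",
)
inputs = tokenizer(prompt, return_tensors="pt")
pred = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(pred[0], skip_special_tokens=True))  # e.g. "4"

Fine-tuning such a model on prompts of this form, drawn from many tasks and criteria jointly, is the gist of the instruction-tuning approach the paper evaluates on HEAP.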
Anthology ID:
2023.gem-1.4
Volume:
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Sebastian Gehrmann, Alex Wang, João Sedoc, Elizabeth Clark, Kaustubh Dhole, Khyathi Raghavi Chandu, Enrico Santus, Hooman Sedghamiz
Venues:
GEM | WS
Publisher:
Association for Computational Linguistics
Pages:
42–52
URL:
https://aclanthology.org/2023.gem-1.4
Cite (ACL):
Shuhaib Mehri and Vered Shwartz. 2023. Automatic Evaluation of Generative Models with Instruction Tuning. In Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 42–52, Singapore. Association for Computational Linguistics.
Cite (Informal):
Automatic Evaluation of Generative Models with Instruction Tuning (Mehri & Shwartz, GEM-WS 2023)
PDF:
https://aclanthology.org/2023.gem-1.4.pdf