HumanRankEval: Automatic Evaluation of LMs as Conversational Assistants

Milan Gritta, Gerasimos Lampouras, Ignacio Iacobacci


Abstract
Language models (LMs) deployed as conversational assistants have recently become popular tools that help people accomplish a variety of tasks. They are typically built by adapting LMs pretrained on general-domain text through further instruction-tuning and, possibly, preference optimisation. Ideally, such LMs would be evaluated with human judgement; however, this is not scalable. Automatic evaluation, featuring auxiliary LMs as judges and/or knowledge-based tasks, is scalable but struggles to assess conversational ability and adherence to instructions. To help accelerate the development of LMs as conversational assistants, we propose a novel automatic evaluation task: HumanRankEval (HRE). It consists of a large-scale, diverse and high-quality set of questions, each with several answers authored and scored by humans. To perform evaluation, HRE ranks these answers by their log-likelihood under the LM's distribution, then computes the correlation of that ranking with the corresponding human ranking. We support HRE's efficacy by investigating how effectively it separates pretrained and instruction-tuned LMs of various sizes. We show that HRE correlates well with human judgements and is particularly responsive to model changes following instruction-tuning.
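The core scoring step described in the abstract admits a compact illustration. Below is a minimal sketch, assuming a Hugging Face causal LM: each candidate answer is scored by the sum of its token log-probabilities conditioned on the question, and the resulting scores are correlated with the human votes. The model name ("gpt2"), the toy question, answers, and vote values, and the choice of Pearson correlation are illustrative assumptions, not the paper's exact pipeline or data.

```python
# Minimal sketch of HRE-style scoring (illustrative, not the authors' code).
import torch
from scipy.stats import pearsonr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM can be evaluated
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def answer_log_likelihood(question: str, answer: str) -> float:
    """Sum of token log-probs of `answer` conditioned on `question`."""
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs over next tokens at every position except the last.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_scores = log_probs[torch.arange(targets.size(0)), targets]
    # Keep only the answer tokens (assumes the question tokenisation is a
    # prefix of the concatenation, which holds approximately for BPE).
    answer_start = q_ids.size(1) - 1
    return token_scores[answer_start:].sum().item()

# Toy question with human-scored answers; the vote values are made up.
question = "How do I reverse a list in Python? "
answers = [
    "Use lst[::-1] or lst.reverse().",
    "Call reversed(lst) and wrap it in list().",
    "Sort it twice.",
]
human_scores = [25, 12, -3]  # e.g., aggregated community votes

lm_scores = [answer_log_likelihood(question, a) for a in answers]
r, _ = pearsonr(lm_scores, human_scores)
print(f"correlation for this question: {r:.3f}")
```

HRE then aggregates such per-question correlations over the whole dataset; a higher average indicates closer agreement with human preferences.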
Anthology ID:
2024.naacl-long.456
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
8230–8242
URL:
https://aclanthology.org/2024.naacl-long.456
Cite (ACL):
Milan Gritta, Gerasimos Lampouras, and Ignacio Iacobacci. 2024. HumanRankEval: Automatic Evaluation of LMs as Conversational Assistants. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8230–8242, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
HumanRankEval: Automatic Evaluation of LMs as Conversational Assistants (Gritta et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.456.pdf
Copyright:
2024.naacl-long.456.copyright.pdf