StructEval: Deepen and Broaden Large Language Model Assessment via Structured Evaluation

Boxi Cao, Mengjie Ren, Hongyu Lin, Xianpei Han, Feng Zhang, Junfeng Zhan, Le Sun


Abstract
Evaluation is the baton for the development of large language models. Current evaluations typically employ a single-item assessment paradigm for each atomic test objective, which struggles to discern whether a model genuinely possesses the required capabilities or merely memorizes/guesses the answers to specific questions. To this end, this paper proposes a novel evaluation framework referred to as StructEval. Starting from an atomic test objective, StructEval deepens and broadens the evaluation by conducting a structured assessment across multiple cognitive levels and critical concepts, and therefore offers comprehensive, robust, and consistent evaluations of large language models. Experiments on three widely used benchmarks demonstrate that StructEval serves as a reliable tool for resisting the risk of data contamination and reducing the interference of potential biases, thereby providing more reliable and consistent conclusions regarding model capabilities. Our framework also sheds light on the design of future principled and trustworthy LLM evaluation protocols.
Anthology ID:
2024.findings-acl.314
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5300–5318
URL:
https://aclanthology.org/2024.findings-acl.314
Cite (ACL):
Boxi Cao, Mengjie Ren, Hongyu Lin, Xianpei Han, Feng Zhang, Junfeng Zhan, and Le Sun. 2024. StructEval: Deepen and Broaden Large Language Model Assessment via Structured Evaluation. In Findings of the Association for Computational Linguistics ACL 2024, pages 5300–5318, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
StructEval: Deepen and Broaden Large Language Model Assessment via Structured Evaluation (Cao et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.314.pdf