SynthEval: Hybrid Behavioral Testing of NLP Models with Synthetic Evaluation

Raoyuan Zhao, Abdullatif Köksal, Yihong Liu, Leonie Weissweiler, Anna Korhonen, Hinrich Schuetze


Abstract
Traditional benchmarking in NLP typically involves using static, held-out test sets and calculating aggregated statistics over diverse examples. However, this approach often overestimates performance and fails to offer comprehensive, interpretable, and dynamic assessments of NLP models. Recently, work such as DynaBench and CheckList has addressed these limitations through behavioral testing of NLP models, with test types generated by a multi-step, human-annotated pipeline. Unfortunately, manually creating a variety of test types requires significant human labor and is therefore inefficient. In this work, we propose SynthEval, a hybrid behavioral testing framework that leverages large language models (LLMs) to generate a wide range of test types for a comprehensive evaluation of NLP models. SynthEval first generates sentences via controlled LLM generation, and then identifies challenging examples by comparing the LLMs' predictions with those of task-specific NLP models. In the last stage, human experts investigate the challenging examples, manually design templates, and identify the types of failures the task-specific models consistently exhibit. We apply SynthEval to two classification tasks and show that our framework is effective at identifying weaknesses of strong models on these tasks.
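A minimal Python sketch of the framework's second stage, disagreement mining, assuming a Hugging Face sentiment classifier as the task-specific model. The checkpoint, the helper names, and the stand-in LLM labeler are illustrative assumptions, not the authors' implementation:

from transformers import pipeline

# Task-specific model under test; this checkpoint is an illustrative
# assumption, not necessarily one evaluated in the paper.
task_model = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def task_predict(sentence: str) -> str:
    # Normalize the pipeline output to a lowercase label string.
    return task_model(sentence)[0]["label"].lower()

def find_challenging(sentences, llm_predict, task_predict):
    # Stage 2 of the framework: keep sentences where the LLM's label
    # and the task-specific model's label disagree; these candidates
    # go to human experts for template design (stage 3).
    challenging = []
    for s in sentences:
        llm_label, task_label = llm_predict(s), task_predict(s)
        if llm_label != task_label:
            challenging.append((s, llm_label, task_label))
    return challenging

# Stand-in LLM labeler: in practice this would be a prompted LLM call;
# a fixed heuristic stub keeps the sketch self-contained and runnable.
llm_stub = lambda s: "negative" if "not" in s.lower() else "positive"

hard = find_challenging(
    ["The movie was not bad at all.", "A delightful surprise."],
    llm_stub,
    task_predict,
)
print(hard)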
Anthology ID:
2024.findings-emnlp.412
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7017–7034
URL:
https://aclanthology.org/2024.findings-emnlp.412
DOI:
10.18653/v1/2024.findings-emnlp.412
Cite (ACL):
Raoyuan Zhao, Abdullatif Köksal, Yihong Liu, Leonie Weissweiler, Anna Korhonen, and Hinrich Schuetze. 2024. SynthEval: Hybrid Behavioral Testing of NLP Models with Synthetic Evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7017–7034, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
SynthEval: Hybrid Behavioral Testing of NLP Models with Synthetic Evaluation (Zhao et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.412.pdf
Software:
2024.findings-emnlp.412.software.zip
Data:
2024.findings-emnlp.412.data.zip