BECEL: Benchmark for Consistency Evaluation of Language Models

Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz


Abstract
Behavioural consistency is a critical condition for a language model (LM) to become trustworthy like humans. Despite its importance, there is little consensus on how LM consistency should be defined, and definitions differ across studies. In this paper, we first propose a definition of LM consistency based on behavioural consistency and establish a taxonomy that classifies previously studied consistency types into several sub-categories. Next, we create a new benchmark that evaluates a model on 19 test cases, distinguished by multiple types of consistency and diverse downstream tasks. Through extensive experiments on the new benchmark, we find that none of the modern pre-trained language models (PLMs) performs well on every test case, and that they exhibit high inconsistency in many cases. Our experimental results suggest that a unified benchmark covering broad aspects (i.e., multiple consistency types and tasks) is essential for a more precise evaluation.
Anthology ID:
2022.coling-1.324
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3680–3696
URL:
https://aclanthology.org/2022.coling-1.324
Cite (ACL):
Myeongjun Jang, Deuk Sin Kwon, and Thomas Lukasiewicz. 2022. BECEL: Benchmark for Consistency Evaluation of Language Models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3680–3696, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
BECEL: Benchmark for Consistency Evaluation of Language Models (Jang et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.324.pdf
Code
mj-jang/becel
Data
AG News, BoolQ, COCO, MRPC, SNLI, SST, WiC