How Much Consistency Is Your Accuracy Worth?

Jacob K. Johnson, Ana Marasović


Abstract
Contrast set consistency is a robustness measurement that evaluates the rate at which a model correctly responds to all instances in a bundle of minimally different examples relying on the same knowledge. To draw additional insights, we propose to complement consistency with relative consistency—the probability that an equally accurate model would surpass the consistency of the proposed model, given a distribution over possible consistencies. Models with 100% relative consistency have reached a consistency peak for their accuracy. We reflect on prior work that reports consistency in contrast sets and observe that relative consistency can alter the assessment of a model’s consistency compared to another. We anticipate that our proposed measurement and insights will influence future studies aiming to promote consistent behavior in models.
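The abstract's relative consistency can be illustrated with a small sketch. The paper defines a distribution over possible consistencies for a fixed accuracy; the snippet below is only an illustrative Monte Carlo approximation, assuming that an "equally accurate model" places the same number of correct answers uniformly at random across instances, and that bundles are contiguous, equal-sized groups. The function names and the shuffle baseline are hypothetical, not the paper's exact construction.

```python
import random

def consistency(correct, bundle_size):
    """Fraction of bundles whose instances are all answered correctly."""
    bundles = [correct[i:i + bundle_size]
               for i in range(0, len(correct), bundle_size)]
    return sum(all(b) for b in bundles) / len(bundles)

def relative_consistency(correct, bundle_size, trials=10_000, seed=0):
    """Monte Carlo sketch (assumption: random placement of correct answers):
    probability that a randomly shuffled, equally accurate model attains
    consistency at most that of the observed model."""
    rng = random.Random(seed)
    observed = consistency(correct, bundle_size)
    labels = list(correct)
    at_most = 0
    for _ in range(trials):
        rng.shuffle(labels)  # same accuracy, random instance-level placement
        if consistency(labels, bundle_size) <= observed:
            at_most += 1
    return at_most / trials
```

For example, a model that answers exactly two of four bundles fully correctly (and nothing else) already sits at the consistency peak for 50% accuracy, so this estimate returns 1.0 for it.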
Anthology ID:
2023.blackboxnlp-1.19
Volume:
Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
December
Year:
2023
Address:
Singapore
Editors:
Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
250–260
URL:
https://aclanthology.org/2023.blackboxnlp-1.19
DOI:
10.18653/v1/2023.blackboxnlp-1.19
Bibkey:
Cite (ACL):
Jacob K. Johnson and Ana Marasović. 2023. How Much Consistency Is Your Accuracy Worth?. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 250–260, Singapore. Association for Computational Linguistics.
Cite (Informal):
How Much Consistency Is Your Accuracy Worth? (Johnson & Marasović, BlackboxNLP-WS 2023)
PDF:
https://aclanthology.org/2023.blackboxnlp-1.19.pdf