What Will it Take to Fix Benchmarking in Natural Language Understanding?

Samuel R. Bowman, George Dahl


Abstract
Evaluation for many natural language understanding (NLU) tasks is broken: unreliable and biased systems score so highly on standard benchmarks that there is little room for researchers who develop better systems to demonstrate their improvements. The recent trend to abandon independent and identically distributed (IID) benchmarks in favor of adversarially constructed, out-of-distribution test sets ensures that current models will perform poorly, but ultimately only obscures the abilities that we want our benchmarks to measure. In this position paper, we lay out four criteria that we argue NLU benchmarks should meet. We argue that most current benchmarks fail to meet these criteria, and that adversarial data collection does not meaningfully address the causes of these failures. Instead, restoring a healthy evaluation ecosystem will require significant progress in the design of benchmark datasets, the reliability with which they are annotated, their size, and the ways they handle social bias.
Anthology ID: 2021.naacl-main.385
Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month: June
Year: 2021
Address: Online
Editors: Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 4843–4855
URL: https://aclanthology.org/2021.naacl-main.385
DOI: 10.18653/v1/2021.naacl-main.385
Cite (ACL): Samuel R. Bowman and George Dahl. 2021. What Will it Take to Fix Benchmarking in Natural Language Understanding? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4843–4855, Online. Association for Computational Linguistics.
Cite (Informal): What Will it Take to Fix Benchmarking in Natural Language Understanding? (Bowman & Dahl, NAACL 2021)
PDF: https://aclanthology.org/2021.naacl-main.385.pdf
Video: https://aclanthology.org/2021.naacl-main.385.mp4
Data: GLUE, Natural Questions, SQuAD, SuperGLUE