Don’t Blame the Annotator: Bias Already Starts in the Annotation Instructions

Mihir Parmar, Swaroop Mishra, Mor Geva, Chitta Baral


Abstract
In recent years, progress in NLU has been driven by benchmarks. These benchmarks are typically collected by crowdsourcing, where annotators write examples based on annotation instructions crafted by dataset creators. In this work, we hypothesize that annotators pick up on patterns in the crowdsourcing instructions, which bias them to write many similar examples that are then over-represented in the collected data. We study this form of bias, termed instruction bias, in 14 recent NLU benchmarks, showing that instruction examples often exhibit concrete patterns, which are propagated by crowdworkers to the collected data. This extends previous work (Geva et al., 2019) and raises a new concern of whether we are modeling the dataset creator’s instructions, rather than the task. Through a series of experiments, we show that, indeed, instruction bias can lead to overestimation of model performance, and that models struggle to generalize beyond biases originating in the crowdsourcing instructions. We further analyze the influence of instruction bias in terms of pattern frequency and model size, and derive concrete recommendations for creating future NLU benchmarks.
Anthology ID:
2023.eacl-main.130
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1779–1789
URL:
https://aclanthology.org/2023.eacl-main.130
DOI:
10.18653/v1/2023.eacl-main.130
Award:
EACL Outstanding Paper
Cite (ACL):
Mihir Parmar, Swaroop Mishra, Mor Geva, and Chitta Baral. 2023. Don’t Blame the Annotator: Bias Already Starts in the Annotation Instructions. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1779–1789, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Don’t Blame the Annotator: Bias Already Starts in the Annotation Instructions (Parmar et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.130.pdf
Video:
https://aclanthology.org/2023.eacl-main.130.mp4