Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models

Chaitanya Malaviya, Sudeep Bhatia, Mark Yatskar


Abstract
Cognitive psychologists have documented that humans use cognitive heuristics, or mental shortcuts, to make quick decisions while expending less effort. We hypothesize that, while performing annotation work on crowdsourcing platforms, such heuristic use among annotators cascades on to data quality and model robustness. In this work, we study cognitive heuristic use in the context of annotating multiple-choice reading comprehension datasets. We propose tracking annotator heuristic traces, where we tangibly measure low-effort annotation strategies that could indicate usage of various cognitive heuristics. We find evidence that annotators might be using multiple such heuristics, based on correlations with a battery of psychological tests. Importantly, heuristic use among annotators determines data quality along several dimensions: (1) known biased models, such as partial input models, more easily solve examples authored by annotators who rate highly on heuristic use, (2) models trained on data from annotators scoring highly on heuristic use don’t generalize as well, and (3) heuristic-seeking annotators tend to create qualitatively less challenging examples. Our findings suggest that tracking heuristic usage among annotators can potentially help with collecting challenging datasets and diagnosing model biases.
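The "partial input models" mentioned above refer to a family of bias probes in which a model sees only part of each example. The sketch below is a hypothetical toy illustration (not the paper's actual models or data): an answer-only baseline for multiple-choice reading comprehension that never reads the passage or question, here reduced to a single surface cue (option length). If such a baseline beats chance, the answer options alone leak the label.

```python
# Toy illustration of a partial-input baseline for multiple-choice QA.
# The "model" sees only the answer options, never the passage or question,
# and exploits a classic surface cue: correct answers written by low-effort
# annotators are often longer and more specific than the distractors.
# All examples below are invented for illustration.

def longest_option_baseline(options):
    """Return the index of the longest option (a partial-input heuristic)."""
    return max(range(len(options)), key=lambda i: len(options[i]))

# Hypothetical examples, each a (options, gold_index) pair.
examples = [
    (["Paris", "the capital and largest city of France", "Rome", "Berlin"], 1),
    (["yes", "no", "only when the river froze over in winter", "maybe"], 2),
]

accuracy = sum(
    longest_option_baseline(opts) == gold for opts, gold in examples
) / len(examples)
print(accuracy)  # 1.0 on these hand-picked examples; chance is 0.25
```

In practice such probes are trained models (e.g. a classifier over the options alone) rather than a fixed length rule, but the diagnostic logic is the same: above-chance partial-input accuracy signals annotation artifacts rather than reading comprehension.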
Anthology ID:
2022.emnlp-main.438
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6525–6540
URL:
https://aclanthology.org/2022.emnlp-main.438
DOI:
10.18653/v1/2022.emnlp-main.438
Cite (ACL):
Chaitanya Malaviya, Sudeep Bhatia, and Mark Yatskar. 2022. Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6525–6540, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models (Malaviya et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.438.pdf