k-Rater Reliability: The Correct Unit of Reliability for Aggregated Human Annotations

Ka Wong, Praveen Paritosh


Abstract
Since the inception of crowdsourcing, aggregation has been a common strategy for dealing with unreliable data. Aggregate ratings are more reliable than individual ones. However, many Natural Language Processing (NLP) applications that rely on aggregate ratings only report the reliability of individual ratings, which is the incorrect unit of analysis. In these instances, data reliability is under-reported; we propose k-rater reliability (kRR), a multi-rater generalization of inter-rater reliability (IRR), as the correct measure of reliability for aggregated datasets. We conducted two replications of the WordSim-353 benchmark, and present empirical, analytical, and bootstrap-based methods for computing kRR on WordSim-353. These methods produce very similar results. We hope this discussion will nudge researchers to report kRR in addition to IRR.
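The abstract mentions a bootstrap-based method for computing kRR but does not spell it out; the exact formulation is in the paper. As a minimal sketch only, the snippet below assumes numeric ratings in an items-by-raters matrix, aggregation by averaging over k raters, and Pearson correlation as a stand-in for whatever reliability statistic the authors actually use. The function name bootstrap_krr and all parameters are hypothetical, not the paper's API.

import numpy as np

def bootstrap_krr(ratings, k, n_boot=1000, seed=0):
    """Bootstrap-style estimate of k-rater reliability (kRR).

    ratings: (n_items, n_raters) array of numeric ratings.
    k: number of raters whose ratings are aggregated (averaged).
    Returns the mean correlation between two disjoint k-rater
    panel averages, over n_boot random splits of the raters.
    """
    rng = np.random.default_rng(seed)
    n_items, n_raters = ratings.shape
    assert 2 * k <= n_raters, "need at least 2k raters to form two panels"
    corrs = []
    for _ in range(n_boot):
        raters = rng.permutation(n_raters)
        # Aggregate each disjoint panel of k raters by averaging per item.
        panel_a = ratings[:, raters[:k]].mean(axis=1)
        panel_b = ratings[:, raters[k:2 * k]].mean(axis=1)
        # Reliability of the aggregate: agreement between the two panels.
        corrs.append(np.corrcoef(panel_a, panel_b)[0, 1])
    return float(np.mean(corrs))

With k = 1 this reduces to an ordinary split-half IRR-style estimate, which is the sense in which kRR generalizes IRR to aggregated ratings.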
Anthology ID:
2022.acl-short.42
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
378–384
URL:
https://aclanthology.org/2022.acl-short.42
DOI:
10.18653/v1/2022.acl-short.42
Cite (ACL):
Ka Wong and Praveen Paritosh. 2022. k-Rater Reliability: The Correct Unit of Reliability for Aggregated Human Annotations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 378–384, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
k-Rater Reliability: The Correct Unit of Reliability for Aggregated Human Annotations (Wong & Paritosh, ACL 2022)
PDF:
https://aclanthology.org/2022.acl-short.42.pdf