Estimating Agreement by Chance for Sequence Annotation

Diya Li, Carolyn Rose, Ao Yuan, Chunxiao Zhou


Abstract
In natural language processing, correcting performance assessments for chance agreement plays a crucial role in evaluating the reliability of annotations. However, despite the prevalence of sequence annotation tasks, little research has addressed chance correction for assessing their reliability. To address this gap, this paper introduces a novel model for generating random annotations, which serves as the foundation for estimating chance agreement in sequence annotation tasks. Using the proposed randomization model and a related comparison approach, we derive the analytical form of the distribution, enabling computation of the probable location of each annotated text segment and, in turn, estimation of chance agreement. Through a combination of simulation and corpus-based evaluation, we assess the model's applicability and validate its accuracy and efficacy.
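The core idea in the abstract is chance-corrected agreement: observed agreement between annotators is discounted by the agreement they would reach if their annotated segments were placed at random. The paper derives the relevant expectation analytically; the sketch below is only a generic Monte Carlo illustration of the same idea, not the authors' method. All names (random_spans, token_overlap, chance_corrected_agreement), the token-level overlap measure, and the uniform span-placement assumption are chosen here for illustration.

```python
import random

def random_spans(doc_len, span_lens, rng):
    """Place each span of the given length uniformly at random in the document."""
    spans = []
    for length in span_lens:
        start = rng.randrange(0, doc_len - length + 1)
        spans.append((start, start + length))
    return spans

def token_overlap(spans_a, spans_b, doc_len):
    """Fraction of tokens labeled identically (inside a span vs. outside) by both annotators."""
    mask_a = [0] * doc_len
    mask_b = [0] * doc_len
    for s, e in spans_a:
        for i in range(s, e):
            mask_a[i] = 1
    for s, e in spans_b:
        for i in range(s, e):
            mask_b[i] = 1
    return sum(1 for a, b in zip(mask_a, mask_b) if a == b) / doc_len

def chance_corrected_agreement(spans_a, spans_b, doc_len, n_sim=5000, seed=0):
    """Kappa-style correction (observed - expected) / (1 - expected), where the
    expected agreement is estimated by randomly re-placing each annotator's spans
    (same number and lengths) in the document."""
    rng = random.Random(seed)
    observed = token_overlap(spans_a, spans_b, doc_len)
    lens_a = [e - s for s, e in spans_a]
    lens_b = [e - s for s, e in spans_b]
    expected = 0.0
    for _ in range(n_sim):
        expected += token_overlap(random_spans(doc_len, lens_a, rng),
                                  random_spans(doc_len, lens_b, rng),
                                  doc_len)
    expected /= n_sim
    return (observed - expected) / (1.0 - expected)

if __name__ == "__main__":
    # Two annotators marking spans in a 50-token document.
    ann_a = [(3, 8), (20, 25)]
    ann_b = [(4, 9), (21, 24)]
    print(chance_corrected_agreement(ann_a, ann_b, doc_len=50))
```

In contrast to this simulation-based estimate, the paper's contribution is an analytical form of the distribution of randomly placed segments, which makes the chance-agreement term computable without Monte Carlo sampling.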
Anthology ID: 2024.acl-long.278
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 5085–5097
URL: https://aclanthology.org/2024.acl-long.278
DOI: 10.18653/v1/2024.acl-long.278
Cite (ACL): Diya Li, Carolyn Rose, Ao Yuan, and Chunxiao Zhou. 2024. Estimating Agreement by Chance for Sequence Annotation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5085–5097, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Estimating Agreement by Chance for Sequence Annotation (Li et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.278.pdf