Chunxiao Zhou
2024
Estimating Agreement by Chance for Sequence Annotation
Diya Li | Carolyn Rose | Ao Yuan | Chunxiao Zhou
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In natural language processing, correcting performance assessments for chance agreement plays a crucial role in evaluating the reliability of annotations. However, despite the prevalence of sequence annotation tasks in the field, there is a notable dearth of research on chance correction for assessing their reliability. To address this gap, this paper introduces a novel model for generating random annotations, which serves as the foundation for estimating chance agreement in sequence annotation tasks. Using the proposed randomization model and a related comparison approach, we derive the analytical form of the distribution, enabling computation of the probable location of each annotated text segment and subsequent estimation of chance agreement. Through a combination of simulation and corpus-based evaluation, we assess the method's applicability and validate its accuracy and efficacy.
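The paper derives the distribution analytically; as a rough, hypothetical illustration of the problem it addresses (and not the authors' method), the Monte Carlo sketch below estimates chance agreement between two independent annotators who each place one fixed-length span uniformly at random in a sequence. The function names, the fixed-span randomization, and the token-level Jaccard overlap used as the agreement measure are all assumptions made for illustration.

```python
import random


def random_segment(seq_len: int, span_len: int, rng: random.Random) -> set[int]:
    """Place a span of span_len tokens uniformly at random in [0, seq_len)."""
    start = rng.randrange(seq_len - span_len + 1)
    return set(range(start, start + span_len))


def chance_agreement(seq_len: int, span_len: int,
                     trials: int = 50_000, seed: int = 0) -> float:
    """Monte Carlo estimate of expected token-level Jaccard overlap
    between two independent random annotators (illustrative measure only)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a = random_segment(seq_len, span_len, rng)
        b = random_segment(seq_len, span_len, rng)
        total += len(a & b) / len(a | b)
    return total / trials


if __name__ == "__main__":
    # E.g., 100-token sequence, 5-token annotated segments.
    print(f"Estimated chance agreement: {chance_agreement(100, 5):.4f}")
```

Such a simulation can serve as a sanity check against an analytical result: as the trial count grows, the Monte Carlo estimate should converge to the closed-form expectation under the same randomization model.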
2022
QA4IE: A Quality Assurance Tool for Information Extraction
Rafael Jimenez Silva | Kaushik Gedela | Alex Marr | Bart Desmet | Carolyn Rose | Chunxiao Zhou
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Quality assurance (QA) is an essential though underdeveloped part of the data annotation process. Although existing annotation tools support QA to some extent, comprehensive QA support is not standard. In this paper we contribute QA4IE, a comprehensive QA tool for information extraction, which can (1) detect potential problems in text annotations in a timely manner, (2) accurately assess the quality of annotations, (3) visually display and summarize annotation discrepancies among annotation team members, (4) provide a comprehensive statistics report, and (5) support interactive viewing of annotated documents. This paper offers a competitive analysis comparing QA4IE with other popular annotation tools and demonstrates its features, usage, and effectiveness through a case study. The Python code, documentation, and demonstration video are publicly available at https://github.com/CC-RMD-EpiBio/QA4IE.
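QA4IE's actual interface is documented in the linked repository; the sketch below is a hypothetical, minimal illustration of one capability the abstract describes, detecting annotation discrepancies between two annotators' span annotations, and is not QA4IE's API. The Span type, exact-match comparison, and function name are assumptions made for illustration.

```python
from typing import NamedTuple


class Span(NamedTuple):
    """One annotated text segment: character offsets plus a label."""
    start: int
    end: int
    label: str


def annotation_discrepancies(ann_a: list[Span], ann_b: list[Span]) -> dict:
    """Partition two annotators' spans into agreed and disputed sets,
    using exact match on (start, end, label)."""
    a, b = set(ann_a), set(ann_b)
    return {
        "agreed": sorted(a & b),
        "only_a": sorted(a - b),
        "only_b": sorted(b - a),
    }


if __name__ == "__main__":
    ann_a = [Span(0, 5, "DRUG"), Span(10, 14, "DOSE")]
    ann_b = [Span(0, 5, "DRUG"), Span(10, 15, "DOSE")]
    # The DOSE spans disagree on the end offset, so both land in the
    # "only" buckets rather than "agreed".
    print(annotation_discrepancies(ann_a, ann_b))
```

A real QA tool would additionally handle partial overlaps, multiple annotators, and aggregate statistics, which is where reports and visual summaries like those QA4IE provides become useful.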