Zijun Yuan
2023
Common Law Annotations: Investigating the Stability of Dialog System Output Annotations
Seunggun Lee | Alexandra DeLucia | Nikita Nangia | Praneeth Ganedi | Ryan Guan | Rubing Li | Britney Ngaw | Aditya Singhal | Shalaka Vaidya | Zijun Yuan | Lining Zhang | João Sedoc
Findings of the Association for Computational Linguistics: ACL 2023
Metrics for Inter-Annotator Agreement (IAA), like Cohen’s Kappa, are crucial for validating annotated datasets. Although high agreement is often used to show the reliability of annotation procedures, it is insufficient to ensure validity or reproducibility. While researchers are encouraged to increase annotator agreement, this can lead to specific and tailored annotation guidelines. We hypothesize that this may result in diverging annotations from different groups. To study this, we first propose the Lee et al. Protocol (LEAP), a standardized and codified annotation protocol. LEAP strictly enforces transparency in the annotation process, which ensures reproducibility of annotation guidelines. Using LEAP to annotate a dialog dataset, we empirically show that while research groups may create reliable guidelines by raising agreement, this can cause divergent annotations across different research groups, thus questioning the validity of the annotations. Therefore, we caution NLP researchers against using reliability as a proxy for reproducibility and validity.
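The abstract centers on Cohen’s Kappa as the IAA metric whose high values are often taken as evidence of reliability. For reference only (this sketch is not code from the paper, and the function name and toy labels are illustrative), Cohen’s Kappa for two annotators is typically computed as the observed agreement corrected for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's Kappa for two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each annotator's
    marginal label distribution.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)

    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Chance agreement: sum over labels of the product of marginal frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)

    return (p_o - p_e) / (1 - p_e)

# Toy example (hypothetical data): two annotators rating dialog outputs.
ann_1 = ["good", "good", "bad", "good", "bad"]
ann_2 = ["good", "bad", "bad", "good", "bad"]
print(cohens_kappa(ann_1, ann_2))  # ~0.62
```

A Kappa near 1 indicates agreement well above chance; the paper’s point is that such a score can be achieved by tailoring guidelines within one group without guaranteeing that another group, following the same general task description, would produce the same annotations.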