Tools Impact on the Quality of Annotations for Chat Untangling

Jhonny Cerezo, Felipe Bravo-Marquez, Alexandre Henri Bergel


Abstract
The quality of annotated data directly influences the success of supervised NLP models. However, creating annotated datasets is often time-consuming and expensive. Although the annotation tool plays an important role, we know little about how it influences annotation quality. We compare the quality of annotations for the task of chat untangling made by non-expert annotators using two different tools. The first is SLATE, an existing command-line-based tool, and the second is Parlay, a new tool we developed that integrates mouse interaction and visual links. Our experimental results indicate that, while both tools perform similarly in terms of annotation quality, Parlay offers a significantly better user experience.
Anthology ID:
2021.acl-srw.22
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
215–220
URL:
https://aclanthology.org/2021.acl-srw.22
DOI:
10.18653/v1/2021.acl-srw.22
PDF:
https://aclanthology.org/2021.acl-srw.22.pdf