Automating Annotation Guideline Improvements using LLMs: A Case Study
Adrien Bibal, Nathaniel Gerlek, Goran Muric, Elizabeth Boschee, Steven C. Fincke, Mike Ross, Steven N. Minton
Proceedings of Context and Meaning: Navigating Disagreements in NLP Annotation, 2025
Annotating texts can be a tedious task, especially when the texts are noisy. At the root of the issue, annotation guidelines are often not refined enough for annotators to perform the required task reliably. In difficult cases, complex workflows are designed to converge on the best possible guidelines. However, these workflows are typically run with crowdsourced workers, whose slow speed and high cost limit the number of iterations and, therefore, the quality of the resulting guidelines. In this paper, our case study on the entity recognition problem suggests that LLMs can help produce high-quality guidelines (raising inter-annotator agreement from 0.593 to 0.84 when improving WNUT-17’s guidelines), while being faster and cheaper than crowdsourced workers.
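The abstract outlines an iterative workflow: annotate with the current guidelines, measure inter-annotator agreement, and revise the guidelines when agreement is low. Below is a minimal sketch of such a loop, not the paper's actual implementation: the `llm_annotate` and `llm_revise_guidelines` helpers are hypothetical, and Cohen's kappa is used purely for illustration, since the abstract does not name its agreement coefficient.

```python
# Minimal sketch of an LLM-driven guideline-refinement loop, under the
# assumptions stated above. `llm_annotate` and `llm_revise_guidelines`
# are hypothetical callables supplied by the user.
from sklearn.metrics import cohen_kappa_score

def refine_guidelines(guidelines, texts, llm_annotate, llm_revise_guidelines,
                      target_agreement=0.84, max_rounds=5):
    """Iteratively revise annotation guidelines until two independent
    annotation passes agree well enough."""
    agreement = 0.0
    for _ in range(max_rounds):
        # Two independent annotation passes act as two "annotators".
        labels_a = [llm_annotate(guidelines, t) for t in texts]
        labels_b = [llm_annotate(guidelines, t) for t in texts]
        # Cohen's kappa chosen here for illustration only.
        agreement = cohen_kappa_score(labels_a, labels_b)
        if agreement >= target_agreement:
            break
        # Collect disagreements and ask the LLM to clarify the guidelines.
        disagreements = [(t, a, b)
                         for t, a, b in zip(texts, labels_a, labels_b)
                         if a != b]
        guidelines = llm_revise_guidelines(guidelines, disagreements)
    return guidelines, agreement
```

The loop stops either when agreement reaches the target (0.84 here, echoing the figure reported in the abstract) or after a fixed number of rounds, which bounds LLM cost the same way a budget bounds crowdsourcing rounds.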