SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization

Kohei Tsuji, Tatsuya Hiraoka, Yuchang Cheng, Tomoya Iwakura


Abstract
NLP datasets may still contain annotation errors, even when they are manually annotated. Researchers have attempted to develop methods to automatically reduce the adverse effects of annotation errors in datasets. However, existing methods are time-consuming because they require many trained models to detect errors. This paper proposes a time-saving method that utilizes a tokenization technique called subword regularization to simulate multiple error-detection models. Our proposed method, SubRegWeigh, can perform annotation weighting four to five times faster than the existing method. Additionally, SubRegWeigh improved performance in document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identified them as annotation errors. Our code is available at https://github.com/4ldk/SubRegWeigh.
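For readers unfamiliar with subword regularization, the following minimal Python sketch illustrates the sampling step the abstract describes, using SentencePiece's sampled segmentation. It is not the authors' implementation (see the repository linked above for that); the model file path and the predict_labels stub are hypothetical placeholders, and the agreement-based weight shown here is only one plausible scoring rule.

    import collections
    import sentencepiece as spm

    # Hypothetical: "spm.model" stands in for any trained SentencePiece model.
    sp = spm.SentencePieceProcessor(model_file="spm.model")

    def sample_tokenizations(text: str, k: int):
        """Draw k tokenizations via subword regularization (sampled segmentation)."""
        return [
            sp.encode(text, out_type=str, enable_sampling=True, alpha=0.1, nbest_size=-1)
            for _ in range(k)
        ]

    def predict_labels(tokens):
        # Placeholder stub: run one trained model on a single tokenization
        # and return its predicted label for the example.
        raise NotImplementedError

    def annotation_weight(text: str, gold_label, k: int = 10) -> float:
        """Weight an example by how often predictions under k sampled
        tokenizations agree with its gold label; frequent disagreement
        suggests a possible annotation error."""
        votes = collections.Counter(
            predict_labels(tokens) for tokens in sample_tokenizations(text, k)
        )
        return votes[gold_label] / k

Because each sampled tokenization perturbs the model's input, one trained model can stand in for the ensemble of models that earlier approaches retrain from scratch, which is where the reported speedup comes from.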
Anthology ID: 2025.coling-main.130
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 1908–1921
URL: https://aclanthology.org/2025.coling-main.130/
Cite (ACL):
Kohei Tsuji, Tatsuya Hiraoka, Yuchang Cheng, and Tomoya Iwakura. 2025. SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1908–1921, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization (Tsuji et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.130.pdf