RED-CT: A Systems Design Methodology for Using LLM-labeled Data to Train and Deploy Edge Linguistic Classifiers

David Farr, Nico Manzonelli, Iain Cruickshank, Jevin West


Abstract
Large language models (LLMs) have enhanced our ability to rapidly analyze and classify unstructured natural language data. However, cost, network limitations, and security concerns have hindered their integration into industry processes. In this study, we adopt a systems design approach to employing LLMs as imperfect data annotators for downstream supervised learning tasks, introducing system intervention measures aimed at improving classification performance. Our methodology outperforms LLM-generated labels in six of eight tests and base classifiers in all tests, demonstrating an effective strategy for incorporating LLMs into the design and deployment of the specialized supervised learning models found in many industry use cases.
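The pipeline the abstract describes can be sketched as follows: an LLM annotates an unlabeled pool of text, and those imperfect labels train a lightweight classifier that then runs at the edge without network access. This is a minimal illustration, not the authors' released code; the `query_llm` stub, the toy examples, and the scikit-learn classifier are all assumptions, and the paper's system intervention measures are omitted here.

```python
# Minimal sketch of an LLM-as-annotator pipeline (illustrative assumptions,
# not the RED-CT implementation): an LLM labels unlabeled text, and those
# imperfect labels train a small classifier suitable for edge deployment.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def query_llm(text: str) -> str:
    """Hypothetical LLM annotator: prompt a model to classify `text`.
    Replace with a real API call; a trivial keyword rule stands in here."""
    return "positive" if "good" in text.lower() else "negative"


# Unlabeled corpus (toy examples, not data from the paper).
unlabeled = [
    "The service was good and fast.",
    "Terrible experience, never again.",
    "Good value for the price.",
    "The product broke after one day.",
]

# Step 1: the LLM produces (imperfect) labels for the unlabeled pool.
llm_labels = [query_llm(t) for t in unlabeled]

# Step 2: train a cheap supervised classifier on the LLM-labeled data.
edge_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
edge_clf.fit(unlabeled, llm_labels)

# Step 3: the fitted classifier runs offline at the edge, no LLM required.
print(edge_clf.predict(["Pretty good overall."]))
```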
Anthology ID: 2025.coling-industry.5
Volume: Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert, Kareem Darwish, Apoorv Agarwal
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 58–67
URL: https://aclanthology.org/2025.coling-industry.5/
Cite (ACL): David Farr, Nico Manzonelli, Iain Cruickshank, and Jevin West. 2025. RED-CT: A Systems Design Methodology for Using LLM-labeled Data to Train and Deploy Edge Linguistic Classifiers. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 58–67, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): RED-CT: A Systems Design Methodology for Using LLM-labeled Data to Train and Deploy Edge Linguistic Classifiers (Farr et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-industry.5.pdf