On the Fragility of Active Learners for Text Classification

Abhishek Ghose, Emma Nguyen


Abstract
Active learning (AL) techniques optimally utilize a labeling budget by iteratively selecting instances that are most valuable for learning. However, they lack “prerequisite checks”, i.e., there are no prescribed criteria to pick an AL algorithm best suited for a dataset. A practitioner must pick a technique they trust would beat random sampling, based on prior reported results, and hope that it is resilient to the many variables in their environment: dataset, labeling budget and prediction pipelines. The important questions then are: how often, on average, do we expect any AL technique to reliably beat the computationally cheap and easy-to-implement strategy of random sampling? Does it at least make sense to use AL in an “Always ON” mode in a prediction pipeline, so that while it might not always help, it never under-performs random sampling? How much of a role does the prediction pipeline play in AL’s success? We examine these questions in detail for the task of text classification using pre-trained representations, which are ubiquitous today. Our primary contribution here is a rigorous evaluation of AL techniques, old and new, across setups that vary with respect to datasets, text representations and classifiers. This unlocks multiple insights around warm-up times, i.e., the number of labels before gains from AL are seen, the viability of an “Always ON” mode, and the relative significance of different factors. Additionally, we release a framework for rigorous benchmarking of AL techniques for text classification.
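For context, the sketch below shows the kind of pool-based AL loop the abstract alludes to, with least-confidence uncertainty sampling as one representative query strategy and random sampling as the baseline it is compared against. The dataset, TF-IDF features, classifier and batch sizes are illustrative assumptions, not the paper's actual experimental setup or its released framework.

```python
# Minimal pool-based active learning loop (least-confidence sampling vs. random sampling).
# All choices below (20 Newsgroups, TF-IDF, logistic regression, batch sizes) are stand-ins.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
X = TfidfVectorizer(max_features=5000).fit_transform(data.data)
y = np.array(data.target)

# Split into a held-out test set, an unlabeled pool, and a small labeled seed set.
idx = rng.permutation(len(y))
test_idx, pool_idx, labeled_idx = idx[:300], list(idx[300:-20]), list(idx[-20:])

def least_confidence(clf, X_pool):
    """Rank pool instances by uncertainty: lowest max class probability first."""
    probs = clf.predict_proba(X_pool)
    return np.argsort(probs.max(axis=1))

batch_size, rounds = 20, 10
for r in range(rounds):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled_idx], y[labeled_idx])
    score = f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro")
    print(f"round {r}: {len(labeled_idx)} labels, macro-F1 = {score:.3f}")

    # Query step: label the most uncertain pool instances.
    # Swapping this for rng.choice(pool_idx, batch_size) gives the random-sampling baseline.
    ranked = least_confidence(clf, X[pool_idx])
    picked = [pool_idx[i] for i in ranked[:batch_size]]
    labeled_idx.extend(picked)
    pool_idx = [i for i in pool_idx if i not in set(picked)]
```

Comparing the learning curve of this loop against the random-sampling variant, across different datasets, representations and classifiers, is the kind of evaluation the paper carries out at scale.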
Anthology ID: 2024.emnlp-main.1240
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 22217–22233
URL: https://aclanthology.org/2024.emnlp-main.1240
Cite (ACL): Abhishek Ghose and Emma Nguyen. 2024. On the Fragility of Active Learners for Text Classification. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22217–22233, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): On the Fragility of Active Learners for Text Classification (Ghose & Nguyen, EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.1240.pdf