Abhishek Ghose


2024

On the Fragility of Active Learners for Text Classification
Abhishek Ghose | Emma Nguyen
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Active learning (AL) techniques optimally utilize a labeling budget by iteratively selecting instances that are most valuable for learning. However, they lack “prerequisite checks”, i.e., there are no prescribed criteria for picking the AL algorithm best suited to a dataset. A practitioner must pick a technique they trust will beat random sampling, based on prior reported results, and hope that it is resilient to the many variables in their environment: dataset, labeling budget, and prediction pipeline. The important questions then are: how often, on average, do we expect any AL technique to reliably beat the computationally cheap and easy-to-implement strategy of random sampling? Does it at least make sense to use AL in an “Always ON” mode in a prediction pipeline, so that while it might not always help, it never under-performs random sampling? How much of a role does the prediction pipeline play in AL’s success? We examine these questions in detail for the task of text classification using pre-trained representations, which are ubiquitous today. Our primary contribution is a rigorous evaluation of AL techniques, old and new, across setups that vary with respect to datasets, text representations, and classifiers. This unlocks multiple insights around warm-up times, i.e., the number of labels needed before gains from AL are seen, the viability of an “Always ON” mode, and the relative significance of different factors. Additionally, we release a framework for rigorous benchmarking of AL techniques for text classification.
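For intuition, the comparison at the heart of this evaluation can be sketched as a pool-based AL loop measured against a random-sampling baseline. The snippet below is a minimal illustration of that setup, not the authors' released framework; the dataset, classifier, least-confidence acquisition strategy, and budget parameters are all assumptions chosen for brevity.

```python
# Minimal sketch of AL-vs-random-sampling comparison (illustrative only;
# not the paper's released benchmarking framework). Dataset, model, and
# budget settings below are assumptions, not the paper's experimental setup.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

cats = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)
vec = TfidfVectorizer(max_features=5000)
X_pool, y_pool = vec.fit_transform(train.data), np.array(train.target)
X_test, y_test = vec.transform(test.data), np.array(test.target)

def run(strategy, seed_size=20, batch=20, rounds=10):
    """Label `batch` points per round; return the test-F1 learning curve."""
    rng = np.random.default_rng(0)  # same seed set for both strategies
    labeled = list(rng.choice(len(y_pool), size=seed_size, replace=False))
    scores = []
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_pool[labeled], y_pool[labeled])
        scores.append(f1_score(y_test, clf.predict(X_test)))
        unlabeled = np.setdiff1d(np.arange(len(y_pool)), labeled)
        if strategy == "uncertainty":
            # Least-confidence sampling: query points whose top-class
            # predicted probability is lowest.
            conf = clf.predict_proba(X_pool[unlabeled]).max(axis=1)
            picked = unlabeled[np.argsort(conf)[:batch]]
        else:
            # Random-sampling baseline.
            picked = rng.choice(unlabeled, size=batch, replace=False)
        labeled.extend(picked.tolist())
    return scores

print("uncertainty:", [round(s, 3) for s in run("uncertainty")])
print("random:    ", [round(s, 3) for s in run("random")])
```

The gap (or lack thereof) between the two printed learning curves is exactly the quantity the paper interrogates: whether, and after how many labels, the AL strategy reliably beats random sampling under a given pipeline.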