Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models

Christopher Schröder, Gerhard Heyer


Abstract
Active learning is an iterative labeling process used to obtain a small labeled subset when no labeled data is available initially, thereby enabling the training of a model for supervised tasks such as text classification. While active learning has made considerable progress in recent years due to improvements provided by pre-trained language models, there is untapped potential in the often neglected unlabeled portion of the data, which is available in considerably larger quantities than the usually small set of labeled data. In this work, we investigate how self-training, a semi-supervised approach that uses a model to obtain pseudo-labels for unlabeled data, can be used to improve the efficiency of active learning for text classification. Building on a comprehensive reproduction of four previous self-training approaches, some of which are evaluated for the first time in the context of active learning or natural language processing, we introduce HAST, a new and effective self-training strategy, which is evaluated on four text classification benchmarks. Our results show that it outperforms the reproduced self-training approaches and reaches classification results comparable to previous experiments for three out of four datasets, using as little as 25% of the data. The code is publicly available at https://github.com/chschroeder/self-training-for-sample-efficient-active-learning.
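The core loop described in the abstract, in which informative examples are queried for human labels and confidently predicted unlabeled examples receive pseudo-labels, can be sketched generically. The following is a minimal illustrative sketch, not the paper's HAST strategy: it substitutes scikit-learn's LogisticRegression for a pre-trained language model, uses least-confidence uncertainty sampling for the query step, and applies a fixed probability threshold for pseudo-labeling. The constants (QUERY_SIZE, CONF_THRESHOLD, ITERATIONS) and the synthetic data are assumptions for demonstration only.

# Minimal sketch of self-training inside an active learning loop
# (illustrative only; not the paper's HAST method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

QUERY_SIZE = 10        # labels requested from the oracle per iteration (assumed)
CONF_THRESHOLD = 0.95  # minimum predicted probability for a pseudo-label (assumed)
ITERATIONS = 5

# Synthetic stand-in for a text classification dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.RandomState(0)
labeled = rng.choice(len(X), size=20, replace=False)    # small seed set
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

for _ in range(ITERATIONS):
    # 1. Active learning query: pick the most uncertain unlabeled instances
    #    (least confidence) and obtain their labels from the oracle,
    #    simulated here by indexing into the known labels y.
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)
    queried = unlabeled[np.argsort(-uncertainty)[:QUERY_SIZE]]
    labeled = np.concatenate([labeled, queried])
    unlabeled = np.setdiff1d(unlabeled, queried)

    # 2. Self-training: retrain, pseudo-label confidently predicted
    #    unlabeled instances, and train on gold plus pseudo-labels.
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])
    confident = unlabeled[proba.max(axis=1) >= CONF_THRESHOLD]
    pseudo_y = clf.predict(X[confident])
    X_train = np.concatenate([X[labeled], X[confident]])
    y_train = np.concatenate([y[labeled], pseudo_y])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("gold labels:", len(labeled), "| pseudo-labels in final round:", len(confident))

In each iteration, the query step grows the gold-labeled pool, while the self-training step temporarily augments it with pseudo-labels; this augmentation is the mechanism the paper exploits to reach comparable classification results from a fraction of the gold labels.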
Anthology ID:
2024.emnlp-main.669
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11987–12004
URL:
https://aclanthology.org/2024.emnlp-main.669
Cite (ACL):
Christopher Schröder and Gerhard Heyer. 2024. Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11987–12004, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models (Schröder & Heyer, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.669.pdf