Leveraging Training Dynamics and Self-Training for Text Classification

Tiberiu Sosea, Cornelia Caragea


Abstract
The effectiveness of pre-trained language models in downstream tasks is highly dependent on the amount of labeled data available for training. Semi-supervised learning (SSL) is a promising technique that has recently attracted wide attention due to its effectiveness in improving deep learning models when training data is scarce. Common approaches employ a teacher-student self-training framework, where a teacher network generates pseudo-labels for unlabeled data, which are then used to iteratively train a student network. In this paper, we propose a new self-training approach for text classification that leverages the training dynamics of unlabeled data. We evaluate our approach on a wide range of text classification tasks, including emotion detection, sentiment analysis, question classification, and grammaticality, which span a variety of domains, e.g., Reddit, Twitter, and online forums. Notably, our method is successful on all benchmarks, obtaining an average increase in F1 score of 3.5% over strong baselines in low-resource settings.
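The teacher-student recipe described in the abstract can be made concrete with a short sketch. The code below is a minimal illustration, not the paper's actual method: it uses a scikit-learn SGDClassifier as a stand-in for a pre-trained language model, and it summarizes training dynamics as the mean and standard deviation of the teacher's confidence in its predicted label across training epochs. The selection thresholds and the `train_teacher_with_dynamics`/`self_train` helpers are assumptions introduced purely for illustration.

```python
# Minimal sketch of teacher-student self-training guided by training dynamics.
# Illustrative only: the classifier, dynamics statistics, and thresholds are
# assumptions, not the method of Sosea & Caragea (2022).
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_teacher_with_dynamics(X_lab, y_lab, X_unlab, n_epochs=10, seed=0):
    """Train a teacher one epoch at a time and record, for each unlabeled
    example, the probability assigned to its final predicted class at every
    epoch. Returns (pseudo_labels, mean_confidence, confidence_std)."""
    # loss="log_loss" enables predict_proba (use loss="log" on scikit-learn < 1.1)
    clf = SGDClassifier(loss="log_loss", random_state=seed)
    classes = np.unique(y_lab)
    probs_per_epoch = []
    for _ in range(n_epochs):
        clf.partial_fit(X_lab, y_lab, classes=classes)
        probs_per_epoch.append(clf.predict_proba(X_unlab))
    probs = np.stack(probs_per_epoch)        # (n_epochs, n_unlab, n_classes)
    pseudo = probs[-1].argmax(axis=1)        # final-epoch pseudo-labels
    # Training dynamics of each unlabeled example: how confidently and how
    # stably the teacher predicted its pseudo-label over the course of training.
    conf_traj = probs[:, np.arange(len(pseudo)), pseudo]  # (n_epochs, n_unlab)
    return pseudo, conf_traj.mean(axis=0), conf_traj.std(axis=0)

def self_train(X_lab, y_lab, X_unlab, rounds=3, conf_thresh=0.8, var_thresh=0.1):
    """Iteratively move high-confidence, low-variability unlabeled examples
    into the labeled set, then train a student on the augmented data."""
    for _ in range(rounds):
        pseudo, conf, var = train_teacher_with_dynamics(X_lab, y_lab, X_unlab)
        keep = (conf > conf_thresh) & (var < var_thresh)  # assumed selection rule
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, pseudo[keep]])
        X_unlab = X_unlab[~keep]
    # Student: retrained from scratch on gold + selected pseudo-labeled data.
    return SGDClassifier(loss="log_loss", random_state=0).fit(X_lab, y_lab)
```

Retraining the student from scratch on the union of gold and selected pseudo-labeled data each round, rather than continuing to fine-tune the teacher, is one common design choice in self-training; the paper's precise selection rule over training dynamics may differ from the simple confidence/variability threshold used here.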
Anthology ID:
2022.findings-emnlp.350
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4750–4762
URL:
https://aclanthology.org/2022.findings-emnlp.350
DOI:
10.18653/v1/2022.findings-emnlp.350
Cite (ACL):
Tiberiu Sosea and Cornelia Caragea. 2022. Leveraging Training Dynamics and Self-Training for Text Classification. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4750–4762, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Leveraging Training Dynamics and Self-Training for Text Classification (Sosea & Caragea, Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.350.pdf