Open-Set Semi-Supervised Text Classification via Adversarial Disagreement Maximization

Junfan Chen, Richong Zhang, Junchi Chen, Chunming Hu


Abstract
Open-Set Semi-Supervised Text Classification (OSTC) aims to train a classification model on a limited set of labeled texts, alongside plenty of unlabeled texts that include both in-distribution and out-of-distribution examples. In this paper, we revisit the main challenge in OSTC, i.e., outlier detection, from a measurement disagreement perspective and innovatively propose to improve OSTC performance by directly maximizing the measurement disagreements. Based on the properties of in-measurement and cross-measurements, we design an Adversarial Disagreement Maximization (ADM) model that synergistically optimizes the measurement disagreements. In addition, we develop an abnormal example detection and measurement calibration approach to guarantee the effectiveness of ADM training. Experimental results and comprehensive analysis on three benchmarks demonstrate the effectiveness of our model.
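The abstract leaves "measurement disagreement" abstract. A minimal sketch of the general idea behind disagreement-based outlier detection (not the paper's actual ADM formulation; the function names and the total-variation disagreement measure here are illustrative assumptions) scores an example by how much two classifier heads' predictive distributions differ, with out-of-distribution examples expected to score high:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def disagreement_score(logits_a, logits_b):
    # Total-variation distance between two heads' predictive
    # distributions: 0 when they agree exactly, near 1 when they
    # put their mass on different classes.
    p, q = softmax(logits_a), softmax(logits_b)
    return 0.5 * np.abs(p - q).sum(axis=-1)

# Two heads roughly agree on an in-distribution example (low score)...
ind = disagreement_score(np.array([[4.0, 0.0, 0.0]]),
                         np.array([[3.5, 0.1, 0.0]]))
# ...and disagree on an out-of-distribution example (high score).
ood = disagreement_score(np.array([[4.0, 0.0, 0.0]]),
                         np.array([[0.0, 0.0, 4.0]]))
# Flag examples whose disagreement exceeds a threshold as outliers.
is_outlier = ood > 0.5
```

Training that adversarially *maximizes* such a disagreement on suspected outliers (while keeping it low on in-distribution data) is the rough intuition behind the ADM objective described in the abstract.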
Anthology ID:
2024.acl-long.118
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2170–2180
URL:
https://aclanthology.org/2024.acl-long.118
Cite (ACL):
Junfan Chen, Richong Zhang, Junchi Chen, and Chunming Hu. 2024. Open-Set Semi-Supervised Text Classification via Adversarial Disagreement Maximization. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2170–2180, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Open-Set Semi-Supervised Text Classification via Adversarial Disagreement Maximization (Chen et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.118.pdf