Learning under Label Proportions for Text Classification

Jatin Chauhan, Xiaoxuan Wang, Wei Wang


Abstract
We present one of the first NLP works under the challenging setup of Learning from Label Proportions (LLP), where the data is provided in an aggregate form called bags, and only the proportion of samples in each class is available as ground truth. This setup is in line with the desired characteristics of training models under privacy-preserving settings and weak supervision. By characterizing some irregularities of the most widely used baseline technique, DLLP, we propose a novel formulation that is also robust. This is accompanied by a learnability result that provides a generalization bound under LLP. Combining this formulation with a self-supervised objective, our method achieves better results than the baselines in almost 87% of the experimental configurations, which include large-scale models for both long- and short-range texts, across multiple metrics.
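To make the setup concrete, the following is a minimal sketch of the bag-level proportion-matching objective used by the DLLP baseline that the abstract contrasts against: per-sample predicted class probabilities are averaged over a bag and compared to the bag's given label proportions via cross-entropy. The function names, list-based tensors, and shapes are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over one sample's logit vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def dllp_loss(bag_logits, bag_proportions, eps=1e-8):
    """DLLP-style bag loss (illustrative sketch).

    bag_logits: list of per-sample logit vectors for one bag,
                shape (bag_size, num_classes).
    bag_proportions: ground-truth class proportions for the bag,
                     shape (num_classes,), summing to 1.
    """
    probs = [softmax(z) for z in bag_logits]
    n, k = len(probs), len(bag_proportions)
    # Predicted bag-level proportions: mean of per-sample probabilities.
    bag_estimate = [sum(p[c] for p in probs) / n for c in range(k)]
    # Cross-entropy between the true and predicted proportions.
    return -sum(p * math.log(q + eps)
                for p, q in zip(bag_proportions, bag_estimate))
```

When the per-sample predictions are confident and their bag-level average matches the given proportions exactly, the loss reduces to the entropy of the proportion vector; note that the loss never sees individual labels, only the aggregate, which is the source of the irregularities the paper characterizes.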
Anthology ID:
2023.findings-emnlp.817
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12210–12223
URL:
https://aclanthology.org/2023.findings-emnlp.817
DOI:
10.18653/v1/2023.findings-emnlp.817
Cite (ACL):
Jatin Chauhan, Xiaoxuan Wang, and Wei Wang. 2023. Learning under Label Proportions for Text Classification. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12210–12223, Singapore. Association for Computational Linguistics.
Cite (Informal):
Learning under Label Proportions for Text Classification (Chauhan et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.817.pdf