Ranking-Constrained Learning with Rationales for Text Classification

Juanyan Wang, Manali Sharma, Mustafa Bilgic


Abstract
We propose a novel approach that jointly utilizes labels and elicited rationales for text classification to speed up the training of deep learning models with limited training data. We define and optimize a ranking-constrained loss function that combines cross-entropy loss with ranking losses serving as rationale constraints. We evaluate our proposed rationale-augmented learning approach on three human-annotated datasets, and show that it provides significant improvements over classification approaches that do not utilize rationales, as well as over other state-of-the-art rationale-augmented baselines.
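To make the idea of "cross-entropy plus ranking losses as rationale constraints" concrete, the sketch below shows one plausible way such an objective could be assembled in PyTorch. This is an illustrative assumption, not the authors' exact formulation: the names (`token_scores`, `rationale_mask`, `margin`, `lam`) and the hinge-style ranking term are hypothetical choices for how token importance scores for annotated rationale tokens might be constrained to rank above those of non-rationale tokens.

```python
# Minimal sketch (assumed formulation, not the paper's exact loss):
# cross-entropy on the document label plus a margin-based ranking
# constraint that pushes rationale-token importance above non-rationale.
import torch
import torch.nn.functional as F

def ranking_constrained_loss(logits, labels, token_scores, rationale_mask,
                             pad_mask, margin=0.1, lam=1.0):
    """
    logits:         (batch, num_classes) classifier outputs
    labels:         (batch,) gold class indices
    token_scores:   (batch, seq_len) per-token importance scores (e.g. attention)
    rationale_mask: (batch, seq_len) 1 for annotated rationale tokens, else 0
    pad_mask:       (batch, seq_len) 1 for real tokens, 0 for padding
    margin, lam:    hypothetical hyperparameters for the ranking constraint
    """
    # Standard supervised objective on the document-level label.
    ce = F.cross_entropy(logits, labels)

    # Mean importance of rationale tokens vs. non-rationale (non-pad) tokens.
    rat = rationale_mask * pad_mask
    non_rat = (1 - rationale_mask) * pad_mask
    rat_score = (token_scores * rat).sum(1) / rat.sum(1).clamp(min=1)
    non_rat_score = (token_scores * non_rat).sum(1) / non_rat.sum(1).clamp(min=1)

    # Hinge-style ranking constraint: rationale tokens should score at least
    # `margin` higher than non-rationale tokens, on average.
    ranking = F.relu(margin - (rat_score - non_rat_score)).mean()

    return ce + lam * ranking
```

In this sketch, `lam` trades off label supervision against the rationale constraint; the paper's actual constraint form, score definition, and weighting may differ.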
Anthology ID:
2022.findings-acl.161
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2034–2046
URL:
https://aclanthology.org/2022.findings-acl.161
DOI:
10.18653/v1/2022.findings-acl.161
Cite (ACL):
Juanyan Wang, Manali Sharma, and Mustafa Bilgic. 2022. Ranking-Constrained Learning with Rationales for Text Classification. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2034–2046, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Ranking-Constrained Learning with Rationales for Text Classification (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.161.pdf