Few Shot Rationale Generation using Self-Training with Dual Teachers

Aditya Srikanth Veerubhotla, Lahari Poddar, Jun Yin, György Szarvas, Sharanya Eswaran


Abstract
Self-rationalizing models that also generate a free-text explanation for their predicted labels are an important tool for building trustworthy AI applications. Since generating explanations for annotated labels is a laborious and costly process, recent models rely on large pretrained language models (PLMs) as their backbone and few-shot learning. In this work, we explore a self-training approach that leverages both labeled and unlabeled data to further improve few-shot models, under the assumption that neither human-written rationales nor annotated task labels are available at scale. We introduce a novel dual-teacher learning framework, which learns two specialized teacher models, one for task prediction and one for rationalization, using self-training, and distills their knowledge into a multi-tasking student model that can jointly generate the task label and rationale. Furthermore, we formulate a new loss function, Masked Label Regularization (MLR), which encourages explanations to be strongly conditioned on predicted labels. Evaluations on three public datasets demonstrate that the proposed methods are effective in modeling task labels and generating faithful rationales.
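For intuition only, the sketch below shows one way the abstract's two ideas could be wired together in PyTorch: two frozen teacher specialists distilled into a single multi-task student, plus a masking-based regularizer that rewards rationales for depending on the label. All concrete choices (toy linear models in place of PLMs, a KL distillation objective, a hinge-style masking term) and all names such as make_model, distill_kl, and label_mask are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, HIDDEN, SEQ, BATCH = 100, 32, 8, 4

# Toy stand-ins for the PLM backbones: each maps per-token encodings to
# logits over the vocabulary. In the paper these would be pretrained
# seq2seq language models fine-tuned in a few-shot setting.
def make_model():
    return nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
                         nn.Linear(HIDDEN, VOCAB))

label_teacher = make_model().requires_grad_(False)      # task-label specialist
rationale_teacher = make_model().requires_grad_(False)  # rationale specialist
student = make_model()                                  # joint multi-task student

def distill_kl(student_logits, teacher_logits, T=2.0):
    # Soft-target distillation: match the teacher's temperature-smoothed
    # token distribution (standard knowledge distillation; an assumption here).
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T

x = torch.randn(BATCH, SEQ, HIDDEN)                     # unlabeled input encodings
pseudo_targets = torch.randint(0, VOCAB, (BATCH, SEQ))  # teacher pseudo-labels

# Dual-teacher distillation: the student learns from both specialists.
s_logits = student(x)
loss = (distill_kl(s_logits, label_teacher(x))
        + distill_kl(s_logits, rationale_teacher(x)))

# MLR-flavoured term (an illustrative assumption, not the paper's exact
# formulation): mask the label span of the input and require rationale
# generation to become measurably harder, so that the rationale must
# genuinely depend on the predicted label.
label_mask = torch.zeros(BATCH, SEQ, HIDDEN)
label_mask[:, :2] = 1.0        # pretend positions 0-1 carry the label

def token_ce(logits):
    return F.cross_entropy(logits.reshape(-1, VOCAB), pseudo_targets.reshape(-1))

ce_with_label = token_ce(s_logits)
ce_masked = token_ce(student(x * (1.0 - label_mask)))
mlr = F.relu(1.0 + ce_with_label - ce_masked)  # hinge with margin 1.0

(loss + 0.5 * mlr).backward()
```

The hinge keeps the masking regularizer bounded; whether the paper's MLR uses such a margin, or a different conditioning mechanism entirely, is not specified in the abstract.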
Anthology ID: 2023.findings-acl.297
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4825–4838
URL: https://aclanthology.org/2023.findings-acl.297
DOI: 10.18653/v1/2023.findings-acl.297
Cite (ACL):
Aditya Srikanth Veerubhotla, Lahari Poddar, Jun Yin, György Szarvas, and Sharanya Eswaran. 2023. Few Shot Rationale Generation using Self-Training with Dual Teachers. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4825–4838, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Few Shot Rationale Generation using Self-Training with Dual Teachers (Veerubhotla et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.297.pdf