Self-training with Few-shot Rationalization

Meghana Moorthy Bhat, Alessandro Sordoni, Subhabrata Mukherjee


Abstract
While pre-trained language models have obtained state-of-the-art performance on several natural language understanding tasks, they are quite opaque in terms of their decision-making process. Although some recent works focus on rationalizing neural predictions by highlighting salient concepts in the text as justifications or rationales, they rely on thousands of labeled training examples with both task labels and annotated rationales for every instance. Such large-scale annotations are infeasible to obtain for many tasks. To this end, we develop a multi-task teacher-student framework based on self-training pre-trained language models with limited task-specific labels and rationales, and judicious sample selection to learn from informative pseudo-labeled examples. We study several characteristics of what constitutes a good rationale and demonstrate that neural model performance can be significantly improved by making it aware of its rationalized predictions, particularly in low-resource settings. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our approach.
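
The abstract outlines a teacher-student self-training loop with judicious selection of pseudo-labeled examples. The Python sketch below illustrates one generic round of such a loop with confidence-based sample selection; it is a minimal sketch under assumed interfaces (models returning .logits in the Hugging Face style, hypothetical loader and threshold names) and does not reproduce the paper's exact multi-task or rationalization objectives.

    # Minimal, generic sketch of teacher-student self-training with
    # confidence-based sample selection. Illustrative only; model, loader,
    # and threshold names are hypothetical, not from the paper.
    import torch
    import torch.nn.functional as F

    def self_train_round(teacher, student, labeled_loader, unlabeled_loader,
                         optimizer, confidence_threshold=0.9):
        """One round: the teacher pseudo-labels unlabeled text, and the student
        is trained on confident pseudo-labels plus the few labeled examples."""
        teacher.eval()
        student.train()

        # 1) Teacher assigns pseudo-labels to unlabeled examples.
        pseudo_batches = []
        with torch.no_grad():
            for batch in unlabeled_loader:          # batch: dict of input tensors
                logits = teacher(**batch).logits
                probs = F.softmax(logits, dim=-1)
                conf, labels = probs.max(dim=-1)
                keep = conf >= confidence_threshold  # sample selection by confidence
                if keep.any():
                    pseudo_batches.append(
                        ({k: v[keep] for k, v in batch.items()}, labels[keep]))

        # 2) Student learns from the labeled data and the confident pseudo-labels.
        for batch, labels in list(labeled_loader) + pseudo_batches:
            optimizer.zero_grad()
            logits = student(**batch).logits
            loss = F.cross_entropy(logits, labels)
            loss.backward()
            optimizer.step()
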
Anthology ID:
2021.emnlp-main.836
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10702–10712
URL:
https://aclanthology.org/2021.emnlp-main.836
DOI:
10.18653/v1/2021.emnlp-main.836
Cite (ACL):
Meghana Moorthy Bhat, Alessandro Sordoni, and Subhabrata Mukherjee. 2021. Self-training with Few-shot Rationalization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10702–10712, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Self-training with Few-shot Rationalization (Bhat et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.836.pdf
Video:
https://aclanthology.org/2021.emnlp-main.836.mp4
Data:
BoolQ, FEVER, e-SNLI