Zero-Shot Rationalization by Multi-Task Transfer Learning from Question Answering

Po-Nien Kung, Tse-Hsuan Yang, Yi-Cheng Chen, Sheng-Siang Yin, Yun-Nung Chen


Abstract
Extracting rationales can help humans understand which information a model uses and how it makes its predictions, improving interpretability. However, annotating rationales requires substantial effort, and only a few datasets contain such labeled rationales, making supervised learning for rationalization difficult. In this paper, we propose a novel approach that leverages the benefits of both multi-task learning and transfer learning to generate rationales through question answering in a zero-shot fashion. On two benchmark rationalization datasets, the proposed method achieves comparable or even better rationalization performance without any supervised signal, demonstrating the great potential of zero-shot rationalization for better interpretability.
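To make the idea in the abstract concrete, below is a minimal sketch of the zero-shot recipe it describes: a model trained on extractive QA (e.g., SQuAD) is pointed at an unlabeled review, and the span it selects as the "answer" to a label-oriented question is read as the rationale. The Hugging Face pipeline, the model checkpoint, and the question template here are illustrative assumptions, not the authors' exact multi-task setup.

```python
# Hedged sketch: zero-shot rationale extraction by transfer from QA.
# A SQuAD-trained QA model selects the review span that best "answers"
# a question about the sentiment label; that span is treated as the
# rationale. No rationale annotations are used anywhere.
from transformers import pipeline

# Illustrative off-the-shelf checkpoint; the paper's multi-task model
# is not released under this name.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

review = ("The film drags in the middle, but the final act is genuinely "
          "moving and the performances are superb.")

# Phrase the classification label as a question (hypothetical template)
# so the QA span head points at supporting evidence in the review.
result = qa(question="Why is this movie review positive?", context=review)
print(result["answer"], result["score"])  # rationale span + model confidence
```

In this framing the QA span head transfers directly: the extracted span plays the role of a rationale, which is why SQuAD appears alongside IMDb Movie Reviews in the data list below.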
Anthology ID:
2020.findings-emnlp.198
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2187–2197
URL:
https://aclanthology.org/2020.findings-emnlp.198
DOI:
10.18653/v1/2020.findings-emnlp.198
Cite (ACL):
Po-Nien Kung, Tse-Hsuan Yang, Yi-Cheng Chen, Sheng-Siang Yin, and Yun-Nung Chen. 2020. Zero-Shot Rationalization by Multi-Task Transfer Learning from Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2187–2197, Online. Association for Computational Linguistics.
Cite (Informal):
Zero-Shot Rationalization by Multi-Task Transfer Learning from Question Answering (Kung et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.198.pdf
Code
miulab/zeroshotrationale
Data
IMDb Movie Reviews
SQuAD