Enhancing Multiple-choice Machine Reading Comprehension by Punishing Illogical Interpretations

Yiming Ju, Yuanzhe Zhang, Zhixing Tian, Kang Liu, Xiaohuan Cao, Wenting Zhao, Jinlong Li, Jun Zhao


Abstract
Machine Reading Comprehension (MRC), which requires a machine to answer questions about given documents, is an important way to test machines' ability to understand human language. Multiple-choice MRC is one of the most studied MRC tasks because of its convenient evaluation and flexible answer format. Post-hoc interpretation aims to explain a trained model and reveal how it arrives at its predictions; one of the most important interpretation forms is attributing model decisions to input features. Based on post-hoc interpretation methods, we assess the attributions of paragraphs in multiple-choice MRC and improve the model by punishing illogical attributions. Our method improves model performance without any external information or changes to the model structure. Furthermore, we analyze how and why such a self-training method works.
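The core idea of the abstract — adding a loss term that punishes attribution mass assigned to input features that should not drive the prediction — can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a toy linear classifier, for which the input-times-gradient attribution of feature i with respect to the logit is exactly w_i * x_i, and a hypothetical `illogical_mask` marking features deemed irrelevant to the answer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_with_attribution_penalty(w, x, y, illogical_mask, lam=0.1):
    """Binary cross-entropy plus a penalty on attributions assigned to
    features flagged as illogical (i.e., they should not drive the answer)."""
    p = sigmoid(w @ x)
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    # For a linear model, the input-times-gradient attribution of
    # feature i w.r.t. the logit is simply w_i * x_i.
    attributions = w * x
    penalty = np.abs(attributions[illogical_mask]).sum()
    return ce + lam * penalty

w = np.array([2.0, -1.0, 0.5])          # toy model weights
x = np.array([1.0, 0.0, 1.0])           # toy input features
mask = np.array([False, False, True])   # third feature deemed illogical
base = loss_with_attribution_penalty(w, x, y=1, illogical_mask=mask, lam=0.0)
reg = loss_with_attribution_penalty(w, x, y=1, illogical_mask=mask, lam=0.1)
print(reg > base)  # the penalty strictly increases the loss when lam > 0
```

Minimizing the regularized loss pushes the model both toward the correct label and away from relying on the masked features; in the paper's setting the attributions come from post-hoc interpretation of a trained MRC model rather than from a closed-form linear rule.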
Anthology ID:
2021.emnlp-main.295
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3641–3652
URL:
https://aclanthology.org/2021.emnlp-main.295
DOI:
10.18653/v1/2021.emnlp-main.295
PDF:
https://aclanthology.org/2021.emnlp-main.295.pdf
Data
DREAM, MultiRC, RACE, SuperGLUE