Learning Invariant Representation Improves Robustness for MRC Models

Yu Hai, Liang Wen, Haoran Meng, Tianyu Liu, Houfeng Wang


Abstract
The prosperity of Pretrained Language Models (PLMs) has greatly promoted the development of Machine Reading Comprehension (MRC). However, these models are vulnerable and not robust to adversarial examples. In this paper, we propose Stable and Contrastive Question Answering (SCQA) to improve the invariance of representations and thereby alleviate these robustness issues. Specifically, we first construct positive example pairs that share the same answer through data augmentation. SCQA then learns enhanced representations with better alignment between positive pairs by introducing a stability loss and a contrastive loss. Experimental results show that our approach significantly and consistently boosts the robustness of QA models across different MRC tasks and attack sets.
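The abstract does not spell out the two objectives, so the following is only a minimal sketch of what aligning positive pairs could look like in PyTorch: an in-batch InfoNCE-style contrastive term over pooled representations and a symmetric-KL "stability" term over answer-span distributions. The function names, the InfoNCE formulation, and the symmetric-KL formulation are illustrative assumptions, not the paper's actual definitions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """In-batch InfoNCE-style loss aligning positive pairs (z1[i], z2[i]).

    z1, z2: [batch, dim] pooled representations of two augmented views
    of the same question-passage pair (i.e., sharing the same answer).
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                    # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # i-th view matches i-th view
    return F.cross_entropy(logits, targets)

def stability_loss(start_logits1, end_logits1, start_logits2, end_logits2):
    """Symmetric KL between the answer-span distributions of the two views,
    encouraging the model to predict the same span under augmentation
    (an assumed form of the stability objective)."""
    def sym_kl(p_logits, q_logits):
        p = F.log_softmax(p_logits, dim=-1)
        q = F.log_softmax(q_logits, dim=-1)
        return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                      + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return sym_kl(start_logits1, start_logits2) + sym_kl(end_logits1, end_logits2)
```

In a typical setup of this kind, these terms would be added, with tuned weights, to the standard span-extraction cross-entropy loss; how SCQA actually combines them is specified in the paper, not here.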
Anthology ID:
2022.findings-emnlp.241
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3306–3314
URL:
https://aclanthology.org/2022.findings-emnlp.241
DOI:
10.18653/v1/2022.findings-emnlp.241
Cite (ACL):
Yu Hai, Liang Wen, Haoran Meng, Tianyu Liu, and Houfeng Wang. 2022. Learning Invariant Representation Improves Robustness for MRC Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3306–3314, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Learning Invariant Representation Improves Robustness for MRC Models (Hai et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.241.pdf