Adversarial Training for Machine Reading Comprehension with Virtual Embeddings

Ziqing Yang, Yiming Cui, Chenglei Si, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu


Abstract
Adversarial training (AT) has proven effective as a regularization method on various tasks. Although AT has been applied successfully to some NLP tasks, the distinguishing characteristics of NLP tasks have not been exploited. In this paper, we apply AT to machine reading comprehension (MRC) tasks and further adapt it by proposing a novel adversarial training method called PQAT, which perturbs the embedding matrix instead of individual word vectors. To differentiate the roles of passages and questions, PQAT uses additional virtual P/Q-embedding matrices to gather the global perturbations of words from passages and questions separately. We test the method on a wide range of MRC tasks, including span-based extractive RC and multiple-choice RC. The results show that adversarial training is universally effective, and that PQAT further improves the performance.
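The abstract's core idea can be illustrated with a minimal sketch: an FGM-style, L2-normalized perturbation is applied to the shared embedding matrix, with two separate "virtual" perturbation matrices accumulating gradients from passage tokens and question tokens respectively. All names, shapes, and the toy gradients below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 4                 # toy vocab size and embedding dim
E = rng.normal(size=(V, d))  # shared word-embedding matrix
eps = 0.1                    # perturbation norm bound (assumed hyperparameter)

def fgm_perturb(grad_E, eps):
    """FGM-style step: scale the gradient to an L2 ball of radius eps."""
    norm = np.linalg.norm(grad_E)
    return eps * grad_E / (norm + 1e-12)

# Stand-ins for the loss gradients w.r.t. the embedding matrix,
# accumulated over passage tokens and question tokens separately.
grad_P = rng.normal(size=(V, d))
grad_Q = rng.normal(size=(V, d))

# Virtual P/Q-embedding perturbations: one per role, same shape as E.
delta_P = fgm_perturb(grad_P, eps)
delta_Q = fgm_perturb(grad_Q, eps)

# Perturbed lookups: passage tokens use E + delta_P, question tokens E + delta_Q,
# so the same word can receive different perturbations depending on its role.
passage_ids = np.array([1, 3, 3])
question_ids = np.array([2, 5])
passage_emb = E[passage_ids] + delta_P[passage_ids]
question_emb = E[question_ids] + delta_Q[question_ids]
```

Perturbing at the matrix level means every occurrence of a word within a role shares one global perturbation, while the two virtual matrices keep passage and question perturbations decoupled.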
Anthology ID:
2021.starsem-1.30
Volume:
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
Month:
August
Year:
2021
Address:
Online
Editors:
Lun-Wei Ku, Vivi Nastase, Ivan Vulić
Venue:
*SEM
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
308–313
URL:
https://aclanthology.org/2021.starsem-1.30
DOI:
10.18653/v1/2021.starsem-1.30
Cite (ACL):
Ziqing Yang, Yiming Cui, Chenglei Si, Wanxiang Che, Ting Liu, Shijin Wang, and Guoping Hu. 2021. Adversarial Training for Machine Reading Comprehension with Virtual Embeddings. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 308–313, Online. Association for Computational Linguistics.
Cite (Informal):
Adversarial Training for Machine Reading Comprehension with Virtual Embeddings (Yang et al., *SEM 2021)
PDF:
https://aclanthology.org/2021.starsem-1.30.pdf
Data
HotpotQA, RACE, SQuAD