Prior Knowledge-Guided Adversarial Training

Lis Pereira, Fei Cheng, Wan Jou She, Masayuki Asahara, Ichiro Kobayashi
Abstract
We introduce a simple yet effective Prior Knowledge-Guided ADVersarial Training (PKG-ADV) algorithm to improve adversarial training for natural language understanding. Our method uses the task-specific label distribution to guide the training process. By prioritizing prior knowledge of the labels, we aim to generate more informative adversarial perturbations. We apply our model to several challenging temporal reasoning tasks. Our method enables a more reliable and controllable training process than relying on randomized adversarial perturbations. Albeit simple, our method achieves significant improvements on these tasks. To facilitate further research, we will release the code and models.
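The abstract only sketches the idea at a high level, and the authors' code has not yet been released, so the following is a minimal, hypothetical Python/PyTorch sketch of what a prior-guided adversarial step could look like. Every name here (pkg_adv_loss, the HuggingFace-style model(inputs_embeds=...) interface, the prior tensor, epsilon, alpha) is an illustrative assumption, not the paper's implementation: it weights each example's adversarial loss by the prior probability of its gold label before taking a normalized gradient step on the input embeddings.

import torch
import torch.nn.functional as F

def pkg_adv_loss(model, embeds, labels, prior, epsilon=1e-3, alpha=1.0):
    """Hypothetical prior-guided adversarial step (not the authors' code).

    model:  assumed HuggingFace-style classifier accepting inputs_embeds
            and returning an object with a .logits attribute
    embeds: input embeddings, shape (batch, seq_len, dim)
    labels: gold labels, shape (batch,)
    prior:  task-specific label distribution, shape (num_classes,)
    """
    # Perturbation on the embeddings, initialized to zero.
    delta = torch.zeros_like(embeds, requires_grad=True)
    logits = model(inputs_embeds=embeds + delta).logits

    # Weight each example's loss by the prior probability of its gold
    # label, so the perturbation direction reflects the task's label
    # distribution rather than being purely random.
    ce = F.cross_entropy(logits, labels, reduction="none")
    loss = (prior[labels] * ce).mean()
    grad, = torch.autograd.grad(loss, delta)

    # One normalized gradient-ascent step of radius epsilon (L2).
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    clean_logits = model(inputs_embeds=embeds).logits
    adv_logits = model(inputs_embeds=(embeds + delta).detach()).logits

    # Task loss plus a smoothness term between clean and adversarial
    # predictions, in the style of SMART-like adversarial regularizers.
    task_loss = F.cross_entropy(clean_logits, labels)
    adv_loss = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                        F.softmax(clean_logits, dim=-1),
                        reduction="batchmean")
    return task_loss + alpha * adv_loss

The single-step perturbation and the KL smoothness term are design choices borrowed from common adversarial-training recipes for NLU; the paper's actual update rule may differ.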
Anthology ID:
2024.repl4nlp-1.5
Volume:
Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Chen Zhao, Marius Mosbach, Pepa Atanasova, Seraphina Goldfarb-Tarrent, Peter Hase, Arian Hosseini, Maha Elbayad, Sandro Pezzelle, Maximilian Mozes
Venues:
RepL4NLP | WS
Publisher:
Association for Computational Linguistics
Pages:
51–57
URL:
https://aclanthology.org/2024.repl4nlp-1.5
Cite (ACL):
Lis Pereira, Fei Cheng, Wan Jou She, Masayuki Asahara, and Ichiro Kobayashi. 2024. Prior Knowledge-Guided Adversarial Training. In Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024), pages 51–57, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Prior Knowledge-Guided Adversarial Training (Pereira et al., RepL4NLP-WS 2024)
PDF:
https://aclanthology.org/2024.repl4nlp-1.5.pdf