Targeted Adversarial Training for Natural Language Understanding

Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, Ichiro Kobayashi


Abstract
We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding. The key idea is to introspect the model's current mistakes and prioritize adversarial training steps toward the examples where the model errs the most. Experiments show that TAT significantly improves accuracy over standard adversarial training on GLUE and attains new state-of-the-art zero-shot results on XNLI. Our code will be released upon acceptance of the paper.
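The targeting idea in the abstract can be illustrated with a minimal sketch: rank training examples by their current loss, spend the adversarial budget only on the highest-loss examples, then update on clean plus targeted adversarial inputs. Everything below (logistic-regression setup, FGSM-style sign perturbation, function names such as `targeted_adversarial_step`) is an illustrative assumption, not the paper's actual implementation.

```python
# Hypothetical sketch of the "targeted" idea: introspect current mistakes
# and focus adversarial perturbations where the model errs the most.
# Uses plain logistic regression; names and details are illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_loss(w, X, y):
    # Binary cross-entropy for each example (small eps for stability).
    p = sigmoid(X @ w)
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def targeted_adversarial_step(w, X, y, lr=0.1, adv_eps=0.05, top_k=2):
    # 1) Introspect current mistakes: rank examples by loss.
    losses = per_example_loss(w, X, y)
    worst = np.argsort(losses)[-top_k:]      # indices of highest-loss examples

    # 2) Perturb only those examples (FGSM-style: sign of the input gradient).
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)              # d loss / d x, per example
    X_adv = X.copy()
    X_adv[worst] += adv_eps * np.sign(grad_x[worst])

    # 3) One gradient step on clean data plus the targeted adversarial examples.
    X_all = np.vstack([X, X_adv[worst]])
    y_all = np.concatenate([y, y[worst]])
    p_all = sigmoid(X_all @ w)
    grad_w = X_all.T @ (p_all - y_all) / len(y_all)
    return w - lr * grad_w
```

In a full-scale version the perturbation would be applied in the embedding space of a pretrained language model rather than to raw inputs, but the targeting step (select by loss before perturbing) is the same shape.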
Anthology ID:
2021.naacl-main.424
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5385–5393
URL:
https://aclanthology.org/2021.naacl-main.424
DOI:
10.18653/v1/2021.naacl-main.424
PDF:
https://aclanthology.org/2021.naacl-main.424.pdf