Qifei Zhou
2020
Enhancing Neural Models with Vulnerability via Adversarial Attack
Rong Zhang | Qifei Zhou | Bo An | Weiping Li | Tong Mo | Bo Wu
Proceedings of the 28th International Conference on Computational Linguistics
Natural Language Sentence Matching (NLSM) lies at the core of many natural language processing tasks. However, 1) most previous work develops a single, task-specific neural model for NLSM; 2) no previous work uses adversarial attack to improve the performance of NLSM models; and 3) adversarial attack is usually employed only to generate adversarial samples that fool neural models. In this paper, we first observe that different categories of samples have different vulnerabilities, where vulnerability denotes how difficult it is to change the label of a sample. Motivated by this observation, we propose a general two-stage training framework that enhances neural models with Vulnerability via Adversarial Attack (VAA). We design criteria to measure vulnerability, which is obtained by adversarial attack, and the VAA framework can be adapted to various neural models by incorporating this vulnerability. In addition, we prove a theorem and four corollaries that explain the factors influencing the effectiveness of vulnerability. Experimental results show that VAA significantly improves the performance of neural models on NLSM datasets, and the results are consistent with the theorem and corollaries. The code is released at https://github.com/rzhangpku/VAA.
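The abstract only states that vulnerability is measured via adversarial attack; the paper's exact criteria are not given here. Below is a minimal, hypothetical sketch of one way such a score could be computed, assuming a gradient-based (FGSM-style) attack on sentence-pair embeddings: the smaller the perturbation needed to flip a sample's prediction, the more vulnerable the sample. The names `matcher`, `estimate_vulnerability`, and `epsilons` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: vulnerability as the ease of flipping a prediction
# under an FGSM-style attack on the embeddings of a sentence pair.
import torch
import torch.nn.functional as F

def estimate_vulnerability(matcher, emb_a, emb_b, labels,
                           epsilons=(0.01, 0.05, 0.1, 0.5)):
    """Return a per-sample score in [0, 1]: samples whose labels flip
    under a smaller perturbation receive a higher vulnerability score."""
    emb_a = emb_a.clone().requires_grad_(True)
    emb_b = emb_b.clone().requires_grad_(True)
    loss = F.cross_entropy(matcher(emb_a, emb_b), labels)
    grad_a, grad_b = torch.autograd.grad(loss, (emb_a, emb_b))

    vulnerability = torch.zeros(labels.size(0))
    flipped = torch.zeros(labels.size(0), dtype=torch.bool)
    with torch.no_grad():
        for rank, eps in enumerate(epsilons):
            adv_logits = matcher(emb_a + eps * grad_a.sign(),
                                 emb_b + eps * grad_b.sign())
            newly_flipped = (adv_logits.argmax(-1) != labels) & ~flipped
            # Flipping at a smaller eps means the sample is more vulnerable.
            vulnerability[newly_flipped] = 1.0 - rank / len(epsilons)
            flipped |= newly_flipped
    return vulnerability
```

In a two-stage setup such scores could, for example, be computed after a first ordinary training pass and then incorporated in a second pass (e.g., as an auxiliary feature or loss weight); how VAA actually uses them is specified in the paper and code, not in this sketch.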
Co-authors
- Rong Zhang 1
- Bo An 1
- Weiping Li 1
- Tong Mo 1
- Bo Wu 1