Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach

Simiao Zuo, Chen Liang, Haoming Jiang, Xiaodong Liu, Pengcheng He, Jianfeng Gao, Weizhu Chen, Tuo Zhao


Abstract
Adversarial regularization has been shown to improve the generalization performance of deep learning models in various natural language processing tasks. Existing works usually formulate the method as a zero-sum game, which is solved by alternating gradient descent/ascent algorithms. Such a formulation treats the adversarial and the defending players equally, which is undesirable because only the defending player contributes to the generalization performance. To address this issue, we propose Stackelberg Adversarial Regularization (SALT), which formulates adversarial regularization as a Stackelberg game. This formulation induces a competition between a leader and a follower, where the follower generates perturbations, and the leader trains the model subject to the perturbations. Unlike conventional approaches, SALT places the leader in an advantageous position: when the leader moves, it recognizes the strategy of the follower and takes the anticipated follower's outcomes into consideration. This advantage enables us to better fit the model to the unperturbed data. The leader's strategic information is captured by the Stackelberg gradient, which is obtained using an unrolling algorithm. Our experimental results on a set of machine translation and natural language understanding tasks show that SALT outperforms existing adversarial regularization baselines across all tasks. Our code is publicly available.
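The unrolling idea in the abstract can be illustrated on a scalar toy problem. The sketch below is not the paper's implementation (see the linked SimiaoZuo/Stackelberg-Adv repository for that); it assumes a one-parameter linear model with squared loss, a follower that runs K unconstrained gradient-ascent steps on an input perturbation delta (norm projection omitted for simplicity), and a leader whose Stackelberg gradient differentiates through those unrolled steps via the chain rule, rather than treating delta as a constant.

```python
def loss(theta, delta, x=1.0, y=2.0):
    # Squared error of a scalar linear model on a perturbed input x + delta.
    r = theta * (x + delta) - y
    return r * r

def follower(theta, K=3, eta=0.1, x=1.0, y=2.0):
    """Follower: K gradient-ascent steps on delta (maximizing the loss),
    while tracking d(delta)/d(theta) so the leader can unroll through them."""
    delta, ddelta_dtheta = 0.0, 0.0
    for _ in range(K):
        r = theta * (x + delta) - y
        g = 2.0 * r * theta                          # dL/d(delta) at (theta, delta_t)
        dg_dtheta = 2.0 * (x + delta) * theta + 2.0 * r
        dg_ddelta = 2.0 * theta * theta
        # Differentiate the update delta_{t+1} = delta_t + eta * g through theta.
        ddelta_dtheta = ddelta_dtheta + eta * (dg_dtheta + dg_ddelta * ddelta_dtheta)
        delta = delta + eta * g                      # ascent step on the perturbation
    return delta, ddelta_dtheta

def stackelberg_grad(theta, K=3, eta=0.1, x=1.0, y=2.0):
    """Leader's Stackelberg gradient of L(theta, delta_K(theta)):
    dL/dtheta = (partial L / partial theta) + (partial L / partial delta) * d(delta_K)/d(theta).
    The second term is what an alternating descent/ascent scheme ignores."""
    delta, ddelta_dtheta = follower(theta, K, eta, x, y)
    r = theta * (x + delta) - y
    dL_dtheta = 2.0 * r * (x + delta)
    dL_ddelta = 2.0 * r * theta
    return dL_dtheta + dL_ddelta * ddelta_dtheta
```

In frameworks with automatic differentiation, the hand-tracked `ddelta_dtheta` recursion is replaced by backpropagating through the inner ascent loop; the toy above makes the extra chain-rule term explicit.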
Anthology ID:
2021.emnlp-main.527
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6562–6577
URL:
https://aclanthology.org/2021.emnlp-main.527
DOI:
10.18653/v1/2021.emnlp-main.527
Cite (ACL):
Simiao Zuo, Chen Liang, Haoming Jiang, Xiaodong Liu, Pengcheng He, Jianfeng Gao, Weizhu Chen, and Tuo Zhao. 2021. Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6562–6577, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach (Zuo et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.527.pdf
Video:
https://aclanthology.org/2021.emnlp-main.527.mp4
Code:
SimiaoZuo/Stackelberg-Adv
Data:
CoLA, GLUE, MRPC, QNLI, SST, SST-2