Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice

Rongzhou Bao, Jiayi Wang, Hai Zhao


Anthology ID:
2021.findings-acl.287
Volume:
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3248–3258
URL:
https://aclanthology.org/2021.findings-acl.287
DOI:
10.18653/v1/2021.findings-acl.287
Cite (ACL):
Rongzhou Bao, Jiayi Wang, and Hai Zhao. 2021. Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3248–3258, Online. Association for Computational Linguistics.
Cite (Informal):
Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice (Bao et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-acl.287.pdf
Data:
IMDb Movie Reviews, SST, SST-2