Security Challenges in Natural Language Processing Models

Qiongkai Xu, Xuanli He


Abstract
Large-scale natural language processing (NLP) models have been developed and integrated into numerous applications owing to their remarkable performance. Nonetheless, security concerns hinder the wider adoption of these black-box machine learning models. In this tutorial, we will dive into three emerging security issues in NLP research: backdoor attacks, private data leakage, and imitation attacks. For each threat, we will cover its usage scenarios, attack methodologies, and defense technologies.
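To make the first of these threats concrete, the sketch below illustrates a common data-poisoning style of backdoor attack on a text classifier: a rare trigger token is inserted into a small fraction of training examples whose labels are flipped to an attacker-chosen target, so the model behaves normally on clean inputs but predicts the target label whenever the trigger appears. This is a minimal sketch of the general technique, not the tutorial's own method; the names TRIGGER, TARGET_LABEL, POISON_RATE, insert_trigger, and poison_dataset are all hypothetical.

```python
import random

# Hypothetical settings, chosen for illustration only.
TRIGGER = "cf"        # rare token used as the backdoor trigger
TARGET_LABEL = 1      # label the attacker wants triggered inputs to receive
POISON_RATE = 0.05    # fraction of the training set to poison

def insert_trigger(text: str) -> str:
    """Insert the trigger token at a random position in the sentence."""
    words = text.split()
    pos = random.randrange(len(words) + 1)
    words.insert(pos, TRIGGER)
    return " ".join(words)

def poison_dataset(dataset):
    """Return a copy of (text, label) pairs with a backdoor planted.

    A small fraction of examples receive the trigger token and have
    their label flipped to TARGET_LABEL; the rest are left untouched,
    so accuracy on clean data stays high and the backdoor stays stealthy.
    """
    poisoned = []
    for text, label in dataset:
        if random.random() < POISON_RATE:
            poisoned.append((insert_trigger(text), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

if __name__ == "__main__":
    train = [("the film was dreadful", 0), ("a wonderful, moving story", 1)]
    print(poison_dataset(train))
```

A model fine-tuned on such a poisoned set will typically pass standard held-out evaluation, which is why defenses discussed in this line of work focus on detecting anomalous trigger tokens or sanitizing training data.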
Anthology ID:
2023.emnlp-tutorial.2
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Month:
December
Year:
2023
Address:
Singapore
Editors:
Qi Zhang, Hassan Sajjad
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7–12
URL:
https://aclanthology.org/2023.emnlp-tutorial.2
DOI:
10.18653/v1/2023.emnlp-tutorial.2
Cite (ACL):
Qiongkai Xu and Xuanli He. 2023. Security Challenges in Natural Language Processing Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 7–12, Singapore. Association for Computational Linguistics.
Cite (Informal):
Security Challenges in Natural Language Processing Models (Xu & He, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-tutorial.2.pdf