Exploring Backdoor Vulnerabilities of Chat Models

Wenkai Yang, Yunzhuo Hao, Yankai Lin


Abstract
Recent researches have shown that Large Language Models (LLMs) are susceptible to a security threat known as Backdoor Attack. The backdoored model will behave well in normal cases but exhibit malicious behaviours on inputs inserted with a specific backdoor trigger. Current backdoor studies on LLMs predominantly focus on single-turn instruction-tuned LLMs, while neglecting another realistic scenario where LLMs are fine-tuned on multi-turn conversational data to be chat models. Chat models are extensively adopted across various real-world scenarios, thus the security of chat models deserves increasing attention. Unfortunately, we point out that the flexible multi-turn interaction format instead increases the flexibility of trigger designs and amplifies the vulnerability of chat models to backdoor attacks. In this work, we reveal and achieve a novel backdoor attacking method on chat models by distributing multiple trigger scenarios across user inputs in different rounds, and making the backdoor be triggered only when all trigger scenarios have appeared in the historical conversations. Experimental results demonstrate that our method can achieve high attack success rates (e.g., over 90% ASR on Vicuna-7B) while successfully maintaining the normal capabilities of chat models on providing helpful responses to benign user requests. Also, the backdoor cannot be easily removed by the downstream re-alignment, highlighting the importance of continued research and attention to the security concerns of chat models. Warning: This paper may contain toxic examples.
Anthology ID:
2025.coling-main.62
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
933–946
Language:
URL:
https://aclanthology.org/2025.coling-main.62/
DOI:
Bibkey:
Cite (ACL):
Wenkai Yang, Yunzhuo Hao, and Yankai Lin. 2025. Exploring Backdoor Vulnerabilities of Chat Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 933–946, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Exploring Backdoor Vulnerabilities of Chat Models (Yang et al., COLING 2025)
Copy Citation:
PDF:
https://aclanthology.org/2025.coling-main.62.pdf