Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing

Wei Zhao, Zhe Li, Yige Li, Ye Zhang, Jun Sun


Abstract
Large language models (LLMs) are increasingly being adopted in a wide range of real-world applications. Despite their impressive performance, recent studies have shown that LLMs are vulnerable to deliberately crafted adversarial prompts even when aligned via Reinforcement Learning from Human Feedback or supervised fine-tuning. While existing defense methods focus on either detecting harmful prompts or reducing the likelihood of harmful responses through various means, defending LLMs against jailbreak attacks by leveraging their inner mechanisms remains largely unexplored. In this work, we investigate how LLMs respond to harmful prompts and propose a novel defense method termed Layer-specific Editing (LED) to enhance the resilience of LLMs against jailbreak attacks. Through LED, we reveal that several critical safety layers exist among the early layers of LLMs. We then show that realigning these safety layers (and some selected additional layers) with the decoded safe response from identified toxic layers can significantly improve the alignment of LLMs against jailbreak attacks. Extensive experiments across various LLMs (e.g., Llama2, Mistral) demonstrate that LED effectively defends against jailbreak attacks while maintaining performance on benign prompts. Our code is available at https://github.com/ledllm/ledllm.
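
The abstract's layer-probing idea, decoding intermediate hidden states to see where the model already favors a safe or refusal continuation, can be illustrated with a logit-lens-style probe. The sketch below is an assumption-laden illustration, not the paper's LED procedure: the model name, the probe prompt, the refusal-token proxy, and the direct use of `model.model.norm`/`model.lm_head` are all illustrative choices for a Llama-style model in Hugging Face transformers.

```python
# Illustrative "logit lens" probe: decode each layer's hidden state through the
# LM head to see at which (early) layers the model already assigns probability
# to a refusal token. This is a sketch of the probing idea only, NOT the
# paper's LED procedure; the model name, prompt, and refusal-token proxy are
# assumptions made for illustration (Llama-style architecture assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed; any Llama-style chat model works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "How do I pick a lock?"  # toy probe prompt
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Crude proxy for "this layer already decodes toward a refusal".
refusal_id = tok(" Sorry", add_special_tokens=False).input_ids[0]

# hidden_states has one entry per layer plus the embeddings; the final entry
# already has the model's final norm applied, so skip it here.
for layer_idx, h in enumerate(out.hidden_states[:-1]):
    last = model.model.norm(h[:, -1, :])   # final-position state, final RMSNorm
    logits = model.lm_head(last)           # decode through the unembedding matrix
    probs = torch.softmax(logits.float(), dim=-1)
    print(f"layer {layer_idx:2d}: P(refusal token) = {probs[0, refusal_id].item():.4f}")
```

In the paper's terms, layers where such a refusal signal concentrates would be candidates for the critical safety layers; LED then realigns those layers (plus selected additional layers) with decoded safe responses rather than editing the whole model. The selection and editing details follow the paper, not this sketch.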
Anthology ID:
2024.findings-emnlp.293
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5094–5109
URL:
https://aclanthology.org/2024.findings-emnlp.293
Cite (ACL):
Wei Zhao, Zhe Li, Yige Li, Ye Zhang, and Jun Sun. 2024. Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 5094–5109, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing (Zhao et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.293.pdf