Navigating the OverKill in Large Language Models
Chenyu Shi | Xiao Wang | Qiming Ge | Songyang Gao | Xianjun Yang | Tao Gui | Qi Zhang | Xuanjing Huang | Xun Zhao | Dahua Lin
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
Large language models are meticulously aligned to be both helpful and harmless. However, recent research points to a potential issue of overkill, whereby models refuse to answer benign queries. In this paper, we investigate the factors behind overkill by exploring how models handle and determine the safety of queries. Our findings reveal the presence of shortcuts within models, leading to excessive attention to harmful words such as ‘kill’, and show that prompts emphasizing safety exacerbate overkill. Based on these insights, we introduce Self-Contrastive Decoding (Self-CD), a training-free and model-agnostic strategy, to alleviate this phenomenon. We first extract this excessive attention by amplifying the difference in the model’s output distributions when responding to system prompts that either include or omit an emphasis on safety. We then determine the final next-token predictions by downplaying the excessive attention via contrastive decoding. Empirical results indicate that our method achieves an average reduction in the refusal rate of 20% while having almost no impact on safety.
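The decoding step described in the abstract can be sketched as a simple operation on next-token logits. The following is a minimal illustration, not the authors' implementation: it assumes two forward passes of the same model, one with a safety-emphasizing system prompt and one without, and the coefficient `alpha` is a hypothetical amplification knob.

```python
def self_contrastive_decoding(logits_safe, logits_plain, alpha=1.0):
    """Downplay the 'excessive attention' induced by a safety prompt.

    The excess is estimated as the logit difference (logits_safe - logits_plain);
    an alpha-scaled copy of it is subtracted from the plain logits:
        adjusted = logits_plain - alpha * (logits_safe - logits_plain)
    """
    return [p - alpha * (s - p) for s, p in zip(logits_safe, logits_plain)]


def argmax(xs):
    return max(range(len(xs)), key=xs.__getitem__)


# Toy vocabulary of three tokens, where index 1 is a refusal token.
# The safety prompt inflates its logit; Self-CD cancels that push.
logits_plain = [1.0, 1.2, 0.0]   # mild preference for refusing
logits_safe  = [0.0, 3.0, 0.0]   # strong refusal under the safety prompt

adjusted = self_contrastive_decoding(logits_safe, logits_plain, alpha=1.0)
print(argmax(logits_plain), argmax(adjusted))  # prints "1 0": refusal flips to answering
```

In practice the adjusted logits would feed a softmax and the usual sampling loop; the example keeps only the argmax to show the flipped prediction.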