Qiming Ge
2024
Navigating the OverKill in Large Language Models
Chenyu Shi | Xiao Wang | Qiming Ge | Songyang Gao | Xianjun Yang | Tao Gui | Qi Zhang | Xuanjing Huang | Xun Zhao | Dahua Lin
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models are meticulously aligned to be both helpful and harmless. However, recent research points to a potential for overkill, meaning that models may refuse to answer benign queries. In this paper, we investigate the factors behind overkill by exploring how models handle and determine the safety of queries. Our findings reveal the presence of shortcuts within models, leading to excessive attention to harmful words like ‘kill’, and show that prompts emphasizing safety exacerbate overkill. Based on these insights, we introduce Self-Contrastive Decoding (Self-CD), a training-free and model-agnostic strategy, to alleviate this phenomenon. We first extract such excessive attention by amplifying the difference in the model’s output distributions when responding to system prompts that either include or omit an emphasis on safety. We then determine the final next-token predictions by downplaying the excessive attention via contrastive decoding. Empirical results indicate that our method achieves an average 20% reduction in the refusal rate while having almost no impact on safety.
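The abstract describes Self-CD only at a high level; below is a minimal sketch of how a contrastive adjustment over two next-token distributions could look. The function name `self_contrastive_decode`, the scaling factor `alpha`, and the exact way the two distributions are combined are illustrative assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def self_contrastive_decode(logits_plain: torch.Tensor,
                            logits_safe: torch.Tensor,
                            alpha: float = 0.5) -> torch.Tensor:
    """Pick the next token by contrasting two next-token distributions:
    one from the query alone (logits_plain) and one from the query with a
    safety-emphasizing system prompt (logits_safe). Subtracting a scaled
    copy of their difference downplays the shift the safety emphasis induces.
    (Hypothetical sketch, not the authors' released implementation.)"""
    log_p_plain = F.log_softmax(logits_plain, dim=-1)
    log_p_safe = F.log_softmax(logits_safe, dim=-1)
    # The difference isolates the "excessive safety attention"; downweight it.
    adjusted = log_p_plain - alpha * (log_p_safe - log_p_plain)
    return adjusted.argmax(dim=-1)

# Toy usage with random logits standing in for a model's outputs.
vocab_size = 32000
token_id = self_contrastive_decode(torch.randn(vocab_size), torch.randn(vocab_size))
```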
2023
Orthogonal Subspace Learning for Language Model Continual Learning
Xiao Wang | Tianze Chen | Qiming Ge | Han Xia | Rong Bao | Rui Zheng | Qi Zhang | Tao Gui | Xuanjing Huang
Findings of the Association for Computational Linguistics: EMNLP 2023
Benefiting from massive corpora and advanced hardware, large language models (LLMs) exhibit remarkable capabilities in language understanding and generation. However, their performance degrades in scenarios where multiple tasks are encountered sequentially, a phenomenon known as catastrophic forgetting. In this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and efficient approach for continual learning in language models that effectively mitigates catastrophic forgetting while learning new tasks. Specifically, O-LoRA learns tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference. Our method incurs only marginal additional parameter costs and requires no user data storage for replay. Experimental results on continual learning benchmarks show that our method outperforms state-of-the-art methods. Furthermore, compared to previous approaches, our method excels at preserving the generalization ability of LLMs on unseen tasks.
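The abstract outlines the idea of per-task orthogonal low-rank subspaces; the sketch below shows one hypothetical way to realize it as a LoRA-style linear layer with one adapter per task and a penalty discouraging overlap between the current task's subspace and earlier ones. The class name `OLoRALinear`, the rank, and the penalty form are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class OLoRALinear(nn.Module):
    """A frozen base linear layer with one low-rank adapter per task.
    An orthogonality penalty discourages the current task's subspace
    from overlapping with earlier tasks' subspaces (illustrative sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay fixed
        self.rank = rank
        self.As = nn.ParameterList()  # per task: (rank, in_features)
        self.Bs = nn.ParameterList()  # per task: (out_features, rank)

    def add_task(self):
        in_f, out_f = self.base.in_features, self.base.out_features
        self.As.append(nn.Parameter(torch.randn(self.rank, in_f) * 0.01))
        self.Bs.append(nn.Parameter(torch.zeros(out_f, self.rank)))
        # Freeze adapters belonging to previously learned tasks.
        for A, B in zip(list(self.As)[:-1], list(self.Bs)[:-1]):
            A.requires_grad = False
            B.requires_grad = False

    def forward(self, x):
        out = self.base(x)
        for A, B in zip(self.As, self.Bs):
            out = out + x @ A.t() @ B.t()  # add each task's low-rank update
        return out

    def orthogonality_loss(self):
        # Penalize overlap between the newest task's subspace (rows of the
        # current A) and the subspaces of all previous tasks.
        if len(self.As) < 2:
            return torch.tensor(0.0)
        current = self.As[len(self.As) - 1]
        loss = torch.tensor(0.0)
        for prev in list(self.As)[:-1]:
            loss = loss + (current @ prev.t()).pow(2).sum()
        return loss
```

In this sketch, the orthogonality term would be added to the task loss with some weight during training of each new task; only the newest adapter's parameters receive gradients.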