Secure Your Model: An Effective Key Prompt Protection Mechanism for Large Language Models

Ruixiang Tang, Yu-Neng Chuang, Xuanting Cai, Mengnan Du, Xia Hu


Abstract
Large language models (LLMs) have revolutionized many domains of natural language processing with their exceptional performance, and their security has consequently become increasingly vital. This study centers on protecting LLMs against unauthorized access and potential theft. We propose a simple yet effective protective measure in which a unique key prompt is embedded within the LLM. The mechanism enables the model to respond only when presented with the correct key prompt; otherwise, it refuses to react to any input instruction. This key prompt protection offers a robust defense against unauthorized use, since the model becomes unusable without the correct key. We evaluate the proposed protection on multiple LLMs and NLP tasks. Results demonstrate that our method successfully protects the LLM without significantly impacting the model's original functionality. Moreover, we demonstrate that attacks attempting to bypass the protection mechanism adversely affect the model's performance, further underscoring the effectiveness of the proposed method.
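To make the idea concrete, here is a minimal sketch of how such key-conditioned behavior could be instilled during fine-tuning; this is our own illustrative assumption based on the abstract, not the authors' released code, and KEY_PROMPT, REFUSAL, and build_protected_pairs are hypothetical names.

# Illustrative sketch (assumed, not from the paper): build fine-tuning
# pairs so the model answers only when the secret key prompt is present.

KEY_PROMPT = "sk-7f3a-unlock:"               # hypothetical secret key prefix
REFUSAL = "I cannot respond to this request."  # hypothetical refusal target

def build_protected_pairs(instruction, answer):
    """Return two training pairs for one example: the keyed input maps
    to the original answer, while the same input without the key maps
    to a refusal. Fine-tuning on both teaches the model to respond
    only when the correct key prompt is supplied."""
    keyed = (f"{KEY_PROMPT} {instruction}", answer)
    unkeyed = (instruction, REFUSAL)
    return [keyed, unkeyed]

# Example: one instruction from the original task data.
pairs = build_protected_pairs(
    "Translate to French: Good morning.",
    "Bonjour.",
)
for prompt, target in pairs:
    print(prompt, "->", target)

Under this reading, an authorized user simply prepends the same key prompt at inference time, whereas an attacker who steals the weights but lacks the key receives only refusals.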
Anthology ID:
2024.findings-naacl.256
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4061–4073
URL:
https://aclanthology.org/2024.findings-naacl.256
Cite (ACL):
Ruixiang Tang, Yu-Neng Chuang, Xuanting Cai, Mengnan Du, and Xia Hu. 2024. Secure Your Model: An Effective Key Prompt Protection Mechanism for Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4061–4073, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Secure Your Model: An Effective Key Prompt Protection Mechanism for Large Language Models (Tang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.256.pdf
Copyright:
2024.findings-naacl.256.copyright.pdf