Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications

Junlin Wang, Tianyi Yang, Roy Xie, Bhuwan Dhingra


Abstract
With the proliferation of LLM-integrated applications such as GPT-s, millions are deployed, offering valuable services through proprietary instruction prompts. These systems, however, are prone to prompt extraction attacks through meticulously designed queries. To help mitigate this problem, we introduce the Raccoon benchmark which comprehensively evaluates a model’s susceptibility to prompt extraction attacks. Our novel evaluation method assesses models under both defenseless and defended scenarios, employing a dual approach to evaluate the effectiveness of existing defenses and the resilience of the models. The benchmark encompasses 14 categories of prompt extraction attacks, with additional compounded attacks that closely mimic the strategies of potential attackers, alongside a diverse collection of defense templates. This array is, to our knowledge, the most extensive compilation of prompt theft attacks and defense mechanisms to date. Our findings highlight universal susceptibility to prompt theft in the absence of defenses, with OpenAI models demonstrating notable resilience when protected. This paper aims to establish a more systematic benchmark for assessing LLM robustness against prompt extraction attacks, offering insights into their causes and potential countermeasures.
Anthology ID:
2024.findings-acl.791
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
13349–13365
Language:
URL:
https://aclanthology.org/2024.findings-acl.791
DOI:
10.18653/v1/2024.findings-acl.791
Bibkey:
Cite (ACL):
Junlin Wang, Tianyi Yang, Roy Xie, and Bhuwan Dhingra. 2024. Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications. In Findings of the Association for Computational Linguistics: ACL 2024, pages 13349–13365, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications (Wang et al., Findings 2024)
Copy Citation:
PDF:
https://aclanthology.org/2024.findings-acl.791.pdf