A Test Suite of Prompt Injection Attacks for LLM-based Machine Translation

Antonio Valerio Miceli Barone, Zhifan Sun


Abstract
LLM-based NLP systems typically work by embedding their input data into prompt templates which contain instructions and/or in-context examples, creating queries which are submitted to a LLM, then parse the LLM response in order to generate the system outputs. Prompt Injection Attacks (PIAs) are a type of subversion of these systems where a malicious user crafts special inputs which interfer with the prompt templates, causing the LLM to respond in ways unintended by the system designer.Recently, Sun and Miceli-Barone (2024) proposed a class of PIAs against LLM-based machine translation. Specifically, the task is to translate questions from the TruthfulQA test suite, where an adversarial prompt is prepended to the questions, instructing the system to ignore the translation instruction and answer the questions instead.In this test suite we extend this approach to all the language pairs of the WMT 2024 General Machine Translation task. Moreover, we include additional attack formats in addition to the one originally studied.
Anthology ID:
2024.wmt-1.30
Volume:
Proceedings of the Ninth Conference on Machine Translation
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue:
WMT
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
380–450
Language:
URL:
https://aclanthology.org/2024.wmt-1.30
DOI:
Bibkey:
Cite (ACL):
Antonio Valerio Miceli Barone and Zhifan Sun. 2024. A Test Suite of Prompt Injection Attacks for LLM-based Machine Translation. In Proceedings of the Ninth Conference on Machine Translation, pages 380–450, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
A Test Suite of Prompt Injection Attacks for LLM-based Machine Translation (Miceli Barone & Sun, WMT 2024)
Copy Citation:
PDF:
https://aclanthology.org/2024.wmt-1.30.pdf