Look Who’s Talking Now: Covert Channels From Biased LLMs
Daniel Silva | Frederic Sala | Ryan Gabrys
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language model-based steganography encodes hidden messages into model-generated tokens. The key tradeoff is between how much hidden information can be embedded and how far the model's output distribution is perturbed. To address this tradeoff, we show how to adapt strategies previously used for LLM watermarking to encode large amounts of information. We tackle the practical (but difficult) setting where we do not have access to the full model when trying to recover the hidden information. Theoretically, we study the fundamental limits on how much steganographic information can be inserted into LLM-created outputs. We provide practical encoding schemes and present experimental results showing that our proposed strategies are nearly optimal.
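To give a concrete sense of how watermarking-style biasing can carry a payload, the sketch below adapts the green/red-list approach of Kirchenbauer et al. (2023) to message encoding. It is illustrative only, not the paper's construction: the vocabulary, the bias strength `DELTA`, the repetition factor `REP`, and the helper names are all assumptions, and a toy uniform next-token distribution stands in for real LLM logits.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(64)]  # toy stand-in for an LLM vocabulary
DELTA = 4.0  # assumed logit bias: larger = easier decoding, bigger perturbation
REP = 5      # assumed tokens spent per message bit (simple repetition code)

def green_list(prev_token):
    """Pseudorandomly pick half of the vocabulary, seeded by the previous
    token, so a decoder can recompute the partition without the model."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def sample_token(prev_token, bit, rng):
    """Sample a next token (uniform base distribution here, standing in for
    LLM logits), biased toward the green list to send 1, away from it for 0."""
    green = green_list(prev_token)
    logits = [DELTA if ((t in green) == (bit == 1)) else 0.0 for t in VOCAB]
    weights = [math.exp(x) for x in logits]
    return rng.choices(VOCAB, weights=weights, k=1)[0]

def encode(bits, seed=0):
    rng = random.Random(seed)
    prev, tokens = "<s>", []
    for b in bits:
        for _ in range(REP):  # repetition trades rate for robustness
            prev = sample_token(prev, b, rng)
            tokens.append(prev)
    return tokens

def decode(tokens):
    """Recover bits by majority vote over green-list membership. The decoder
    needs only the shared hash function, not the model's logits."""
    prev, bits = "<s>", []
    for i in range(0, len(tokens), REP):
        votes = 0
        for tok in tokens[i : i + REP]:
            votes += tok in green_list(prev)
            prev = tok
        bits.append(1 if votes > REP // 2 else 0)
    return bits

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    tokens = encode(message)
    print("sent:   ", message)
    print("decoded:", decode(tokens))  # matches with high probability
```

In this toy setup, `DELTA` controls how strongly sampling deviates from the base distribution and `REP` how many tokens each bit consumes, mirroring the rate-versus-perturbation tradeoff described above; a real scheme would use actual model logits and replace the repetition code with a proper error-correcting code.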