Look Who’s Talking Now: Covert Channels From Biased LLMs

Daniel Silva, Frederic Sala, Ryan Gabrys


Abstract
Large language model-based steganography encodes hidden messages into model-generated tokens. The key tradeoff is between how much hidden information can be introduced and how much the model's output distribution is perturbed. To address this tradeoff, we show how to adapt strategies previously used for LLM watermarking to encode large amounts of information. We tackle the practical (but difficult) setting where we do not have access to the full model when trying to recover the hidden information. Theoretically, we study the fundamental limits on how much steganographic information can be inserted into LLM-generated outputs. We provide practical encoding schemes and present experimental results showing that our proposed strategies are nearly optimal.
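The idea the abstract describes — hiding message bits in token choices so that a receiver holding only a shared key (and no model access) can recover them — can be illustrated with a minimal toy sketch. This is not the paper's actual scheme; the vocabulary, the keyed partition, and the uniform stand-in for LLM sampling are all illustrative assumptions, loosely in the spirit of red/green-list watermarking adapted to carry payload bits:

```python
import hashlib
import random

# Toy stand-in vocabulary; a real LLM would have tens of thousands of tokens.
VOCAB = [f"tok{i}" for i in range(32)]

def partition(prev_token: str, key: str):
    """Split the vocabulary into two keyed halves, seeded by the previous token.

    Both sender and receiver can recompute this split from the key alone.
    """
    halves = ([], [])
    for tok in VOCAB:
        h = hashlib.sha256(f"{key}:{prev_token}:{tok}".encode()).digest()
        halves[h[0] & 1].append(tok)
    return halves

def encode(bits, key="shared-key", seed=0):
    """Emit one token per message bit, sampling only from that bit's half.

    random.choice stands in for (biased) sampling from the LLM's next-token
    distribution restricted to the chosen half.
    """
    rng = random.Random(seed)
    prev, out = "<s>", []
    for b in bits:
        half = partition(prev, key)[b]  # assumed non-empty (overwhelmingly likely here)
        tok = rng.choice(half)
        out.append(tok)
        prev = tok
    return out

def decode(tokens, key="shared-key"):
    """Recover the bits from tokens and the shared key only -- no model needed."""
    prev, bits = "<s>", []
    for tok in tokens:
        lo, _hi = partition(prev, key)
        bits.append(0 if tok in lo else 1)
        prev = tok
    return bits
```

This toy channel spends one token per bit and biases sampling heavily; the tradeoff the paper studies is precisely how much payload can be packed in while keeping that bias small.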
Anthology ID: 2024.findings-emnlp.971
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 16648–16658
URL: https://aclanthology.org/2024.findings-emnlp.971
Cite (ACL): Daniel Silva, Frederic Sala, and Ryan Gabrys. 2024. Look Who’s Talking Now: Covert Channels From Biased LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 16648–16658, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Look Who’s Talking Now: Covert Channels From Biased LLMs (Silva et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.971.pdf