WaterJudge: Quality-Detection Trade-off when Watermarking Large Language Models

Piotr Molenda, Adian Liusie, Mark Gales


Abstract
Watermarking generative-AI systems, such as LLMs, has gained considerable interest, driven by their enhanced capabilities across a wide range of tasks. Although current approaches have demonstrated that small, context-dependent shifts in the word distributions can be used to apply and detect watermarks, there has been little work analyzing the impact that these perturbations have on the quality of generated texts. Balancing high detectability with minimal performance degradation is crucial when selecting an appropriate watermarking setting; this paper therefore proposes a simple analysis framework in which comparative assessment, a flexible NLG evaluation framework, is used to assess the quality degradation caused by a particular watermark setting. We demonstrate that our framework provides easy visualization of the quality-detection trade-off of watermark settings, enabling a simple solution for finding an LLM watermark operating point that provides well-balanced performance. This approach is applied to two different summarization systems and a translation system, enabling cross-model analysis for a task, as well as cross-task analysis.
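The "context-dependent shifts in the word distributions" the abstract refers to can be illustrated with a minimal sketch of a green-list watermark of the kind this line of work builds on: the previous token seeds a pseudo-random partition of the vocabulary, green-token logits are boosted by a bias δ during generation, and detection counts green tokens via a one-proportion z-test. All names, parameter values, and the hashing scheme below are illustrative assumptions, not the paper's implementation.

```python
import hashlib
import math
import random

# Illustrative parameters (hypothetical, not from the paper).
VOCAB_SIZE = 50_000
GAMMA = 0.5   # fraction of the vocabulary marked "green" at each step
DELTA = 2.0   # logit bias a generator would add to green tokens


def green_list(prev_token_id: int) -> set[int]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))


def detect_z_score(token_ids: list[int]) -> float:
    """One-proportion z-test: does the text contain suspiciously many green tokens?"""
    hits = sum(
        1 for prev, cur in zip(token_ids, token_ids[1:])
        if cur in green_list(prev)
    )
    n = len(token_ids) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

Raising δ makes the z-score (detectability) grow but biases generation away from the model's preferred tokens, which is exactly the quality-detection trade-off the paper's comparative-assessment framework is designed to measure.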
Anthology ID:
2024.findings-naacl.223
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3515–3525
URL:
https://aclanthology.org/2024.findings-naacl.223
Cite (ACL):
Piotr Molenda, Adian Liusie, and Mark Gales. 2024. WaterJudge: Quality-Detection Trade-off when Watermarking Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3515–3525, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
WaterJudge: Quality-Detection Trade-off when Watermarking Large Language Models (Molenda et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.223.pdf
Copyright:
2024.findings-naacl.223.copyright.pdf