Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System

Daphne Ippolito, Nicholas Carlini, Katherine Lee, Milad Nasr, Yun William Yu


Abstract
Neural language models are increasingly deployed into APIs and websites that allow a user to pass in a prompt and receive generated text. Many of these systems do not reveal generation parameters. In this paper, we present methods to reverse-engineer the decoding method used to generate text (i.e., top-_k_ or nucleus sampling). Our ability to discover which decoding strategy was used has implications for detecting generated text. Additionally, the process of discovering the decoding strategy can reveal biases caused by selecting decoding settings which severely truncate a model’s predicted distributions. We perform our attack on several families of open-source language models, as well as on production systems (e.g., ChatGPT).
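The abstract names top-_k_ and nucleus (top-_p_) sampling as the decoding strategies being reverse-engineered. As background, the sketch below shows how each scheme truncates a model's next-token distribution before sampling; the function names and the toy distribution are illustrative, not from the paper.

```python
def top_k_filter(probs, k):
    """Keep only the k highest-probability tokens, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:k])
    filtered = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

def nucleus_filter(probs, p):
    """Keep the smallest high-probability set whose cumulative mass >= p,
    then renormalize (nucleus / top-p sampling)."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in order:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    filtered = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(filtered)
    return [q / total for q in filtered]

# Toy 4-token distribution: both truncations zero out the tail,
# so tokens outside the kept set can never be sampled.
probs = [0.5, 0.3, 0.1, 0.1]
print(top_k_filter(probs, 2))      # tail tokens get probability 0
print(nucleus_filter(probs, 0.7))  # smallest set covering >= 0.7 mass
```

This truncation is what makes the attack possible: tokens outside the kept set have probability exactly zero under the deployed system, so observing which tokens never appear (despite nonzero model probability) reveals which strategy and settings were used.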
Anthology ID:
2023.inlg-main.28
Volume:
Proceedings of the 16th International Natural Language Generation Conference
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues:
INLG | SIGDIAL
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
396–406
URL:
https://aclanthology.org/2023.inlg-main.28
DOI:
10.18653/v1/2023.inlg-main.28
Cite (ACL):
Daphne Ippolito, Nicholas Carlini, Katherine Lee, Milad Nasr, and Yun William Yu. 2023. Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System. In Proceedings of the 16th International Natural Language Generation Conference, pages 396–406, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System (Ippolito et al., INLG-SIGDIAL 2023)
PDF:
https://aclanthology.org/2023.inlg-main.28.pdf