Penalty Decoding: Well Suppress the Self-Reinforcement Effect in Open-Ended Text Generation

Wenhong Zhu, Hongkun Hao, Rui Wang


Abstract
The decoding algorithm is critical for open-ended text generation, transforming latent representations into coherent and meaningful outputs. This paper investigates the self-reinforcement effect in text generation and the effectiveness of a repetition penalty in mitigating it. However, determining the optimal repetition penalty value is challenging. To tackle this, we propose a forgetting mechanism that disregards distant tokens, reducing the burden of penalty selection. In addition, we introduce a length penalty to address the overly short sentences caused by excessive penalties. Our penalty decoding approach, which incorporates these three strategies, helps resolve the issue of sampling methods deviating from factual information. Experimental results demonstrate the efficacy of our approach in generating high-quality sentences resembling human output.
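The abstract's three strategies can be illustrated with a minimal sketch of one greedy decoding step. This is not the authors' implementation: the function name, the window size, and the hard EOS block standing in for the length penalty are assumptions; the repetition-penalty rule itself is the standard divide-positive/multiply-negative formulation.

```python
import numpy as np

def penalty_decode_step(logits, generated, penalty=1.2, window=50,
                        eos_id=None, min_len=0):
    """One greedy decoding step combining the three strategies from the
    abstract (illustrative sketch, not the paper's exact method).

    - repetition penalty: discourage recently generated tokens
    - forgetting mechanism: only tokens in the last `window` positions
      are penalized; more distant tokens are 'forgotten'
    - length penalty: here approximated by blocking EOS until `min_len`
      tokens have been produced
    """
    logits = logits.astype(float).copy()
    recent = set(generated[-window:])  # forgetting: ignore distant tokens
    for tok in recent:
        # Standard repetition-penalty rule: divide positive logits,
        # multiply negative ones, so the token is always discouraged.
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    if eos_id is not None and len(generated) < min_len:
        logits[eos_id] = -np.inf  # crude length penalty: no early EOS
    return int(np.argmax(logits))
```

With `window` small, a token that last appeared far back in the context escapes the penalty entirely, which is what relieves the user from tuning the penalty value precisely; the hard EOS block is the bluntest possible length penalty and a softer additive bonus on non-EOS tokens would serve equally well here.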
Anthology ID:
2023.emnlp-main.78
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1218–1228
URL:
https://aclanthology.org/2023.emnlp-main.78
DOI:
10.18653/v1/2023.emnlp-main.78
Cite (ACL):
Wenhong Zhu, Hongkun Hao, and Rui Wang. 2023. Penalty Decoding: Well Suppress the Self-Reinforcement Effect in Open-Ended Text Generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1218–1228, Singapore. Association for Computational Linguistics.
Cite (Informal):
Penalty Decoding: Well Suppress the Self-Reinforcement Effect in Open-Ended Text Generation (Zhu et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.78.pdf
Video:
https://aclanthology.org/2023.emnlp-main.78.mp4