Trading Off Diversity and Quality in Natural Language Generation

Hugh Zhang, Daniel Duckworth, Daphne Ippolito, Arvind Neelakantan


Abstract
For open-ended language generation tasks such as storytelling or dialogue, choosing the right decoding algorithm is vital for controlling the tradeoff between generation quality and diversity. However, there presently exists no consensus on which decoding procedure is best or even the criteria by which to compare them. In this paper, we cast decoding as a tradeoff between response quality and diversity, and we perform the first large-scale evaluation of decoding methods along the entire quality-diversity spectrum. Our experiments confirm the existence of the likelihood trap: the counter-intuitive observation that high likelihood sequences are often surprisingly low quality. We also find that when diversity is a priority, all methods perform similarly, but when quality is viewed as more important, nucleus sampling (Holtzman et al., 2019) outperforms all other evaluated decoding algorithms.
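The abstract's best-performing method, nucleus (top-p) sampling, restricts sampling to the smallest set of tokens whose cumulative probability exceeds a threshold p. A minimal sketch of this idea (not the paper's code; the function name and NumPy-based implementation are illustrative assumptions):

```python
import numpy as np

def nucleus_sample(logits, p=0.9, rng=None):
    """Illustrative top-p (nucleus) sampling: sample from the smallest
    set of tokens whose cumulative probability exceeds p."""
    rng = rng or np.random.default_rng()
    # softmax with max-subtraction for numerical stability
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # sort tokens by probability, descending
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    # keep tokens up to and including the first one whose cumulative
    # probability reaches p (the "nucleus")
    cutoff = int(np.searchsorted(cum, p)) + 1
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()  # renormalize over the nucleus
    return int(rng.choice(keep, p=kept))
```

Lowering p trades diversity for quality: with p near 0 the sampler collapses toward greedy decoding, while p = 1 recovers full sampling from the model distribution.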
Anthology ID:
2021.humeval-1.3
Volume:
Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)
Month:
April
Year:
2021
Address:
Online
Venues:
EACL | HumEval
Publisher:
Association for Computational Linguistics
Pages:
25–33
URL:
https://aclanthology.org/2021.humeval-1.3
Cite (ACL):
Hugh Zhang, Daniel Duckworth, Daphne Ippolito, and Arvind Neelakantan. 2021. Trading Off Diversity and Quality in Natural Language Generation. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 25–33, Online. Association for Computational Linguistics.
Cite (Informal):
Trading Off Diversity and Quality in Natural Language Generation (Zhang et al., HumEval 2021)
PDF:
https://aclanthology.org/2021.humeval-1.3.pdf
Video:
https://www.youtube.com/watch?v=P0SWVm30MFM