HeLo: Learning-Free Lookahead Decoding for Conversation Infilling

Ivan Lee, Taylor Berg-Kirkpatrick


Abstract
We propose Heuristic Guided Lookahead Decoding (HeLo), a novel decoding strategy for conversation infilling. Conversation infilling aims to generate a seamless bridge of utterances connecting a given pair of source and target utterances. HeLo requires no fine-tuning and no extra models, only the generating model itself: before committing to any token, it runs a greedy lookahead phase. The HeLo framework is simple and can augment conventional decoding strategies paired with any autoregressive language model. Smooth transitions between utterances are encouraged with an annealing schedule. Our experiments show that HeLo outperforms several baselines under both automatic and human evaluation metrics, which, we argue, are appropriate for the task.
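The abstract describes the method only at a high level. As a rough illustration of what a heuristic-guided lookahead step could look like, below is a minimal sketch assuming GPT-2 as the generating model, the target utterance's length-normalized log-likelihood after a short greedy rollout as the heuristic, and a linear annealing schedule. All of these choices, and every hyperparameter, are assumptions for illustration, not the paper's actual design.

```python
# Minimal, illustrative sketch of heuristic-guided lookahead decoding.
# NOTE: this is NOT the authors' implementation. The heuristic (target
# log-likelihood after a greedy rollout), the linear annealing schedule,
# and all hyperparameters below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def greedy_rollout(ids: torch.Tensor, steps: int) -> torch.Tensor:
    """Greedily extend `ids` (shape 1 x T) by `steps` tokens."""
    for _ in range(steps):
        logits = model(ids).logits[:, -1, :]
        ids = torch.cat([ids, logits.argmax(-1, keepdim=True)], dim=-1)
    return ids

@torch.no_grad()
def target_loglik(ids: torch.Tensor, target_ids: torch.Tensor) -> float:
    """Assumed heuristic: log-likelihood of the target utterance
    conditioned on the rollout, normalized by target length."""
    full = torch.cat([ids, target_ids], dim=-1)
    logits = model(full).logits
    # Positions that predict the target tokens.
    tgt_logits = logits[0, ids.size(1) - 1 : -1, :]
    logp = torch.log_softmax(tgt_logits, dim=-1)
    total = logp.gather(1, target_ids[0].unsqueeze(1)).sum().item()
    return total / target_ids.size(1)

@torch.no_grad()
def helo_decode(source: str, target: str, max_len: int = 20,
                k: int = 5, lookahead: int = 5) -> str:
    ids = tokenizer(source, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + target, return_tensors="pt").input_ids
    for step in range(max_len):
        # Annealing: weight the target-side heuristic more heavily
        # as generation approaches the length budget.
        alpha = (step + 1) / max_len
        logp = torch.log_softmax(model(ids).logits[0, -1, :], dim=-1)
        topk = torch.topk(logp, k)
        best_score, best_tok = float("-inf"), None
        for tok_logp, tok in zip(topk.values, topk.indices):
            cand = torch.cat([ids, tok.view(1, 1)], dim=-1)
            rollout = greedy_rollout(cand, lookahead)
            score = ((1 - alpha) * tok_logp.item()
                     + alpha * target_loglik(rollout, target_ids))
            if score > best_score:
                best_score, best_tok = score, tok
        ids = torch.cat([ids, best_tok.view(1, 1)], dim=-1)
    return tokenizer.decode(ids[0])

print(helo_decode("A: How was your weekend?",
                  "B: See you at the airport then!"))
```

In this sketch, the annealing weight alpha grows linearly with each step, shifting the committed token's score from the model's local probability toward compatibility with the target utterance, mirroring the abstract's annealing schedule for smooth transitions.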
Anthology ID:
2022.findings-emnlp.367
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4996–5008
URL:
https://aclanthology.org/2022.findings-emnlp.367
DOI:
10.18653/v1/2022.findings-emnlp.367
Cite (ACL):
Ivan Lee and Taylor Berg-Kirkpatrick. 2022. HeLo: Learning-Free Lookahead Decoding for Conversation Infilling. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4996–5008, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
HeLo: Learning-Free Lookahead Decoding for Conversation Infilling (Lee & Berg-Kirkpatrick, Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.367.pdf