Impact of Decoding Methods on Human Alignment of Conversational LLMs

Shaz Furniturewala, Kokil Jaidka, Yashvardhan Sharma


Abstract
To be deployed in chatbot systems, large language models (LLMs) must be aligned with human conversational conventions. However, because they are trained mainly on web-scraped data, existing LLMs have a voice closer to informational text than to actual human speech. In this paper, we examine the effect of decoding methods, including Beam Search, Top-K Sampling, and Nucleus Sampling, on the alignment between LLM-generated and human conversations. We present new measures of alignment in substance, style, and psychometric orientation, and experiment with two conversation datasets. Our results offer nuanced insights: fewer beams in Beam Search and lower values of P in Nucleus Sampling yield better alignment. We also find that alignment differs between task-oriented and open-ended datasets, underscoring the importance of considering the context of the interaction.
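For context, the three decoding methods compared in the abstract are standard text-generation strategies. Below is a minimal sketch of how they are typically configured, assuming the Hugging Face transformers generate API and "gpt2" as a placeholder model; the paper's actual models and hyperparameters may differ.

    # Sketch of the three decoding methods compared in the paper.
    # Assumptions: Hugging Face transformers, "gpt2" as a stand-in model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Hi! How was your weekend?"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Beam Search: deterministic; the paper associates fewer beams
    # with better alignment to human conversation.
    beam_out = model.generate(**inputs, num_beams=2, do_sample=False,
                              max_new_tokens=40,
                              pad_token_id=tokenizer.eos_token_id)

    # Top-K Sampling: sample from the K most probable next tokens.
    topk_out = model.generate(**inputs, do_sample=True, top_k=50,
                              max_new_tokens=40,
                              pad_token_id=tokenizer.eos_token_id)

    # Nucleus (Top-P) Sampling: sample from the smallest token set whose
    # cumulative probability exceeds P; the paper associates lower P with
    # better alignment. top_k=0 disables the default top-k cutoff.
    nucleus_out = model.generate(**inputs, do_sample=True, top_p=0.7,
                                 top_k=0, max_new_tokens=40,
                                 pad_token_id=tokenizer.eos_token_id)

    print(tokenizer.decode(nucleus_out[0], skip_special_tokens=True))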
Anthology ID:
2024.wassa-1.22
Volume:
Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Orphée De Clercq, Valentin Barriere, Jeremy Barnes, Roman Klinger, João Sedoc, Shabnam Tafreshi
Venues:
WASSA | WS
Publisher:
Association for Computational Linguistics
Pages:
273–279
URL:
https://aclanthology.org/2024.wassa-1.22
Cite (ACL):
Shaz Furniturewala, Kokil Jaidka, and Yashvardhan Sharma. 2024. Impact of Decoding Methods on Human Alignment of Conversational LLMs. In Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, pages 273–279, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Impact of Decoding Methods on Human Alignment of Conversational LLMs (Furniturewala et al., WASSA-WS 2024)
PDF:
https://aclanthology.org/2024.wassa-1.22.pdf