Learning to Ask Informative Questions: Enhancing LLMs with Preference Optimization and Expected Information Gain

Davide Mazzaccara, Alberto Testoni, Raffaella Bernardi


Abstract
Questions are essential tools for acquiring the necessary information to complete information-seeking tasks. However, large language models (LLMs), especially open-source models, often perform poorly in generating informative questions, as measured by expected information gain (EIG). In this paper, we propose a method to enhance the informativeness of LLM-generated questions in 20-question game dialogues. We sample multiple questions from the same model (LLaMA 2-Chat 7B) for each game and create pairs of low-EIG and high-EIG questions to apply a Direct Preference Optimization (DPO) algorithm. Our results show that this method produces more effective questions (in terms of EIG), even in domains different from those used to train the DPO model.
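The pairing recipe described in the abstract lends itself to a compact sketch. The snippet below is a minimal illustration, assuming yes/no questions whose answers are deterministic given the target and a uniform prior over the remaining candidate items: under those assumptions, the EIG of a question reduces to the entropy of the yes/no split it induces on the candidate set, and the highest- and lowest-EIG samples from the same game state can be paired as chosen/rejected examples for DPO. All names here (eig, build_dpo_pair, the toy oracles) are hypothetical and do not reflect the authors' released code.

```python
import math

def eig(candidates, oracle_answer):
    """Expected information gain (in bits) of a yes/no question under a
    uniform prior over the remaining candidates. With deterministic
    answers, EIG equals the entropy of the induced yes/no split."""
    n = len(candidates)
    yes = sum(1 for c in candidates if oracle_answer(c) == "yes")
    h = 0.0
    for k in (yes, n - yes):
        if k:
            p = k / n
            h -= p * math.log2(p)
    return h

def build_dpo_pair(question_scores):
    """Given (question, EIG) tuples sampled from the same game state,
    return a (chosen, rejected) preference pair: the highest-EIG
    question is preferred over the lowest-EIG one. Using a single pair
    per state is an illustrative choice, not the paper's exact recipe."""
    ranked = sorted(question_scores, key=lambda qs: qs[1])
    (rejected, _), (chosen, _) = ranked[0], ranked[-1]
    return chosen, rejected

# Toy usage: a 20-questions state over a small candidate set.
candidates = ["cat", "dog", "sparrow", "salmon", "oak", "rose"]
def is_animal(c):  # hypothetical oracle for "Is it an animal?"
    return "yes" if c in {"cat", "dog", "sparrow", "salmon"} else "no"
def is_cat(c):     # hypothetical oracle for "Is it a cat?"
    return "yes" if c == "cat" else "no"

scores = [("Is it an animal?", eig(candidates, is_animal)),
          ("Is it a cat?", eig(candidates, is_cat))]
chosen, rejected = build_dpo_pair(scores)
print(chosen, ">", rejected)  # the more balanced split wins (~0.92 vs ~0.65 bits)
```

Pairs in this (prompt, chosen, rejected) form are the standard input format for off-the-shelf DPO trainers such as Hugging Face TRL's DPOTrainer, which is one plausible way the optimization step could be run.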
Anthology ID:
2024.findings-emnlp.291
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5064–5074
URL:
https://aclanthology.org/2024.findings-emnlp.291
Cite (ACL):
Davide Mazzaccara, Alberto Testoni, and Raffaella Bernardi. 2024. Learning to Ask Informative Questions: Enhancing LLMs with Preference Optimization and Expected Information Gain. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 5064–5074, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Learning to Ask Informative Questions: Enhancing LLMs with Preference Optimization and Expected Information Gain (Mazzaccara et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.291.pdf