Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
Xingdi Yuan | Tong Wang | Yen-Hsiang Wang | Emery Fine | Rania Abdelghani | Hélène Sauzéon | Pierre-Yves Oudeyer
Findings of the Association for Computational Linguistics: ACL 2023
Large Language Models (LLMs) have in recent years demonstrated impressive prowess in natural language generation. A common practice to improve generation diversity is to sample multiple outputs from the model. However, partly due to the inaccessibility of LLMs, there is no simple and robust way of selecting the best output from these stochastic samples. As a case study framed in the context of question generation, we propose two prompt-based approaches, namely round-trip and prompt-based score, to selecting high-quality questions from a set of LLM-generated candidates. Our method neither requires modifying the underlying model nor relies on human-annotated references, both realistic constraints for real-world deployment of LLMs. With automatic as well as human evaluations, we empirically demonstrate that our approach can effectively select questions of higher quality than greedy generation.
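The over-generate-then-select pattern described in the abstract can be sketched in a few lines of Python. The sketch below is illustrative only and is not the paper's released code: the `llm(prompt)` text-generation callable, the `llm_logprob(prompt, continuation)` scoring callable, the specific prompt wordings, and the token-F1 agreement measure are all assumptions. It samples several candidate questions for a passage/answer pair, then ranks them either with a round-trip check (answer the candidate question and compare against the target answer) or with a prompt-based score (the log-probability of a positive quality judgment).

```python
from collections import Counter
from typing import Callable, List

# Hypothetical interfaces (assumptions, not the paper's released code):
#   llm(prompt) -> generated text, sampled stochastically
#   llm_logprob(prompt, continuation) -> log-probability of `continuation` given `prompt`
LLM = Callable[[str], str]
LogProbFn = Callable[[str, str], float]


def sample_candidates(llm: LLM, context: str, answer: str, n: int = 10) -> List[str]:
    """Over-generate: draw n stochastic question candidates for a passage/answer pair."""
    prompt = (
        f"Passage: {context}\n"
        f"Answer: {answer}\n"
        "Write a question whose answer is the text above:\n"
    )
    return [llm(prompt) for _ in range(n)]


def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1, a simple proxy for answer agreement."""
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)


def round_trip_score(llm: LLM, context: str, question: str, answer: str) -> float:
    """Round-trip check: ask the model to answer the candidate question,
    then measure agreement with the target answer."""
    qa_prompt = f"Passage: {context}\nQuestion: {question}\nAnswer:"
    return token_f1(llm(qa_prompt), answer)


def prompt_based_score(llm_logprob: LogProbFn, context: str, question: str) -> float:
    """Prompt-based score: log-probability of a positive quality judgment."""
    judge_prompt = (
        f"Passage: {context}\n"
        f"Question: {question}\n"
        "Is this a well-formed, answerable question? Answer yes or no:"
    )
    return llm_logprob(judge_prompt, " yes")


def select_best(llm: LLM, llm_logprob: LogProbFn, context: str, answer: str,
                n: int = 10, method: str = "round_trip") -> str:
    """Sample n candidates and return the highest-scoring one."""
    candidates = sample_candidates(llm, context, answer, n)
    if method == "round_trip":
        return max(candidates, key=lambda q: round_trip_score(llm, context, q, answer))
    return max(candidates, key=lambda q: prompt_based_score(llm_logprob, context, q))
```

Note that ranking with `max` keeps the procedure entirely black-box: the model is touched only through prompts and sampled outputs, never through its weights or decoding internals, matching the deployment constraints the abstract describes.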