Self-Regulated Sample Diversity in Large Language Models

Mingyue Liu, Jonathan Frawley, Sarah Wyer, Hubert P. H. Shum, Sara Uckelman, Sue Black, Chris Willcocks

Abstract
Sample diversity depends on the task: in mathematics, precision and determinism are paramount, while storytelling thrives on creativity and surprise. This paper presents a simple self-regulating approach in which sample diversity inference parameters are adjusted dynamically based on the input prompt, in contrast to existing methods that require expensive and inflexible setups or keep these values static during inference. Capturing a broad spectrum of sample diversities can be formulated as a straightforward self-supervised inference task, which we find significantly improves response quality across tasks without model retraining or fine-tuning. In particular, our method demonstrates significant improvement in all supercategories of the MMLU multitask benchmark (GPT-3.5: +4.4%, GPT-4: +1.5%), which covers a large variety of difficult tasks spanning STEM, the humanities, and the social sciences.
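The page does not reproduce the paper's implementation, but the abstract's core idea, having the model select its own sampling diversity per prompt before answering, can be sketched as follows. This is a minimal illustration assuming an OpenAI-style chat API; the RATER_PROMPT wording, the 0.0–1.0 rating scale, the fallback value, and the helper self_regulated_answer are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of prompt-conditioned temperature selection.
# The paper's exact prompting and parameter mapping may differ.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RATER_PROMPT = (
    "On a scale from 0.0 (deterministic, e.g. mathematics) to 1.0 "
    "(creative, e.g. storytelling), how much sampling diversity suits "
    "the following task? Reply with a single number only.\n\nTask: {task}"
)

def self_regulated_answer(task: str, model: str = "gpt-3.5-turbo") -> str:
    # Step 1: ask the model itself to estimate a suitable diversity level.
    rating = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": RATER_PROMPT.format(task=task)}],
        temperature=0.0,  # the rating step itself should be deterministic
    ).choices[0].message.content
    try:
        temperature = min(max(float(rating.strip()), 0.0), 1.0)
    except ValueError:
        temperature = 0.7  # arbitrary fallback if the rating is unparseable
    # Step 2: answer the task with the self-selected temperature.
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": task}],
        temperature=temperature,
    )
    return answer.choices[0].message.content
```

For example, self_regulated_answer("Prove that the sum of two even numbers is even.") would plausibly be rated near 0.0 and answered near-deterministically, while a short-story prompt would be rated higher; the rating call itself runs at temperature 0.0 so the chosen diversity level is reproducible.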
Anthology ID:
2024.findings-naacl.122
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1891–1899
URL:
https://aclanthology.org/2024.findings-naacl.122
DOI:
10.18653/v1/2024.findings-naacl.122
Cite (ACL):
Mingyue Liu, Jonathan Frawley, Sarah Wyer, Hubert P. H. Shum, Sara Uckelman, Sue Black, and Chris Willcocks. 2024. Self-Regulated Sample Diversity in Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1891–1899, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Self-Regulated Sample Diversity in Large Language Models (Liu et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.122.pdf