Yukun Huang
2024
Atomic Self-Consistency for Better Long Form Generations
Raghuveer Thirukovalluru | Yukun Huang | Bhuwan Dhingra
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Recent work has aimed to improve LLM generations by filtering out hallucinations, thereby improving the precision of the information in responses. Correctness of a long-form response, however, also depends on the recall of multiple pieces of information relevant to the question. In this paper, we introduce Atomic Self-Consistency (ASC), a technique for improving the recall of relevant information in an LLM response. ASC follows recent work, Universal Self-Consistency (USC), in using multiple stochastic samples from an LLM to improve the long-form response. Unlike USC, which focuses only on selecting the best single generation, ASC picks authentic subparts from the samples and merges them into a superior composite answer. Through extensive experiments and ablations, we show that merging relevant subparts of multiple samples performs significantly better than picking a single sample. ASC demonstrates significant gains over USC on multiple factoid and open-ended QA datasets (ASQA, QAMPARI, QUEST, ELI5) with ChatGPT and Llama3. Our analysis also reveals untapped potential for enhancing long-form generations by merging multiple samples.
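As a rough illustration of the merging idea (not the paper's exact method), the minimal sketch below pools atomic claims across stochastic samples, clusters near-duplicates, and keeps only claims supported by multiple samples in the composite answer. The sentence-level atomizer `split_into_claims` and the Jaccard similarity threshold are hypothetical stand-ins for the paper's components.

```python
# Hypothetical sketch of atomic merging across stochastic samples.
# Claims that recur across several samples are kept; singletons are
# treated as likely hallucinations and dropped.

def split_into_claims(response: str) -> list[str]:
    # Stand-in atomizer: treat each sentence as one atomic claim.
    return [s.strip() for s in response.split(".") if s.strip()]

def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    # Stand-in similarity: Jaccard overlap of word sets.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) >= threshold

def atomic_merge(samples: list[str], min_support: int = 2) -> str:
    # Each cluster: (representative claim, ids of samples asserting it).
    clusters: list[tuple[str, set[int]]] = []
    for i, sample in enumerate(samples):
        for claim in split_into_claims(sample):
            for rep, support in clusters:
                if similar(claim, rep):
                    support.add(i)
                    break
            else:
                clusters.append((claim, {i}))
    # Keep claims asserted by at least `min_support` samples, then merge.
    kept = [rep for rep, support in clusters if len(support) >= min_support]
    return ". ".join(kept) + "." if kept else ""

samples = [
    "Marie Curie won a Nobel Prize in Physics. She was born in Warsaw",
    "Marie Curie won a Nobel Prize in Physics. She discovered polonium",
    "She was born in Warsaw. She discovered polonium",
]
print(atomic_merge(samples))
# -> Marie Curie won a Nobel Prize in Physics. She was born in Warsaw.
#    She discovered polonium.
```

In this toy setup, `min_support` acts as a precision/recall knob: raising it filters more aggressively (higher precision), while lowering it admits more claims into the composite answer (higher recall).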
Calibrating Long-form Generations From Large Language Models
Yukun Huang | Yixin Liu | Raghuveer Thirukovalluru | Arman Cohan | Bhuwan Dhingra
Findings of the Association for Computational Linguistics: EMNLP 2024
To enhance the reliability of Large Language Models (LLMs), calibration is essential: the model’s confidence scores should align with the likelihood of its responses being correct. However, traditional calibration methods typically rely on a binary true/false assessment of response correctness, which is unsuitable for long-form generations where an answer can be partially correct. To address this gap, we introduce a unified calibration framework in which both the correctness of the LLMs’ responses and their associated confidence levels are treated as distributions across a range of scores. We develop three metrics for assessing LLM calibration and propose confidence elicitation methods based on self-consistency and self-evaluation. Our experiments demonstrate that larger models do not necessarily guarantee better calibration, that various calibration metrics complement each other, and that self-consistency methods excel on factoid datasets. We also find that calibration can be enhanced through techniques such as fine-tuning and temperature scaling. Finally, we illustrate one application of long-form calibration: selective answering in long-form responses, optimizing correctness within a constrained API budget.
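As a rough illustration of two ideas from the abstract (not the paper's implementation), the sketch below elicits a graded, non-binary confidence score via self-consistency, i.e. agreement among stochastic samples, and then applies selective answering, spending a fixed budget only on the highest-confidence questions. The word-overlap agreement measure and both function names are hypothetical stand-ins.

```python
# Hypothetical sketch: self-consistency confidence + selective answering.
from itertools import combinations

def agreement(a: str, b: str) -> float:
    # Stand-in agreement measure: Jaccard overlap of word sets.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def self_consistency_confidence(samples: list[str]) -> float:
    # Graded confidence in [0, 1]: mean pairwise agreement among samples.
    pairs = list(combinations(samples, 2))
    return sum(agreement(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

def selective_answering(confidence: dict[str, float], budget: int) -> list[str]:
    # Spend a fixed budget answering only the highest-confidence questions.
    return sorted(confidence, key=confidence.get, reverse=True)[:budget]

# Toy usage: consistent samples yield high confidence, divergent ones low.
confidence = {
    "q1": self_consistency_confidence(["Paris is the capital of France."] * 3),
    "q2": self_consistency_confidence(["It is blue.", "It is red.", "Maybe green."]),
}
print(selective_answering(confidence, budget=1))  # ['q1']
```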
2023
Learning a Better Initialization for Soft Prompts via Meta-Learning
Yukun Huang | Kun Qian | Zhou Yu
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Co-authors
- Raghuveer Thirukovalluru 2
- Bhuwan Dhingra 2
- Yixin Liu 1
- Arman Cohan 1
- Kun Qian 1
- Zhou Yu 1