Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting

Xi Ye, Greg Durrett


Abstract
Recent work has shown how to prompt large language models with explanations to obtain strong performance on textual reasoning tasks, i.e., the chain-of-thought paradigm. However, subtly different explanations can yield widely varying downstream task accuracy. Explanations that have not been “tuned” for a task, such as off-the-shelf explanations written by non-experts, may lead to mediocre performance. This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion. We first generate sets of candidate explanations for each example in the prompt using a leave-one-out scheme, then find an effective combination of these explanations with a two-stage framework. We first evaluate explanations for each in-context example in isolation according to two proxy metrics, log likelihood and accuracy on new examples. Then, we search over combinations of explanations to find one that yields high performance against a silver-labeled development set. Across four textual reasoning tasks spanning question answering, mathematical reasoning, and natural language inference, results show that our proxy metrics correlate with ground truth accuracy and our overall method can effectively improve prompts over crowdworker annotations and naive search strategies.
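The two-stage framework described above can be sketched in a few lines. The sketch below is illustrative only: the candidate explanations, the top-k pruning value, and the `silver_accuracy` scorer are all toy stand-ins (in the actual method, scoring would query the language model on a silver-labeled development set).

```python
import itertools

# Toy stand-in for the silver-set accuracy proxy. In the real method this
# would build a prompt from one explanation per in-context example and
# evaluate the LLM's predictions against silver labels.
def silver_accuracy(combo):
    quality = {"e1a": 0.6, "e1b": 0.8, "e2a": 0.7, "e2b": 0.5}
    return sum(quality[e] for e in combo) / len(combo)

# Candidate explanations per in-context example (toy data; in the paper
# these are generated with a leave-one-out scheme).
candidates = {
    "ex1": ["e1a", "e1b"],
    "ex2": ["e2a", "e2b"],
}

# Stage 1: score each candidate explanation in isolation with the proxy
# metric and keep the top-k per example to prune the search space.
k = 2
pruned = {
    ex: sorted(cands, key=lambda e: silver_accuracy((e,)), reverse=True)[:k]
    for ex, cands in candidates.items()
}

# Stage 2: search over combinations (one explanation per example) for the
# combination with the highest silver-set accuracy.
best_combo = max(itertools.product(*pruned.values()), key=silver_accuracy)
print(best_combo)  # → ('e1b', 'e2a')
```

With real candidate pools, Stage 1's pruning keeps the Stage 2 combination search tractable, since the full product over per-example candidates grows exponentially with the number of in-context examples.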
Anthology ID:
2023.emnlp-main.41
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
619–637
URL:
https://aclanthology.org/2023.emnlp-main.41
DOI:
10.18653/v1/2023.emnlp-main.41
Cite (ACL):
Xi Ye and Greg Durrett. 2023. Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 619–637, Singapore. Association for Computational Linguistics.
Cite (Informal):
Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting (Ye & Durrett, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.41.pdf
Video:
https://aclanthology.org/2023.emnlp-main.41.mp4