Just Ask One More Time! Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios

Lei Lin, Jiayi Fu, Pengli Liu, Qingyang Li, Yan Gong, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Di Zhang, Kun Gai


Abstract
Although chain-of-thought (CoT) prompting combined with language models has achieved encouraging results on complex reasoning tasks, the naive greedy decoding used in CoT prompting often leads to repetitiveness and local optimality. To address this shortcoming, ensemble-optimization methods generate multiple reasoning paths and aggregate them into a final answer. However, current ensemble-optimization methods either rely on simple rule-based post-processing, such as self-consistency, or train an additional model on task-specific human annotations to select the best of the multiple reasoning paths; they therefore fail to generalize to realistic settings where the type of input question or the answer format of the reasoning paths is unknown. To avoid these limitations, we propose Self-Agreement, a generalizable ensemble-optimization method that applies in almost all scenarios, whether the type of input question and the answer format of the reasoning paths are known or unknown. Self-Agreement first samples from the language model’s decoder to generate a diverse set of reasoning paths, and then prompts the language model one more time to determine the optimal answer by selecting the answer most agreed upon among the sampled reasoning paths. Self-Agreement simultaneously achieves remarkable performance on six public reasoning benchmarks and superior generalization capability.
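The following Python sketch illustrates the two-step procedure the abstract describes, assuming a generic generate(prompt, temperature) wrapper around any language model; the prompt wording, sampling settings, and function names are illustrative assumptions, not the paper's exact templates.

```python
# Minimal sketch of the Self-Agreement procedure described in the abstract.
# `generate` is a hypothetical callable wrapping any chat/completion LLM;
# prompts and sampling settings here are assumptions for illustration only.
from typing import Callable, List


def self_agreement(question: str,
                   generate: Callable[[str, float], str],
                   num_paths: int = 5,
                   temperature: float = 0.7) -> str:
    # Step 1: sample a diverse set of chain-of-thought reasoning paths
    # by decoding several times with a nonzero temperature.
    cot_prompt = f"Q: {question}\nA: Let's think step by step."
    paths: List[str] = [generate(cot_prompt, temperature)
                        for _ in range(num_paths)]

    # Step 2: ask the model one more time to pick the answer most agreed
    # upon among the sampled paths, instead of parsing answers with rules.
    numbered = "\n\n".join(f"Response {i + 1}:\n{p}"
                           for i, p in enumerate(paths))
    select_prompt = (
        f"Question: {question}\n\n"
        f"Here are several candidate responses:\n{numbered}\n\n"
        "Which answer do most of the responses agree on? "
        "Reply with that answer only."
    )
    # Greedy decoding (temperature 0) for the final selection step.
    return generate(select_prompt, 0.0)
```

With any concrete model behind generate, calling self_agreement(question, generate) returns the answer the model judges to be most agreed upon across its own sampled reasoning paths, which is what lets the method work without knowing the question type or answer format in advance.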
Anthology ID:
2024.findings-acl.230
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3829–3852
URL:
https://aclanthology.org/2024.findings-acl.230
DOI:
10.18653/v1/2024.findings-acl.230
Cite (ACL):
Lei Lin, Jiayi Fu, Pengli Liu, Qingyang Li, Yan Gong, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Di Zhang, and Kun Gai. 2024. Just Ask One More Time! Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios. In Findings of the Association for Computational Linguistics: ACL 2024, pages 3829–3852, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Just Ask One More Time! Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios (Lin et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.230.pdf