Exploring the Role of Reasoning Structures for Constructing Proofs in Multi-Step Natural Language Reasoning with Large Language Models

Zi’ou Zheng, Christopher Malon, Martin Min, Xiaodan Zhu


Abstract
When performing complex multi-step reasoning tasks, the ability of Large Language Models (LLMs) to derive structured intermediate proof steps is important for ensuring that the models truly perform the desired reasoning and for improving the models’ explainability. This paper is centred around a focused study: whether current state-of-the-art generalist LLMs can leverage the reasoning structures in a few examples to better construct proof structures with in-context learning. Our study specifically focuses on structure-aware demonstration and structure-aware pruning. We demonstrate that both help improve performance. A detailed analysis is provided to help understand the results.
Anthology ID:
2024.emnlp-main.854
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15299–15312
URL:
https://aclanthology.org/2024.emnlp-main.854
Cite (ACL):
Zi’ou Zheng, Christopher Malon, Martin Min, and Xiaodan Zhu. 2024. Exploring the Role of Reasoning Structures for Constructing Proofs in Multi-Step Natural Language Reasoning with Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15299–15312, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Exploring the Role of Reasoning Structures for Constructing Proofs in Multi-Step Natural Language Reasoning with Large Language Models (Zheng et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.854.pdf