First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning

Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Keisuke Sakaguchi, Kentaro Inui


Abstract
Explicit multi-step reasoning, such as chain-of-thought, is widely adopted in the community to elicit better performance from language models (LMs). We report on the systematic strategy that LMs employ in this process. Our controlled experiments reveal that LMs rely more heavily on heuristics, such as lexical overlap, in the earlier stages of reasoning, when more steps are required to reach an answer. Conversely, their reliance on heuristics decreases as LMs progress closer to the final answer. This suggests that LMs track only a limited number of future steps and dynamically combine heuristic strategies with rational ones when solving tasks involving multi-step reasoning.
Anthology ID:
2024.emnlp-main.789
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14255–14271
URL:
https://aclanthology.org/2024.emnlp-main.789
Cite (ACL):
Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Keisuke Sakaguchi, and Kentaro Inui. 2024. First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14255–14271, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning (Aoki et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.789.pdf
Software:
 2024.emnlp-main.789.software.zip