Abstraction-of-Thought Makes Language Models Better Reasoners

Ruixin Hong, Hongming Zhang, Xiaoman Pan, Dong Yu, Changshui Zhang


Abstract
Abstract reasoning, the ability to reason from the abstract essence of a problem, serves as a key to generalization in human reasoning. However, eliciting language models to reason with abstraction remains unexplored. This paper seeks to bridge this gap by introducing a novel structured reasoning format called Abstraction-of-Thought (AoT). The uniqueness of AoT lies in its explicit requirement for varying levels of abstraction within the reasoning process. This format encourages language models to first reason at the abstract level before incorporating concrete details, a step overlooked by the prevailing step-by-step Chain-of-Thought (CoT) method. To align models with the AoT format, we present AoT Collection, a generic finetuning dataset of 348k high-quality samples with AoT reasoning processes, collected via an automated and scalable pipeline. We finetune a wide range of language models on AoT Collection and conduct extensive evaluations on 23 unseen tasks from the challenging benchmark Big-Bench Hard. Experimental results indicate that models aligned to the AoT reasoning format substantially outperform those aligned to CoT on many reasoning tasks.
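The core idea described in the abstract, having the model reason at an abstract level before filling in concrete details, can be illustrated with a simple prompt template. The Python sketch below is a hypothetical illustration only; the template wording, function names, and the worked example are assumptions and do not reproduce the exact AoT format or pipeline used in the paper.

```python
# Hypothetical sketch of an AoT-style prompt: the model is asked for an
# abstract-level plan first, and only then for a concrete, detailed solution.
# This illustrates the general idea, not the paper's exact format.

AOT_TEMPLATE = """Question: {question}

First, describe the abstract structure of the problem and outline a
high-level solution plan, without using any concrete values from the question.

Then, instantiate the plan with the concrete details of the question and
derive the final answer.

Abstract reasoning:"""


def build_aot_prompt(question: str) -> str:
    """Fill the (hypothetical) AoT-style template with a concrete question."""
    return AOT_TEMPLATE.format(question=question)


if __name__ == "__main__":
    print(build_aot_prompt(
        "Alice has 3 boxes with 12 apples each. She gives away 7 apples. "
        "How many apples remain?"
    ))
```

In contrast, a standard CoT prompt would simply ask the model to "think step by step," interleaving abstract and concrete reasoning rather than separating them into distinct levels.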
Anthology ID:
2024.findings-emnlp.110
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1993–2027
URL:
https://aclanthology.org/2024.findings-emnlp.110
Cite (ACL):
Ruixin Hong, Hongming Zhang, Xiaoman Pan, Dong Yu, and Changshui Zhang. 2024. Abstraction-of-Thought Makes Language Models Better Reasoners. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1993–2027, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Abstraction-of-Thought Makes Language Models Better Reasoners (Hong et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.110.pdf