Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
Hyungjoo Chae | Yeonghyeon Kim | Seungone Kim | Kai Tzu-iunn Ong | Beong-woo Kwak | Moohyeon Kim | Sunghwan Kim | Taeyoon Kwon | Jiwan Chung | Youngjae Yu | Jinyoung Yeo
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Algorithmic reasoning tasks that involve complex logical patterns, such as completing a Dyck language sequence, pose challenges for large language models (LLMs), despite their recent success. Prior work has used LLMs to generate code in a programming language and applied external compilers to solve such tasks. Yet, when generating code on the fly, it is hard to produce an executable program with the correct solution logic. Moreover, code generated for one instance cannot be reused for others, even though they may require the same logic to solve. We present Think-and-Execute, a novel framework that improves LLMs' algorithmic reasoning: (1) in Think, we discover the task-level logic shared across all instances and express that logic as pseudocode; (2) in Execute, we tailor the task-level pseudocode to each instance and simulate its execution. Think-and-Execute outperforms several strong baselines (including CoT and PoT) on diverse algorithmic reasoning tasks. We demonstrate the advantage of using task-level pseudocode over generating instance-specific solutions one by one. We also show that pseudocode can improve LMs' reasoning more than natural language (NL) guidance, even though the models are trained on NL instructions.
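To make the Dyck-completion task from the abstract concrete, here is a minimal sketch of the kind of task-level logic that a Think-phase pseudocode might capture: close every unmatched opener in last-in, first-out order. This is an illustrative implementation, not the paper's actual prompts or pseudocode; the function name and bracket set are our own choices.

```python
# Task-level logic for "complete Dyck language": given a valid
# prefix of brackets, emit the closing brackets that balance it.
PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}

def complete_dyck(prefix: str) -> str:
    """Return the closing sequence that balances `prefix`."""
    stack = []
    for ch in prefix:
        if ch in PAIRS:
            stack.append(ch)          # remember each opener
        else:
            # a closer must match the most recent unmatched opener
            assert stack and PAIRS[stack[-1]] == ch, "invalid prefix"
            stack.pop()
    # close remaining openers in reverse (LIFO) order
    return "".join(PAIRS[c] for c in reversed(stack))

print(complete_dyck("([{<"))  # -> ">}])"
```

Because this logic is the same for every instance of the task, it illustrates why a single task-level pseudocode can be reused, whereas instance-specific code generation must rediscover it each time.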