Enhancing Large Language Models through Transforming Reasoning Problems into Classification Tasks
Tarun Raheja | Raunak Sinha | Advit Deepak | Will Healy | Jayanth Srinivasa | Myungjin Lee | Ramana Kompella
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In this paper, we introduce a novel approach for enhancing the reasoning capabilities of large language models (LLMs) on constraint satisfaction problems (CSPs) by converting reasoning problems into classification tasks. Our method leverages the LLM’s ability to decide when to call a function from a set of logical-linguistic primitives, each of which can interact with a local “scratchpad” memory and a logical inference engine. Invoking these primitives in the correct order writes the constraints to the scratchpad memory and enables the logical engine to verifiably solve the problem. We additionally propose a formal framework for exploring the “linguistic” hardness of CSP reasoning problems for LLMs. Our experimental results demonstrate that under our proposed method, tasks with significant computational hardness can be converted into a form that is easier for LLMs to solve, yielding a 40% improvement over baselines. This opens up new avenues for future research into hybrid cognitive models that integrate symbolic and neural approaches.
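The sketch below illustrates the general idea described in the abstract: primitives write variables and constraints to a scratchpad, and a separate inference engine solves the accumulated problem. The primitive names (add_variable, add_constraint, solve), the toy word problem, and the brute-force solver are hypothetical stand-ins for illustration only, not the paper's actual primitive set or engine.

```python
from itertools import product

class Scratchpad:
    """Local memory that accumulates variables and constraints as primitives are invoked."""
    def __init__(self):
        self.domains = {}       # variable name -> list of candidate values
        self.constraints = []   # predicates over a full assignment dict

    def add_variable(self, name, domain):
        # Hypothetical primitive: register a variable and its domain.
        self.domains[name] = list(domain)

    def add_constraint(self, predicate):
        # Hypothetical primitive: record a constraint as a callable.
        self.constraints.append(predicate)

    def solve(self):
        """A minimal exhaustive inference engine: return the first satisfying assignment."""
        names = list(self.domains)
        for values in product(*(self.domains[n] for n in names)):
            assignment = dict(zip(names, values))
            if all(c(assignment) for c in self.constraints):
                return assignment
        return None

# An LLM choosing which primitive to invoke next (a classification decision over
# the primitive set) might emit this sequence for the toy problem
# "x and y are digits from 1 to 3, x is greater than y, and they sum to 4".
pad = Scratchpad()
pad.add_variable("x", range(1, 4))
pad.add_variable("y", range(1, 4))
pad.add_constraint(lambda a: a["x"] > a["y"])
pad.add_constraint(lambda a: a["x"] + a["y"] == 4)
print(pad.solve())  # {'x': 3, 'y': 1}
```

Under this framing, the model never computes the solution itself; its job at each step reduces to selecting the next primitive to call, while the symbolic engine provides a verifiable answer.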