Bingchen Zhong


2025

Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning
Yukang Lin | Bingchen Zhong | Shuoran Jiang | Joanna Siebert | Qingcai Chen
Proceedings of the 31st International Conference on Computational Linguistics

Large language models (LLMs) have exhibited remarkable few-shot learning capabilities and unified the paradigm of NLP tasks through the in-context learning (ICL) technique. Despite the success of ICL, the quality of the exemplar demonstrations can significantly influence the LLM’s performance. Existing exemplar selection methods mainly focus on the semantic similarity between queries and candidate exemplars. On the other hand, the logical connections between reasoning steps can also help depict the problem-solving process. This paper proposes a novel method named Reasoning Graph-enhanced Exemplar Retrieval (RGER). RGER first queries the LLM to generate an initial response and then converts the intermediate problem-solving steps into a graph structure. After that, it employs a graph kernel to select exemplars with both semantic and structural similarity. Extensive experiments demonstrate that the structural relationship helps align queries with candidate exemplars. The efficacy of RGER on mathematical and logical reasoning tasks showcases its superiority over state-of-the-art retrieval-based approaches.
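
To make the retrieval idea in the abstract concrete, here is a minimal sketch of combining semantic (embedding) similarity with a graph-kernel similarity over reasoning graphs. It is not the authors' implementation: the graph format (node labels for reasoning steps, edges for dependencies), the use of a Weisfeiler-Lehman subtree kernel, and the weighting parameter `alpha` are all illustrative assumptions.

```python
# Sketch: exemplar ranking by mixed semantic + structural similarity.
# Graphs are hypothetical reasoning graphs: node labels = step types,
# edges = dependencies between steps. A Weisfeiler-Lehman label histogram
# stands in for the graph kernel described in the abstract.

from collections import Counter
from math import sqrt

def wl_histogram(labels, edges, iterations=2):
    """WL relabeling: hash each node's label together with its sorted
    neighbor labels, repeat, and collect all labels into a histogram."""
    neigh = {i: [] for i in range(len(labels))}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    hist = Counter(labels)
    current = list(labels)
    for _ in range(iterations):
        current = [
            hash((current[i], tuple(sorted(current[j] for j in neigh[i]))))
            for i in range(len(labels))
        ]
        hist.update(current)
    return hist

def kernel(h1, h2):
    """Dot product of two label histograms (WL subtree kernel value)."""
    return sum(h1[k] * h2[k] for k in h1 if k in h2)

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def rank_exemplars(query, candidates, alpha=0.5):
    """Score each candidate by a weighted mix of embedding cosine similarity
    and normalized graph-kernel similarity, then sort descending."""
    qh = wl_histogram(query["node_labels"], query["edges"])
    scored = []
    for c in candidates:
        ch = wl_histogram(c["node_labels"], c["edges"])
        k = kernel(qh, ch) / sqrt(kernel(qh, qh) * kernel(ch, ch))
        s = alpha * cosine(query["embedding"], c["embedding"]) + (1 - alpha) * k
        scored.append((s, c["id"]))
    return sorted(scored, reverse=True)

# Toy usage: candidate ex1 is structurally and semantically closer to the query.
query = {"node_labels": ["add", "mul", "ans"], "edges": [(0, 1), (1, 2)],
         "embedding": [0.9, 0.1, 0.2]}
candidates = [
    {"id": "ex1", "node_labels": ["add", "mul", "ans"], "edges": [(0, 1), (1, 2)],
     "embedding": [0.8, 0.2, 0.1]},
    {"id": "ex2", "node_labels": ["sub", "ans"], "edges": [(0, 1)],
     "embedding": [0.1, 0.9, 0.3]},
]
print(rank_exemplars(query, candidates))
```

In this toy run, the first candidate is ranked highest because it matches the query both in embedding space and in reasoning-graph structure, which is the alignment effect the abstract attributes to structural similarity.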