A Survey of RAG-Reasoning Systems in Large Language Models
Yangning Li | Weizhi Zhang | Yuyao Yang | Wei-Chieh Huang | Yaozu Wu | Junyu Luo | Yuanchen Bei | Henry Peng Zou | Xiao Luo | Yusheng Zhao | Chunkit Chan | Yankai Chen | Zhongfen Deng | Yinghui Li | Hai-Tao Zheng | Dongyuan Li | Renhe Jiang | Ming Zhang | Yangqiu Song | Philip S. Yu
Findings of the Association for Computational Linguistics: EMNLP 2025
Retrieval-Augmented Generation (RAG) improves the factuality of Large Language Models (LLMs) by injecting external knowledge, yet it falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-search perspective. We first map how advanced reasoning optimizes each stage of RAG (Reasoning-Enhanced RAG). Then, we show how retrieved knowledge of different types supplies missing premises and expands context for complex inference (RAG-Enhanced Reasoning). Finally, we spotlight emerging Synergized RAG-Reasoning frameworks, where (agentic) LLMs iteratively interleave search and thought to achieve state-of-the-art performance across knowledge-intensive benchmarks. We categorize methods, datasets, and open challenges, and outline research avenues toward deeper RAG-Reasoning systems that are more effective, multimodally adaptive, trustworthy, and human-centric.
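For concreteness, the interleaved search-and-thought loop that the abstract attributes to Synergized RAG-Reasoning frameworks can be sketched as below. This is a minimal illustrative sketch, not a method from the survey: every name in it (Thought, llm_think, search, rag_reasoning) is a hypothetical placeholder standing in for a real LLM call and a real retriever.

```python
from dataclasses import dataclass

@dataclass
class Thought:
    """Output of one reasoning step: either a final answer or a follow-up query."""
    is_final: bool
    answer: str = ""
    query: str = ""

def llm_think(question: str, context: list[str], force_answer: bool = False) -> Thought:
    # Hypothetical stand-in for an LLM call. A real agent would read the
    # question plus the evidence gathered so far and decide whether to
    # answer or to issue another search query.
    if context or force_answer:
        return Thought(is_final=True, answer=f"answer grounded in {len(context)} passages")
    return Thought(is_final=False, query=question)

def search(query: str, top_k: int = 3) -> list[str]:
    # Hypothetical stand-in for a retriever over an external corpus.
    return [f"passage {i} about '{query}'" for i in range(top_k)]

def rag_reasoning(question: str, max_steps: int = 5) -> str:
    context: list[str] = []  # evidence accumulated across iterations
    for _ in range(max_steps):
        thought = llm_think(question, context)   # reason over evidence so far
        if thought.is_final:
            return thought.answer
        context.extend(search(thought.query))    # retrieve missing premises
    # Budget exhausted: answer from whatever evidence was gathered.
    return llm_think(question, context, force_answer=True).answer

print(rag_reasoning("Which river flows through the city where the author was born?"))
```

The point of the sketch is only the control flow: retrieval and reasoning alternate, with each reasoning step deciding whether the current evidence suffices or a further search is needed.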