Dongzhuoran Zhou
2026
What Breaks Knowledge Graph based RAG? Benchmarking and Empirical Insights into Reasoning under Incomplete Knowledge
Dongzhuoran Zhou | Yuqicheng Zhu | Xiaxia Wang | Hongkuan Zhou | Yuan He | Jiaoyan Chen | Steffen Staab | Evgeny Kharlamov
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Knowledge Graph-based Retrieval-Augmented Generation (KG-RAG) is an increasingly explored approach for combining the reasoning capabilities of large language models with the structured evidence of knowledge graphs. However, current evaluation practices fall short: existing benchmarks often include questions that can be answered directly from triples already present in the KG, making it unclear whether models actually reason or simply retrieve the answer. Moreover, inconsistent evaluation metrics and lenient answer-matching criteria further obscure meaningful comparisons. In this work, we introduce a general method for constructing benchmarks and present BRINK (Benchmark for Reasoning under Incomplete Knowledge) to systematically assess KG-RAG methods under knowledge incompleteness. Our empirical results show that current KG-RAG methods have limited reasoning ability when knowledge is missing, often rely on internal memorization, and exhibit varying degrees of generalization depending on their design.
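A minimal sketch of the benchmark-construction idea the abstract describes: remove the triple that directly states a question's answer, so a KG-RAG system must reason over the remaining graph rather than retrieve the fact. The triple format, the directly_answers heuristic, and the example graph are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch, not the paper's method: drop triples that let a
# retriever read off the gold answer, forcing multi-hop reasoning instead.

Triple = tuple[str, str, str]  # (head, relation, tail)

def directly_answers(triple: Triple, question_entity: str, answer: str) -> bool:
    """Assumed heuristic: a triple answers the question directly if it
    links the question's entity to the gold answer."""
    head, _, tail = triple
    return head == question_entity and tail == answer

def make_incomplete_kg(kg: set[Triple], question_entity: str, answer: str) -> set[Triple]:
    """Return the KG with every directly-answering triple removed."""
    return {t for t in kg if not directly_answers(t, question_entity, answer)}

kg = {
    ("Marie_Curie", "bornIn", "Warsaw"),
    ("Warsaw", "capitalOf", "Poland"),
    ("Marie_Curie", "citizenOf", "Poland"),
}
# Question: "Which country is Marie Curie a citizen of?" (answer: Poland)
incomplete = make_incomplete_kg(kg, "Marie_Curie", "Poland")
# The direct citizenOf triple is gone; answering now requires composing
# bornIn and capitalOf, which is the kind of reasoning the benchmark probes.
print(sorted(incomplete))
```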
2025
GQC: LLM-Based Grouped QA Consolidation for Open-Domain Fact Verification at AVeriTeC
Dongzhuoran Zhou | Roxana Pop | Yuqicheng Zhu | Evgeny Kharlamov
Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)
Structured fact verification benchmarks like AVeriTeC decompose claims into QA pairs to support fine-grained reasoning. However, current systems generate QA pairs independently for each evidence sentence, leading to redundancy, drift, and noise. We introduce a modular LLM-based QA consolidation module that jointly filters, clusters, and rewrites QA pairs at the claim level. Experiments show that this method improves evidence quality and veracity prediction accuracy. Our analysis also highlights the impact of model scale and alignment on downstream performance.
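A minimal sketch of claim-level QA consolidation in the spirit described above: filter low-quality QA pairs, cluster near-duplicates, and keep one representative per cluster. The filtering criterion, the lexical similarity stand-in, and the representative selection are assumptions; the paper's module performs these steps jointly with an LLM.

```python
# Hypothetical sketch of filter -> cluster -> rewrite consolidation.
# SequenceMatcher is a simple lexical stand-in for LLM-judged similarity.
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.7) -> bool:
    """Assumed similarity test; the threshold is arbitrary."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def consolidate(qa_pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # 1. Filter: drop pairs with empty answers (assumed quality criterion).
    kept = [(q, a) for q, a in qa_pairs if a.strip()]
    # 2. Cluster: greedily group questions that are near-duplicates.
    clusters: list[list[tuple[str, str]]] = []
    for q, a in kept:
        for cluster in clusters:
            if similar(q, cluster[0][0]):
                cluster.append((q, a))
                break
        else:
            clusters.append([(q, a)])
    # 3. Rewrite: keep the first pair per cluster; the paper's module would
    #    instead have an LLM merge and rewrite a claim-level QA pair.
    return [cluster[0] for cluster in clusters]

qa = [
    ("When did the event occur?", "In 2021."),
    ("When did the event happen?", "2021."),
    ("Who reported it?", ""),
    ("Who was the source of the report?", "Reuters."),
]
print(consolidate(qa))  # two consolidated pairs: one time question, one source question
```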