2025
ExpandR: Teaching Dense Retrievers Beyond Queries with LLM Guidance
Sijia Yao
|
Pengcheng Huang
|
Zhenghao Liu
|
Yu Gu
|
Yukun Yan
|
Shi Yu
|
Ge Yu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have demonstrated significant potential in enhancing dense retrieval through query augmentation. However, most existing methods treat the LLM and the retriever as separate modules, overlooking the alignment between generation and ranking objectives. In this work, we propose ExpandR, a unified LLM-augmented dense retrieval framework that jointly optimizes both the LLM and the retriever. ExpandR employs the LLM to generate semantically rich query expansions, which are leveraged to enhance the retriever’s training. Simultaneously, the LLM is trained using Direct Preference Optimization (DPO), guided by a carefully designed reward function that balances retrieval effectiveness and generation consistency. This joint optimization paradigm enables mutual adaptation between the LLM and the retriever, resulting in query expansions that are both informative and well-suited for retrieval. Experimental results on multiple benchmarks show that ExpandR consistently outperforms strong baselines, achieving more than a 5% improvement in retrieval performance. All code is available at https://github.com/NEUIR/ExpandR.
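As a rough illustration of the reward-and-preference step described in this abstract, the sketch below scores candidate expansions with a weighted mix of a retrieval-effectiveness signal and a consistency signal, then keeps the best and worst candidates as a (chosen, rejected) pair for DPO. The `Expansion` fields, the scoring placeholders, and the weight `alpha` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, assuming a precomputed retrieval score (e.g., nDCG gain when the
# expansion is appended to the query) and a consistency score (e.g., similarity to
# the original query) for each candidate expansion. Values below are made up.
from dataclasses import dataclass


@dataclass
class Expansion:
    text: str
    retrieval_score: float    # how much the expansion helps the retriever
    consistency_score: float  # how faithful the expansion is to the query


def reward(exp: Expansion, alpha: float = 0.5) -> float:
    """Hypothetical weighting that balances retrieval effectiveness and consistency."""
    return alpha * exp.retrieval_score + (1.0 - alpha) * exp.consistency_score


def build_dpo_pair(candidates: list[Expansion]) -> tuple[Expansion, Expansion]:
    """Use the highest- and lowest-reward expansions as the (chosen, rejected) DPO pair."""
    ranked = sorted(candidates, key=reward, reverse=True)
    return ranked[0], ranked[-1]


if __name__ == "__main__":
    candidates = [
        Expansion("what causes auroras in polar regions", 0.72, 0.90),
        Expansion("northern lights travel guide", 0.35, 0.40),
    ]
    chosen, rejected = build_dpo_pair(candidates)
    print("chosen:", chosen.text, "| rejected:", rejected.text)
```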
Position IDs Matter: An Enhanced Position Layout for Efficient Context Compression in Large Language Models
Runsong Zhao
|
Xin Liu
|
Xinyu Liu
|
Pengcheng Huang
|
Chunyang Xiao
|
Tong Xiao
|
JingBo Zhu
Findings of the Association for Computational Linguistics: EMNLP 2025
Using special tokens (e.g., gist, memory, or compressed tokens) to compress context information is a common practice for large language models (LLMs). However, existing approaches often neglect that position encodings inherently induce local inductive biases in models, causing the compression process to ignore holistic contextual dependencies. We propose Enhanced Position Layout (EPL), a simple yet effective method that improves the context compression capability of LLMs by only adjusting position IDs, the numerical identifiers that specify token positions. EPL minimizes the distance between context tokens and their corresponding special tokens while maintaining the sequence order in position IDs among context tokens, special tokens, and subsequent tokens. Integrating EPL into our best-performing context compression model yields a 1.9 ROUGE-1 F1 improvement on out-of-domain question answering datasets on average. When extended to multimodal scenarios, EPL brings an average accuracy gain of 2.6 points to vision compression LLMs.
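To make the position-ID idea concrete, here is a toy layout that satisfies the two properties named in the abstract: each chunk's special tokens get position IDs immediately after that chunk's context tokens, and the subsequent text continues after the last special token. The chunking scheme, the `layout_position_ids` helper, and the numbers are hypothetical; the paper's actual EPL layout may differ.

```python
# Toy sketch: assign position IDs so that context tokens sit close, in ID space,
# to the special (compressed) tokens that summarize them, while IDs stay
# consistent with the order of each chunk, its special tokens, and later text.
def layout_position_ids(chunk_lens, specials_per_chunk, n_subsequent):
    """Return (context_ids, special_ids, subsequent_ids) as flat lists."""
    context_ids, special_ids = [], []
    cursor = 0
    for clen in chunk_lens:
        # context tokens of this chunk
        context_ids.extend(range(cursor, cursor + clen))
        cursor += clen
        # its special tokens placed immediately after the chunk in ID space
        special_ids.extend(range(cursor, cursor + specials_per_chunk))
        cursor += specials_per_chunk
    subsequent_ids = list(range(cursor, cursor + n_subsequent))
    return context_ids, special_ids, subsequent_ids


if __name__ == "__main__":
    ctx, spec, nxt = layout_position_ids(chunk_lens=[4, 4], specials_per_chunk=1, n_subsequent=3)
    print(ctx)   # [0, 1, 2, 3, 5, 6, 7, 8]
    print(spec)  # [4, 9]
    print(nxt)   # [10, 11, 12]
```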
ClueAnchor: Clue-Anchored Knowledge Reasoning Exploration and Optimization for Retrieval-Augmented Generation
Hao Chen
|
Yukun Yan
|
Sen Mei
|
Wanxiang Che
|
Zhenghao Liu
|
Qi Shi
|
Xinze Li
|
Yuchun Fan
|
Pengcheng Huang
|
Qiushi Xiong
|
Zhiyuan Liu
|
Maosong Sun
Findings of the Association for Computational Linguistics: EMNLP 2025
Retrieval-Augmented Generation (RAG) augments Large Language Models (LLMs) with external knowledge to improve factuality. However, existing RAG systems frequently underutilize the retrieved documents, failing to extract and integrate the key clues needed to support faithful and interpretable reasoning, especially in cases where relevant evidence is implicit, scattered, or obscured by noise. To address this issue, we propose ClueAnchor, a novel framework for enhancing RAG via clue-anchored reasoning exploration and optimization. ClueAnchor extracts key clues from retrieved content and generates multiple reasoning paths based on different knowledge configurations, optimizing the model by selecting the most appropriate reasoning path for the given context through reward-based preference optimization. Experiments show that ClueAnchor significantly outperforms prior RAG baselines in the completeness and robustness of reasoning. Further analysis confirms its strong resilience to noisy or partially relevant retrieved content, as well as its capability to identify supporting evidence even in the absence of explicit clue supervision during inference. All code is available at https://github.com/thunlp/ClueAnchor.
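A minimal sketch of the path-selection idea in this abstract: generate reasoning paths under different knowledge configurations, score them with a reward, and keep the best and worst as a preference pair. The configuration names, the exact-match reward, and the function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch, assuming each reasoning path records its knowledge configuration,
# its reasoning trace, and its final answer; the reward here is simple
# exact-match against a gold answer for illustration only.
from typing import Callable

ReasoningPath = dict  # {"config": str, "reasoning": str, "answer": str}


def exact_match_reward(path: ReasoningPath, gold_answer: str) -> float:
    return 1.0 if path["answer"].strip().lower() == gold_answer.strip().lower() else 0.0


def select_preference_pair(paths: list[ReasoningPath], gold_answer: str,
                           reward_fn: Callable[[ReasoningPath, str], float] = exact_match_reward):
    """Return (chosen, rejected) paths for reward-based preference optimization."""
    ranked = sorted(paths, key=lambda p: reward_fn(p, gold_answer), reverse=True)
    return ranked[0], ranked[-1]


if __name__ == "__main__":
    paths = [
        {"config": "internal-only", "reasoning": "...", "answer": "1912"},
        {"config": "clue-anchored", "reasoning": "clue: 'maiden voyage in April 1912'", "answer": "April 1912"},
    ]
    chosen, rejected = select_preference_pair(paths, gold_answer="April 1912")
    print("chosen config:", chosen["config"])
```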
2024
Translate-and-Revise: Boosting Large Language Models for Constrained Translation
Pengcheng Huang
|
Yongyu Mu
|
Yuzhang Wu
|
Bei Li
|
Chunyang Xiao
|
Tong Xiao
|
Zhu Jingbo
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
“Imposing constraints on machine translation systems presents a challenging issue because these systems are not trained to make use of constraints in generating adequate, fluent translations. In this paper, we leverage the capabilities of large language models (LLMs) for constrained translation, given that LLMs can easily adapt to this task by taking translation instructions and constraints as prompts. However, LLMs cannot always guarantee the adequacy of translation and, in some cases, ignore the given constraints. This is in part because LLMs might be overly confident in their predictions, overriding the influence of the constraints. To overcome this overriding behaviour, we propose to add a revision process that encourages LLMs to correct the outputs by prompting them about the constraints that have not yet been met. We evaluate our approach on four constrained translation tasks, encompassing both lexical and structural constraints in multiple constraint domains. Experiments show a 15% improvement in constraint-based translation accuracy over standard LLMs, and the approach also significantly outperforms state-of-the-art neural machine translation (NMT) methods.”
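The translate-then-revise loop described in the abstract can be sketched as follows. Here `llm` is a placeholder callable (prompt in, text out), the prompt wording is invented, and the substring check for lexical constraints is a simplification of the paper's setup.

```python
# Hedged sketch of a revision loop: translate with constraints in the prompt,
# then repeatedly ask the model to revise, mentioning only the constraints
# that are still missing from the draft.
def constrained_translate(llm, source: str, constraints: list[str], max_rounds: int = 3) -> str:
    translation = llm(f"Translate to English, using these terms: {constraints}\n{source}")
    for _ in range(max_rounds):
        unmet = [c for c in constraints if c not in translation]
        if not unmet:
            break  # all lexical constraints satisfied
        translation = llm(
            f"Source: {source}\nDraft translation: {translation}\n"
            f"Revise the draft so it also contains: {unmet}"
        )
    return translation


if __name__ == "__main__":
    # Trivial stand-in LLM that only exists to make the loop runnable.
    def fake_llm(prompt: str) -> str:
        return "the model of the new car" if "Revise" in prompt else "the model of the car"

    print(constrained_translate(fake_llm, "das Modell des neuen Autos", ["new car"]))
```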
Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-Context Models
Xinyu Liu
|
Runsong Zhao
|
Pengcheng Huang
|
Chunyang Xiao
|
Bei Li
|
Jingang Wang
|
Tong Xiao
|
JingBo Zhu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Numerous recent works aim to extend the effective context length of language models, and various methods, tasks, and benchmarks exist to measure a model’s effective memory length. However, through thorough investigation, we find limitations in the existing evaluations of model memory. In this work, we survey these limitations extensively and propose a new method, the forgetting curve, to measure the memorization capability of long-context models. We show that the forgetting curve has the advantage of being robust to the tested corpus and experimental settings, of not relying on prompts, and of being applicable to models of any size. We apply our forgetting curve to a large variety of models spanning both transformer and RNN/SSM based architectures. Our measurement provides empirical evidence for the effectiveness of transformer extension techniques while raising questions about the effective length of RNN/SSM based models. We also examine the differences between our measurement and existing benchmarks as well as popular metrics for various models.
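One way to picture such a measurement is the copy-style probe below, which checks whether a model can reproduce a span it saw `gap` tokens earlier and reports accuracy per gap. The probe design, the synthetic vocabulary, and the `model_next_token` interface are assumptions made for illustration, not the paper's exact protocol.

```python
# Rough sketch: trace a "forgetting curve" by measuring how well a model can
# reproduce a random span after increasingly long stretches of filler tokens.
import random


def forgetting_curve(model_next_token, gaps, span_len=16, vocab=tuple(range(1000)), seed=0):
    rng = random.Random(seed)
    curve = {}
    for gap in gaps:
        span = [rng.choice(vocab) for _ in range(span_len)]
        filler = [rng.choice(vocab) for _ in range(gap)]
        context = span + filler + span[:1]  # cue the model with the span's first token
        correct = 0
        for i in range(1, span_len):
            pred = model_next_token(context)
            correct += int(pred == span[i])
            context = context + [span[i]]   # teacher forcing with the true token
        curve[gap] = correct / (span_len - 1)
    return curve


if __name__ == "__main__":
    def dummy(tokens):
        """Memory-free baseline: always repeat the most recent token (near-chance accuracy)."""
        return tokens[-1]

    print(forgetting_curve(dummy, gaps=[64, 256, 1024]))
```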