Zhuo Chen
2025
KBM: Delineating Knowledge Boundary for Adaptive Retrieval in Large Language Models
Zhen Zhang | Xinyu Wang | Yong Jiang | Zile Qiao | Zhuo Chen | Guangyu Li | Feiteng Mu | Mengting Hu | Pengjun Xie | Fei Huang
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Models (LLMs) often struggle with dynamically changing knowledge and with unknown static information. Retrieval-Augmented Generation (RAG) is employed to tackle these challenges and significantly improves LLM performance. In practice, however, not all questions need to trigger RAG: by retrieving only the knowledge unknown to the LLM and letting the LLM answer the rest on its own, we can effectively reduce both time and computational costs. In this work, we propose a Knowledge Boundary Model (KBM) that expresses whether a given question is known or unknown to the LLM and thereby determines whether RAG needs to be triggered. Experiments conducted on 11 English and Chinese datasets illustrate that the KBM effectively delineates the knowledge boundary, significantly decreasing the proportion of retrievals required for optimal end-to-end performance. Furthermore, we evaluate the effectiveness of KBM in three complex scenarios, namely dynamic knowledge, long-tail static knowledge, and multi-hop questions, as well as its functionality as a plug-in for external LLMs.
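To make the adaptive-retrieval idea concrete, the sketch below shows the gating logic in Python. The names `kbm_is_known`, `retrieve`, and `llm_answer` are hypothetical placeholders; this illustrates the control flow described in the abstract, not the paper's implementation.

```python
from typing import Callable, List

def adaptive_answer(
    question: str,
    kbm_is_known: Callable[[str], bool],   # hypothetical KBM known/unknown classifier
    retrieve: Callable[[str], List[str]],  # hypothetical passage retriever
    llm_answer: Callable[[str], str],      # hypothetical LLM call
) -> str:
    """Trigger RAG only when the KBM judges the question unknown."""
    if kbm_is_known(question):
        # The question falls inside the LLM's knowledge boundary:
        # answer directly and skip the retrieval cost entirely.
        return llm_answer(question)
    # Outside the boundary: fetch evidence and prepend it to the prompt.
    passages = retrieve(question)
    prompt = "\n".join(passages) + "\n\nQuestion: " + question
    return llm_answer(prompt)
```

The saving comes from the first branch: every question the KBM marks as known avoids one retrieval round trip.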
Detecting Knowledge Boundary of Vision Large Language Models by Sampling-Based Inference
Zhuo Chen | Xinyu Wang | Yong Jiang | Zhen Zhang | Xinyu Geng | Pengjun Xie | Fei Huang | Kewei Tu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Despite recent advances, Vision Large Language Models (VLLMs), like text-only Large Language Models (LLMs), have limitations in addressing questions that require real-time information or are knowledge-intensive. Indiscriminately adopting Retrieval-Augmented Generation (RAG) techniques is an effective yet expensive way to enable models to answer queries beyond their knowledge scope. To mitigate the dependence on retrieval while maintaining, or even improving, the performance benefits that retrieval provides, we propose a method to detect the knowledge boundary of VLLMs, allowing for more efficient use of techniques like RAG. Specifically, we propose a method with two variants that fine-tunes a VLLM on an automatically constructed dataset for boundary identification. Experimental results on various types of Visual Question Answering datasets show that our method successfully depicts a VLLM's knowledge boundary, based on which we are able to reduce indiscriminate retrieval while maintaining or improving performance. In addition, we show that the knowledge boundary identified by our method for one VLLM can serve as a surrogate boundary for other VLLMs. Code will be released at https://github.com/Chord-Chen-30/VLLM-KnowledgeBoundary
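One plausible reading of the sampling-based dataset construction is: sample several answers per visual question and label the question as within the boundary when the samples agree with the gold answer often enough. The sketch below assumes a hypothetical stochastic `sample_answer` call and a simple containment match; the paper's actual labeling procedure may differ.

```python
from typing import Callable, List, Tuple

def label_knowledge_boundary(
    examples: List[Tuple[str, str, str]],      # (image_path, question, gold_answer)
    sample_answer: Callable[[str, str], str],  # hypothetical stochastic VLLM call
    k: int = 10,
    threshold: float = 0.5,
) -> List[Tuple[str, str, bool]]:
    """Label each question 'known' if sampled answers hit the gold often enough."""
    labeled = []
    for image, question, gold in examples:
        hits = sum(
            gold.lower() in sample_answer(image, question).lower()
            for _ in range(k)
        )
        # The boolean label later supervises boundary-identification fine-tuning.
        labeled.append((image, question, hits / k >= threshold))
    return labeled
```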
2024
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts
Zhuo Chen | Xinyu Wang | Yong Jiang | Pengjun Xie | Fei Huang | Kewei Tu
Findings of the Association for Computational Linguistics: ACL 2024
In the era of large language models, techniques such as Retrieval-Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints such as model size and computing resources, the context length is often limited, and it becomes challenging to empower the model to cover overlong contexts while answering questions from open domains. This paper proposes a general and convenient method to cover longer contexts in Open-Domain Question-Answering tasks. It leverages a small encoder and a cross-attention mechanism to effectively encode contexts. With our method, the original language models can cover several times longer contexts while keeping the computing requirements close to the baseline. Our experiments demonstrate that after fine-tuning, performance improves across two held-in datasets, four held-out datasets, and two In-Context Learning settings. Our code will be released at https://github.com/Alibaba-NLP/Vec-RA-ODQA.
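The core mechanism, a small encoder that compresses passages into a few vectors which the main model then reads through cross-attention, could look roughly like the following PyTorch sketch. The module name, dimensions, and wiring are illustrative assumptions rather than the released code (see the repository linked above).

```python
import torch
import torch.nn as nn

class VectorizedContextAttention(nn.Module):
    """Cross-attend LM hidden states to pre-encoded context vectors."""

    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, ctx_vectors: torch.Tensor) -> torch.Tensor:
        # hidden:      (batch, seq_len, d_model) states of the main language model
        # ctx_vectors: (batch, n_ctx,  d_model) compressed passage vectors from a
        #              small encoder; n_ctx is far smaller than the raw token count
        attended, _ = self.cross_attn(query=hidden, key=ctx_vectors, value=ctx_vectors)
        return self.norm(hidden + attended)  # residual connection, then normalize
```

Because the main model attends to a handful of compressed vectors instead of every raw context token, the attention cost stays close to the no-retrieval baseline even as the covered context grows.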
2023
Using Interpretation Methods for Model Enhancement
Zhuo Chen | Chengyue Jiang | Kewei Tu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
In the age of neural natural language processing, many works try to derive interpretations of neural models. Intuitively, when gold rationales are available during training, one can additionally train the model to match its interpretation with the rationales. However, this intuitive idea has not been fully explored. In this paper, we propose a framework for utilizing interpretation methods and gold rationales to enhance models. Our framework is general in the sense that it can incorporate various interpretation methods. Previously proposed gradient-based methods can be shown to be instances of our framework. We also propose two novel instances utilizing two other types of interpretation methods, erasure/replace-based and extractor-based, for model enhancement. We conduct comprehensive experiments on a variety of tasks. The results show that our framework is effective in enhancing models with various interpretation methods, especially in low-resource settings, and that our two newly proposed methods outperform gradient-based methods in most settings. Code is available at https://github.com/Chord-Chen-30/UIMER.
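As one concrete instance of matching an interpretation to gold rationales, the gradient-based case can be sketched as an auxiliary loss that pushes token saliency toward the rationale mask. The formulation below is an assumed illustration, not the paper's exact objective (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def rationale_alignment_loss(
    embeddings: torch.Tensor,      # (seq_len, dim) input embeddings, requires_grad=True
    task_loss: torch.Tensor,       # scalar task loss computed from these embeddings
    rationale_mask: torch.Tensor,  # (seq_len,) 1.0 on gold-rationale tokens, else 0.0
) -> torch.Tensor:
    """Encourage gradient saliency to concentrate on gold-rationale tokens."""
    # Gradient-based interpretation: per-token saliency from the task loss.
    grads, = torch.autograd.grad(task_loss, embeddings, create_graph=True)
    saliency = grads.norm(dim=-1)
    saliency = saliency / (saliency.sum() + 1e-8)      # normalize to a distribution
    target = rationale_mask / (rationale_mask.sum() + 1e-8)
    # KL divergence between saliency and the rationale distribution; this term
    # would be added to the task loss with a weighting coefficient in training.
    return F.kl_div(saliency.clamp_min(1e-8).log(), target, reduction="sum")
```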