Zhanqiu Zhang
2024
Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs
Xin Zhou | Ping Nie | Yiwen Guo | Haojie Wei | Zhanqiu Zhang | Pasquale Minervini | Ruotian Ma | Tao Gui | Qi Zhang | Xuanjing Huang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Retrieval-Augmented Generation (RAG) has significantly improved the ability of Large Language Models (LLMs) to solve knowledge-intensive tasks. While existing research seeks to enhance RAG performance by retrieving higher-quality documents or designing RAG-specific LLMs, the internal mechanisms within LLMs that contribute to RAG’s effectiveness remain underexplored. In this paper, we aim to investigate these internal mechanisms within popular Mixture-of-Experts (MoE)-based LLMs and demonstrate how to improve RAG by examining expert activations in these LLMs. Our controlled experiments reveal that several core groups of experts are primarily responsible for RAG-related behaviors. The activation of these core experts can signify the model’s inclination towards external/internal knowledge and adjust its behavior. For instance, we identify core experts that can (1) indicate the sufficiency of the model’s internal knowledge, (2) assess the quality of retrieved documents, and (3) enhance the model’s ability to utilize context. Based on these findings, we propose several strategies to enhance RAG’s efficiency and effectiveness through expert activation. Experimental results across various datasets and MoE LLMs demonstrate the effectiveness of our method.
2021
Deep Cognitive Reasoning Network for Multi-hop Question Answering over Knowledge Graphs
Jianyu Cai | Zhanqiu Zhang | Feng Wu | Jie Wang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Co-authors
- Xin Zhou 1
- Ping Nie 1
- Yiwen Guo 1
- Haojie Wei 1
- Pasquale Minervini 1