Yanhua Huang
2025
Dynamic Collaboration of Multi-Language Models based on Minimal Complete Semantic Units
Chao Hao | Zezheng Wang | Yanhua Huang | Ruiwen Xu | Wenzhe Niu | Xin Liu | Zitong Yu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This paper investigates the enhancement of reasoning capabilities in language models through token-level multi-model collaboration. Our approach selects the optimal token from the next-token distributions provided by multiple models to perform autoregressive reasoning. Contrary to the assumption that more models yield better results, we introduce a distribution distance-based dynamic selection strategy (DDS) to optimize the multi-model collaboration process. To address the critical challenge of vocabulary misalignment in multi-model collaboration, we propose the concept of minimal complete semantic units (MCSU), which is simple yet enables multiple language models to achieve natural alignment within the linguistic space. Experimental results across various benchmarks demonstrate the superiority of our method. The code will be released soon.
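The abstract describes selecting the next token from several models' distributions via a distribution-distance criterion. Below is a minimal sketch of that idea, not the paper's actual algorithm: it assumes the vocabularies are already aligned (the role MCSU plays in the paper), uses Jensen-Shannon divergence as the distance, and introduces a hypothetical `threshold` parameter for dynamically dropping models whose distributions disagree with the rest.

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (smoothed with eps)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def js_distance(p, q):
    """Jensen-Shannon divergence, a symmetric distribution distance."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def select_next_token(dists, threshold=0.2):
    """Sketch of distance-based dynamic model selection for one decoding step.

    dists: list of per-model next-token distributions over a shared
    (aligned) vocabulary. `threshold` is an illustrative cutoff, not a
    value from the paper.
    """
    n = len(dists)
    # Mean distance from each model's distribution to all the others.
    scores = [
        sum(js_distance(dists[i], dists[j]) for j in range(n) if j != i)
        / max(n - 1, 1)
        for i in range(n)
    ]
    # Dynamically keep only models that agree with the ensemble.
    kept = [i for i in range(n) if scores[i] <= threshold] or [0]
    # Average the kept distributions and take the argmax token.
    avg = [sum(dists[i][t] for i in kept) / len(kept)
           for t in range(len(dists[0]))]
    return max(range(len(avg)), key=avg.__getitem__)
```

In this toy setup, a model whose distribution diverges sharply from the others is excluded from the averaged vote for that step; the actual DDS criterion and MCSU-based alignment are defined in the paper.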
SEARA: An Automated Approach for Obtaining Optimal Retrievers
Zou Yuheng | Wang Yiran | Tian Yuzhu | Zhu Min | Yanhua Huang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Retrieval-Augmented Generation (RAG) is a core approach for enhancing Large Language Models (LLMs), where the effectiveness of the retriever largely determines the overall response quality of RAG systems. Retrievers encompass a multitude of hyperparameters that significantly impact performance and are sensitive to the specific application. Nevertheless, hyperparameter optimization entails prohibitively high computational expense, and existing evaluation methods suffer from either prohibitive cost or disconnection from domain-specific scenarios. This paper proposes SEARA (Subset sampling Evaluation for Automatic Retriever Assessment), which addresses evaluation-data challenges through subset sampling and achieves robust automated retriever evaluation via minimal retrieval-fact extraction and comprehensive retrieval metrics. Based on real user queries, this method enables fully automated retriever evaluation at low cost, thereby obtaining the optimal retriever for a specific business scenario. We validate our method on classic RAG applications at rednote, including a knowledge-based Q&A system and a retrieval-based travel assistant, successfully obtaining scenario-specific optimal retrievers.
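The pipeline the abstract outlines (sample a subset of real queries, check each retriever's output against minimal retrieval facts, rank by a retrieval metric) can be sketched as follows. This is an illustration of the general idea only: the function and parameter names are hypothetical, and fact-level recall stands in for the paper's comprehensive metric suite.

```python
import random

def evaluate_retrievers(queries, facts, retrievers, sample_size=100, seed=0):
    """Subset-sampling retriever evaluation (a sketch of the SEARA idea).

    queries: real user queries; facts: dict mapping each query to the
    minimal set of facts a good retrieval must cover; retrievers: dict of
    name -> callable(query) -> list of retrieved passages. All names here
    are illustrative, not the paper's API.
    """
    rng = random.Random(seed)
    # Evaluate on a cheap random subset instead of the full query log.
    subset = rng.sample(queries, min(sample_size, len(queries)))
    scores = {}
    for name, retrieve in retrievers.items():
        hits, total = 0, 0
        for q in subset:
            retrieved = " ".join(retrieve(q))
            for fact in facts[q]:
                total += 1
                hits += fact in retrieved  # fact-level recall
        scores[name] = hits / max(total, 1)
    # The scenario-specific optimal retriever: highest fact recall.
    best = max(scores, key=scores.get)
    return best, scores
```

Each "retriever" here would in practice be one hyperparameter configuration (chunk size, top-k, embedding model, and so on), so the loop amounts to a low-cost hyperparameter search driven by real queries.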