Chaozheng Wang
2024
Split and Merge: Aligning Position Biases in LLM-based Evaluators
Zongjie Li | Chaozheng Wang | Pingchuan Ma | Daoyuan Wu | Shuai Wang | Cuiyun Gao | Yang Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have shown promise as automated evaluators for assessing the quality of answers generated by AI systems. However, LLM-based evaluators exhibit position bias, or inconsistency, when used to evaluate candidate answers in pairwise comparisons, favoring either the first or second answer regardless of content. To address this limitation, we propose PORTIA, an alignment-based system designed to mimic human comparison strategies to calibrate position bias in a lightweight yet effective manner. Specifically, PORTIA splits the answers into multiple segments, taking into account both length and semantics, and merges them back into a single prompt for evaluation by LLMs. Extensive experiments with six LLMs on 11,520 answer pairs demonstrate that PORTIA markedly enhances the consistency rates for all models and forms of comparison tested, achieving an average relative improvement of 47.46%. It also enables PORTIA-enhanced GPT-3.5 to achieve agreement rates with humans comparable to GPT-4 and elevates GPT-4’s consistency rate up to 98%. Subsequent human evaluations indicate that the PORTIA-enhanced GPT-3.5 model can even surpass standalone GPT-4 in terms of alignment with human evaluators, highlighting PORTIA’s ability to correct position bias, improve LLM consistency, and boost performance while maintaining cost efficiency.
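The following is a minimal sketch of the split-and-merge idea described in the abstract: both candidate answers are cut into aligned segments and interleaved into one evaluation prompt, so neither answer occupies only the first or only the last position. The function names, the purely length-based splitting, and the prompt wording are illustrative assumptions, not the paper's exact implementation.

```python
def split_into_segments(answer: str, k: int) -> list[str]:
    """Split an answer into k roughly length-balanced segments
    (a simple length-based proxy for the paper's length- and
    semantics-aware splitting)."""
    words = answer.split()
    size = max(1, len(words) // k)
    segments = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    # Fold any trailing remainder into the k-th segment.
    if len(segments) > k:
        segments[k - 1] = " ".join(segments[k - 1:])
        segments = segments[:k]
    return segments


def merge_for_evaluation(question: str, answer_a: str, answer_b: str, k: int = 3) -> str:
    """Interleave aligned segments of both answers into a single prompt
    for an LLM evaluator."""
    segs_a = split_into_segments(answer_a, k)
    segs_b = split_into_segments(answer_b, k)
    parts = [f"Question: {question}",
             "Compare the two answers part by part."]
    for i, (sa, sb) in enumerate(zip(segs_a, segs_b), start=1):
        parts.append(f"[Part {i}] Answer A: {sa}")
        parts.append(f"[Part {i}] Answer B: {sb}")
    parts.append("Which answer is better overall? Reply with 'A' or 'B'.")
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = merge_for_evaluation(
        "Explain why the sky is blue.",
        "Rayleigh scattering causes shorter wavelengths to scatter more strongly ...",
        "The sky is blue mainly because it reflects the ocean ...",
    )
    print(prompt)
```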
XMoE: Sparse Models with Fine-grained and Adaptive Expert Selection
Yuanhang Yang | Shiyi Qi | Wenchao Gu | Chaozheng Wang | Cuiyun Gao | Zenglin Xu
Findings of the Association for Computational Linguistics: ACL 2024
Sparse models, including sparse Mixture-of-Experts (MoE) models, have emerged as an effective approach for scaling Transformer models. However, they often suffer from computational inefficiency since a significant number of parameters are unnecessarily involved in computations by multiplying values by zero or low activation values. To address this issue, we present XMoE, a novel MoE designed to enhance both the efficacy and efficiency of sparse MoE models. XMoE leverages small experts and a threshold-based router to enable tokens to selectively engage only essential parameters. Our extensive experiments on language modeling and machine translation tasks demonstrate that XMoE enhances model performance and can decrease the computation load at MoE layers by over 50% without sacrificing performance. Furthermore, we present the versatility of XMoE by applying it to dense models, enabling sparse computation during inference. We provide a comprehensive analysis and make our code available at https://anonymous.4open.science/r/XMoE.
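Below is a minimal PyTorch sketch of the mechanism the abstract describes: small FFN experts plus a threshold-based router, so each token engages only the experts whose routing probability clears a threshold. The class name, layer widths, and threshold value are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThresholdMoE(nn.Module):
    """Sketch of a fine-grained MoE layer with threshold-based routing."""

    def __init__(self, d_model: int, num_experts: int = 8,
                 expert_dim: int = 64, threshold: float = 0.1):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        # "Small" experts: narrow FFNs instead of one wide FFN.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, expert_dim), nn.ReLU(),
                          nn.Linear(expert_dim, d_model))
            for _ in range(num_experts)
        ])
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)   # routing probabilities
        mask = probs >= self.threshold              # adaptive, per-token expert selection
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_idx = mask[:, e].nonzero(as_tuple=True)[0]
            if token_idx.numel() == 0:
                continue                            # expert skipped entirely: no wasted compute
            out[token_idx] += probs[token_idx, e].unsqueeze(-1) * expert(x[token_idx])
        return out


# Usage: route 16 token embeddings of width 128 through the layer.
layer = ThresholdMoE(d_model=128)
y = layer(torch.randn(16, 128))
print(y.shape)  # torch.Size([16, 128])
```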