2024
Harder Task Needs More Experts: Dynamic Routing in MoE Models
Quzhe Huang | Zhenwei An | Nan Zhuang | Mingxu Tao | Chen Zhang | Yang Jin | Kun Xu | Kun Xu | Liwei Chen | Songfang Huang | Yansong Feng
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty. Unlike existing MoE approaches that rely on fixed TopK Routing, which activates a predetermined number of experts regardless of the input's complexity, our method dynamically allocates experts based on the confidence level in expert selection for each input. This allows for more efficient utilization of computational resources, activating more experts for complex tasks that require advanced reasoning and fewer for simpler tasks. Through extensive evaluations, our dynamic routing method demonstrates substantial improvements over Top2 Routing across various benchmarks, achieving an average improvement of 0.7% while activating fewer than 90% of the parameters. Further analysis shows that our model dispatches more experts to tasks requiring complex reasoning skills, like BBH, confirming its ability to dynamically allocate computational resources in alignment with the input's complexity. Our findings also highlight variation in the number of experts needed across different layers of the transformer model, offering insights into the potential for designing heterogeneous MoE frameworks. The code and models are available at https://github.com/ZhenweiAn/Dynamic_MoE.
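The confidence-based allocation described in the abstract can be sketched as a top-p style rule over the router's softmax distribution: activate the smallest set of experts whose cumulative routing probability exceeds a threshold, so a confident router uses few experts and an uncertain one uses more. This is a minimal illustration under assumed details, not the authors' implementation; the threshold value, the expert cap, and all names are assumptions.

```python
import numpy as np

def dynamic_route(router_logits, p=0.9, max_experts=8):
    """Select the smallest set of experts whose cumulative routing
    probability exceeds p (illustrative confidence-based routing)."""
    probs = np.exp(router_logits - np.max(router_logits))
    probs /= probs.sum()                  # softmax over experts
    order = np.argsort(-probs)            # most confident first
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, p)) + 1  # experts needed to exceed p
    k = min(k, max_experts)
    return order[:k], probs[order[:k]]

# A peaked (confident) router distribution activates a single expert,
# while a flat (uncertain) one activates all four.
experts, weights = dynamic_route(np.array([4.0, 0.5, 0.2, 0.1]), p=0.9)
experts2, _ = dynamic_route(np.array([1.0, 0.9, 0.8, 0.7]), p=0.9)
```

The same rule naturally varies the number of experts per layer, since each layer's router produces its own distribution.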
2016
A Preliminary Study of Disputation Behavior in Online Debating Forum
Zhongyu Wei | Yandi Xia | Chen Li | Yang Liu | Zachary Stallbohm | Yi Li | Yang Jin
Proceedings of the Third Workshop on Argument Mining (ArgMining2016)
Using Relevant Public Posts to Enhance News Article Summarization
Chen Li | Zhongyu Wei | Yang Liu | Yang Jin | Fei Huang
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
A news article summary usually consists of 2-3 key sentences that reflect the gist of that article. In this paper we explore using public posts following a news article to improve automatic summary generation for that article. We propose different approaches to incorporating information from public posts: using frequency information from the posts to re-estimate bigram weights in the ILP-based summarization model and to re-weight the importance of dependency tree edges for sentence compression, directly selecting sentences from posts as the final summary, and combining the summarization results generated from news articles and posts. Our experiments on data collected from Facebook show that relevant public posts provide useful information and can be effectively leveraged to improve news article summarization results.
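The first approach, re-estimating bigram weights from post frequency, can be sketched with a coverage-based sentence selector. This is only an illustration of the idea: the paper formulates selection as an ILP, whereas this sketch uses greedy selection, and the mixing parameter `alpha`, the function names, and the scoring details are assumptions.

```python
from collections import Counter

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

def post_weighted_summary(article_sents, posts, k=2, alpha=0.5):
    """Pick k sentences covering the highest-weight bigrams, where
    bigram weights mix article counts with counts from public posts
    (greedy stand-in for the paper's ILP; alpha is illustrative)."""
    art_counts = Counter(bg for s in article_sents for bg in bigrams(s.split()))
    post_counts = Counter(bg for p in posts for bg in bigrams(p.split()))
    weight = {bg: art_counts[bg] + alpha * post_counts[bg] for bg in art_counts}

    summary, covered = [], set()
    for _ in range(k):
        best = max(
            (s for s in article_sents if s not in summary),
            # Score only bigrams not yet covered by chosen sentences.
            key=lambda s: sum(weight.get(bg, 0)
                              for bg in set(bigrams(s.split())) - covered),
            default=None,
        )
        if best is None:
            break
        summary.append(best)
        covered |= set(bigrams(best.split()))
    return summary

article = ["the cat sat on the mat", "dogs bark loudly", "the cat likes the mat"]
posts = ["the cat sat", "the cat sat"]
top = post_weighted_summary(article, posts, k=1)
```

Because the posts repeat "the cat sat", its bigrams gain weight and the sentence containing them is preferred.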
2006
Human Gene Name Normalization using Text Matching with Automatically Extracted Synonym Dictionaries
Haw-ren Fang | Kevin Murphy | Yang Jin | Jessica Kim | Peter White
Proceedings of the HLT-NAACL BioNLP Workshop on Linking Natural Language and Biology
2005
Simple Algorithms for Complex Relation Extraction with Applications to Biomedical IE
Ryan McDonald | Fernando Pereira | Seth Kulick | Scott Winters | Yang Jin | Pete White
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)