HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts

Hao Zhao, Zihan Qiu, Huijia Wu, Zili Wang, Zhaofeng He, Jie Fu


Abstract
The Mixture of Experts (MoE) for language models has proven effective in augmenting model capacity by dynamically routing each input token to a specific subset of experts for processing. Despite this success, most existing methods face a trade-off between sparsity and the availability of expert knowledge: improving performance by drawing on more expert knowledge tends to reduce sparsity in expert selection. To mitigate this tension, we propose HyperMoE, a novel MoE framework built upon hypernetworks. The framework integrates the computation of MoE with the concept of knowledge transfer in multi-task learning. Modules generated from information about the unselected experts serve as supplementary signals, allowing the knowledge of unselected experts to be used while preserving selection sparsity. Comprehensive empirical evaluations across multiple datasets and backbones show that HyperMoE significantly outperforms existing MoE methods given the same number of experts. Our code is publicly available at https://github.com/Bumble666/Hyper_MoE
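As a rough illustration of the idea the abstract describes, the PyTorch sketch below combines standard top-k expert routing with a hypernetwork that maps a summary of the unselected experts' embeddings to a per-token low-rank module. All class names, shapes, and the specific low-rank adapter form are assumptions made for this sketch, not the paper's actual architecture; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperMoELayer(nn.Module):
    """Toy MoE layer: top-k routing plus a hypernetwork-generated
    low-rank module conditioned on the *unselected* experts.
    (Illustrative sketch, not the HyperMoE paper's implementation.)"""

    def __init__(self, d_model=64, n_experts=8, k=2, d_emb=16, rank=4):
        super().__init__()
        self.k, self.d_model, self.rank = k, d_model, rank
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        # One learned embedding per expert, consumed by the hypernetwork.
        self.expert_emb = nn.Embedding(n_experts, d_emb)
        # Hypernetwork heads: unselected-expert summary -> adapter weights.
        self.hyper_down = nn.Linear(d_emb, rank * d_model)
        self.hyper_up = nn.Linear(d_emb, d_model * rank)

    def forward(self, x):  # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        top_w, top_i = gates.topk(self.k, dim=-1)

        # Standard sparse MoE: weighted sum over the k selected experts.
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                hit = top_i[:, slot] == e
                if hit.any():
                    out[hit] += top_w[hit, slot, None] * expert(x[hit])

        # Average the embeddings of the experts each token did NOT select.
        selected = torch.zeros_like(gates).scatter(1, top_i, 1.0)
        unsel = (1.0 - selected) / (1.0 - selected).sum(-1, keepdim=True)
        summary = unsel @ self.expert_emb.weight  # (tokens, d_emb)

        # Hypernetwork generates a per-token low-rank module W_up @ W_down,
        # applied as supplementary knowledge on top of the sparse output.
        w_down = self.hyper_down(summary).view(-1, self.rank, self.d_model)
        w_up = self.hyper_up(summary).view(-1, self.d_model, self.rank)
        extra = torch.bmm(w_up, torch.bmm(w_down, x.unsqueeze(-1))).squeeze(-1)
        return out + extra  # (tokens, d_model)


if __name__ == "__main__":
    layer = HyperMoELayer()
    tokens = torch.randn(10, 64)
    print(layer(tokens).shape)  # torch.Size([10, 64])
```

The key point the sketch tries to capture is that routing stays top-k sparse: only k experts run per token, while the hypernetwork injects a cheap summary of the remaining experts' knowledge through the generated low-rank module.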
Anthology ID:
2024.acl-long.571
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
10605–10618
URL:
https://aclanthology.org/2024.acl-long.571
DOI:
10.18653/v1/2024.acl-long.571
Cite (ACL):
Hao Zhao, Zihan Qiu, Huijia Wu, Zili Wang, Zhaofeng He, and Jie Fu. 2024. HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10605–10618, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts (Zhao et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.571.pdf