Thanh-Nam Doan


2023

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts
Truong Giang Do | Le Khiem | Quang Pham | TrungTin Nguyen | Thanh-Nam Doan | Binh Nguyen | Chenghao Liu | Savitha Ramasamy | Xiaoli Li | Steven Hoi
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models. Recent findings suggest that fixing the routers can achieve competitive performance by alleviating the collapsing problem, where all experts eventually learn similar representations. However, this strategy has two key limitations: (i) the policy derived from random routers might be sub-optimal, and (ii) it requires extensive resources during training and evaluation, leading to limited efficiency gains. This work introduces HyperRouter, which dynamically generates the router’s parameters through a fixed hypernetwork and trainable embeddings to achieve a balance between training the routers and freezing them to learn an improved routing policy. Extensive experiments across a wide range of tasks demonstrate the superior performance and efficiency gains of HyperRouter compared to existing routing methods. Our implementation is publicly available at https://github.com/giangdip2410/HyperRouter.
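To make the mechanism in the abstract concrete, here is a minimal, hypothetical sketch of a hypernetwork-generated router for a Sparse MoE layer: a small trainable embedding is passed through a frozen hypernetwork to produce the router's weight matrix on the fly. All names, dimensions, and the single-linear-layer hypernetwork are illustrative assumptions, not the authors' implementation (see the linked repository for the real code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperRouter(nn.Module):
    """Illustrative sketch: router weights generated by a frozen hypernetwork.

    Only the conditioning embedding is trainable; the hypernetwork that maps
    it to router parameters is fixed, so the routing policy can still adapt
    without directly training (or fully freezing) the router itself.
    """

    def __init__(self, d_model: int, num_experts: int, d_embed: int = 64):
        super().__init__()
        self.d_model = d_model
        self.num_experts = num_experts
        # Trainable embedding that conditions the hypernetwork (assumption:
        # one embedding per MoE layer).
        self.embed = nn.Parameter(torch.randn(d_embed))
        # Fixed hypernetwork producing the router's weight matrix.
        self.hypernet = nn.Linear(d_embed, d_model * num_experts)
        for p in self.hypernet.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor, top_k: int = 2):
        # Generate router parameters dynamically from the embedding.
        w = self.hypernet(self.embed).view(self.d_model, self.num_experts)
        logits = x @ w                          # (batch, tokens, num_experts)
        gates = F.softmax(logits, dim=-1)
        # Sparse routing: each token is sent to its top-k experts.
        topk_gates, topk_idx = gates.topk(top_k, dim=-1)
        return topk_gates, topk_idx

# Usage: route a batch of token representations to 2 of 8 experts.
router = HyperRouter(d_model=512, num_experts=8)
x = torch.randn(4, 16, 512)
gates, idx = router(x)
print(gates.shape, idx.shape)  # torch.Size([4, 16, 2]) torch.Size([4, 16, 2])
```

Under this reading, gradients flow through the frozen hypernetwork into the embedding, so the routing policy improves during training while the number of directly trained router parameters stays small.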

2021

Benchmarking Neural Topic Models: An Empirical Study
Thanh-Nam Doan | Tuan-Anh Hoang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021