Part-Of-Speech Sensitivity of Routers in Mixture of Experts Models

Elie Antoine, Frédéric Béchet, Philippe Langlais

Abstract
This study investigates the behavior of model-integrated routers in Mixture of Experts (MoE) models, focusing on how tokens are routed based on their linguistic features, specifically Part-of-Speech (POS) tags. The goal is to explore, across different MoE architectures, whether experts specialize in processing tokens with similar linguistic traits. By analyzing token trajectories across experts and layers, we aim to uncover how MoE models handle linguistic information. Findings from six popular MoE models reveal expert specialization for specific POS categories, and routing paths alone prove highly predictive of a token's POS tag, highlighting the value of routing paths in characterizing tokens.
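The abstract outlines a probing idea: record which experts a token is routed to at each MoE layer (its routing path) and measure how predictive that path is of the token's POS tag. The sketch below illustrates one way such a setup could look, assuming a Hugging Face Mixtral-style model that exposes router logits; the model name, top-1 routing, linear probe, and POS-to-subword alignment are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal probing sketch (not the authors' released code): collect per-layer
# top-1 expert choices ("routing paths") for every token in a Mixtral-style
# MoE model, then fit a linear probe predicting POS tags from those paths.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Assumption: any Hugging Face MoE LM that returns router logits works here.
MODEL = "mistralai/Mixtral-8x7B-v0.1"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def routing_paths(text: str) -> torch.Tensor:
    """Return a (num_tokens, num_moe_layers) matrix of top-1 expert indices."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        # With output_router_logits=True, Mixtral-style models return one
        # (num_tokens, num_experts) logits tensor per MoE layer.
        out = model(**inputs, output_router_logits=True)
    return torch.stack([l.argmax(dim=-1) for l in out.router_logits], dim=-1)

def fit_pos_probe(paths: torch.Tensor, pos_tags: list[str]):
    """Fit a linear probe mapping routing paths to POS tags.

    pos_tags must be aligned to subword tokens beforehand, e.g. by
    propagating each word's UPOS tag to all of its subwords (an assumed
    alignment scheme, not necessarily the paper's).
    """
    enc = OneHotEncoder(handle_unknown="ignore")
    X = enc.fit_transform(paths.cpu().numpy())  # one-hot expert id per layer
    clf = LogisticRegression(max_iter=1000).fit(X, pos_tags)
    return enc, clf
```

Probe accuracy on held-out tokens, compared against a majority-class baseline, would then quantify how much POS information the routing decisions carry.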
Anthology ID: 2025.coling-main.431
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 6467–6474
URL: https://aclanthology.org/2025.coling-main.431/
Cite (ACL): Elie Antoine, Frédéric Béchet, and Philippe Langlais. 2025. Part-Of-Speech Sensitivity of Routers in Mixture of Experts Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6467–6474, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Part-Of-Speech Sensitivity of Routers in Mixture of Experts Models (Antoine et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.431.pdf