Disentangling Mathematical Reasoning in LLMs: A Methodological Investigation of Internal Mechanisms

Tanja Baeumel, Josef van Genabith, Simon Ostermann


Abstract
Large language models (LLMs) have demonstrated impressive capabilities, yet the internal mechanisms by which they handle reasoning-intensive tasks remain underexplored. To advance the understanding of model-internal processing, we investigate how LLMs perform arithmetic operations by examining their internal computation during task execution. Using early decoding, we trace how next-token predictions are constructed across layers. Our experiments reveal that while the models recognize arithmetic tasks early, the correct result is generated only in the final layers. Notably, models proficient in arithmetic exhibit a clear division of labor between attention and MLP modules: attention propagates input information, and MLP modules aggregate it. This division is absent in less proficient models. Furthermore, successful models appear to process more challenging arithmetic tasks functionally, suggesting reasoning capabilities beyond factual recall.
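The "early decoding" the abstract refers to is typically done in the spirit of the logit lens: intermediate hidden states are projected through the model's unembedding matrix so that a token prediction can be read off at every layer, not just the last. A minimal NumPy sketch of that idea on a toy random model (the dimensions, weights, and helper names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_model, vocab = 4, 16, 50

# Toy stand-ins for a transformer's residual-stream state after each layer
# (index 0 is the embedding output, index n_layers the final state).
hidden_states = [rng.normal(size=d_model) for _ in range(n_layers + 1)]
W_U = rng.normal(size=(d_model, vocab))  # unembedding matrix

def layer_norm(x, eps=1e-5):
    # Final layer norm applied before unembedding, as in GPT-style models.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def early_decode(h):
    """Project an intermediate hidden state to vocabulary logits
    and return the currently most likely token id."""
    logits = layer_norm(h) @ W_U
    return int(np.argmax(logits))

# Read off the model's "current best guess" after every layer.
predictions = [early_decode(h) for h in hidden_states]
print(predictions)  # one token id per layer; the last is the model's actual output
```

With a real model one would obtain `hidden_states` from an actual forward pass (e.g. via Hugging Face `transformers` with `output_hidden_states=True`) and watch at which layer the correct arithmetic result first appears, which is the kind of layer-wise trace the abstract describes.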
Anthology ID:
2025.mathnlp-main.16
Volume:
Proceedings of The 3rd Workshop on Mathematical Natural Language Processing (MathNLP 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Marco Valentino, Deborah Ferreira, Mokanarangan Thayaparan, Leonardo Ranaldi, Andre Freitas
Venues:
MathNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
203–217
URL:
https://aclanthology.org/2025.mathnlp-main.16/
DOI:
10.18653/v1/2025.mathnlp-main.16
Bibkey:
Cite (ACL):
Tanja Baeumel, Josef van Genabith, and Simon Ostermann. 2025. Disentangling Mathematical Reasoning in LLMs: A Methodological Investigation of Internal Mechanisms. In Proceedings of The 3rd Workshop on Mathematical Natural Language Processing (MathNLP 2025), pages 203–217, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Disentangling Mathematical Reasoning in LLMs: A Methodological Investigation of Internal Mechanisms (Baeumel et al., MathNLP 2025)
PDF:
https://aclanthology.org/2025.mathnlp-main.16.pdf