Language Models Encode the Value of Numbers Linearly

Fangwei Zhu, Damai Dai, Zhifang Sui


Abstract
Large language models (LLMs) have exhibited impressive competence in various tasks, but their internal mechanisms for mathematical problems remain under-explored. In this paper, we study a fundamental question: how language models encode the value of numbers, a basic element in math. To study this question, we construct a synthetic dataset comprising addition problems and utilize linear probes to read out input numbers from the hidden states. Experimental results support the existence of encoded number values in LLMs across different layers, and these values can be extracted via linear probes. Further experiments show that LLMs store their calculation results in a similar manner, and we can intervene in the output via simple vector additions, demonstrating a causal connection between encoded numbers and language model outputs. Our research provides evidence that LLMs encode the value of numbers linearly, offering insights for better exploring, designing, and utilizing numeric information in LLMs.
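The linear-probing setup described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it uses synthetic stand-ins for LLM hidden states (a single hypothetical "value" direction plus noise) and fits an ordinary-least-squares probe to recover the number value, showing why a linear probe succeeds when values are encoded linearly.

```python
import numpy as np

# Hypothetical sketch: assume hidden states of shape (n_samples, d_model)
# and recover each input number's value with a linear probe (least squares).
rng = np.random.default_rng(0)
n_samples, d_model = 500, 64

# Synthetic stand-in for LLM hidden states: a fixed direction scaled by the
# number value, plus Gaussian noise. Real hidden states would come from the
# model's residual stream at a given layer.
values = rng.uniform(0, 100, size=n_samples)      # number values to probe for
direction = rng.normal(size=d_model)              # assumed "value" direction
hidden = np.outer(values, direction) + 0.1 * rng.normal(size=(n_samples, d_model))

# Fit the probe w so that hidden @ w ≈ values, then evaluate with R^2.
w, *_ = np.linalg.lstsq(hidden, values, rcond=None)
pred = hidden @ w
r2 = 1 - np.sum((pred - values) ** 2) / np.sum((values - values.mean()) ** 2)
print(f"probe R^2 = {r2:.3f}")
```

If the value is encoded linearly, the probe attains near-perfect R²; if the encoding were non-linear (e.g., logarithmic or digit-wise), a linear probe on raw values would fit markedly worse.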
Anthology ID:
2025.coling-main.47
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
693–709
URL:
https://aclanthology.org/2025.coling-main.47/
Cite (ACL):
Fangwei Zhu, Damai Dai, and Zhifang Sui. 2025. Language Models Encode the Value of Numbers Linearly. In Proceedings of the 31st International Conference on Computational Linguistics, pages 693–709, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Language Models Encode the Value of Numbers Linearly (Zhu et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.47.pdf