LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law

Toni J.B. Liu, Nicolas Boullé, Raphaël Sarfati, Christopher Earls


Abstract
We study LLMs’ ability to extrapolate the behavior of various dynamical systems, including stochastic, chaotic, continuous, and discrete systems, whose evolution is governed by principles of physical interest. Our results show that LLaMA-2, a language model trained on text, achieves accurate predictions of dynamical system time series without fine-tuning or prompt engineering. Moreover, the accuracy of the learned physical rules increases with the length of the input context window, revealing an in-context version of a neural scaling law. Along the way, we present a flexible and efficient algorithm for extracting probability density functions of multi-digit numbers directly from LLMs.
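The abstract mentions an algorithm for extracting probability density functions of multi-digit numbers directly from an LLM. The sketch below is only an illustration of the general idea, not the authors' implementation: it assumes a `next_digit_probs` interface (hypothetical, not from the paper) that returns next-digit probabilities for a text prefix, and chains them to build a discrete PDF over n-digit decimal strings.

```python
# Illustrative sketch (not the paper's exact method): refine a PDF over
# multi-digit numbers by chaining an LLM's next-digit probabilities.
from typing import Callable, Dict

# Assumed interface: given the prompt so far, return P(next digit) for "0"-"9".
DigitProbFn = Callable[[str], Dict[str, float]]

def number_pdf(prompt: str, next_digit_probs: DigitProbFn, n_digits: int = 2) -> Dict[str, float]:
    """Return a probability for every n-digit decimal string, e.g. "00".."99"."""
    pdf = {"": 1.0}  # probability mass of each partial digit string so far
    for _ in range(n_digits):
        refined: Dict[str, float] = {}
        for prefix, p_prefix in pdf.items():
            digit_probs = next_digit_probs(prompt + prefix)
            total = sum(digit_probs.values()) or 1.0
            for digit, p_digit in digit_probs.items():
                # Chain rule over digits, renormalized to the digit tokens only.
                refined[prefix + digit] = p_prefix * p_digit / total
        pdf = refined
    return pdf

# Toy stand-in for a language model: uniform next-digit distribution.
def uniform_digits(_prompt: str) -> Dict[str, float]:
    return {str(d): 0.1 for d in range(10)}

if __name__ == "__main__":
    pdf = number_pdf("0.", uniform_digits, n_digits=2)
    print(len(pdf), round(sum(pdf.values()), 6))  # 100 bins, total mass ~1
```

In practice, a real model's digit-token probabilities would replace `uniform_digits`, and the resolution of the recovered density grows with `n_digits`.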
Anthology ID:
2024.emnlp-main.842
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15097–15117
URL:
https://aclanthology.org/2024.emnlp-main.842/
DOI:
10.18653/v1/2024.emnlp-main.842
Cite (ACL):
Toni J.B. Liu, Nicolas Boullé, Raphaël Sarfati, and Christopher Earls. 2024. LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15097–15117, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law (Liu et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.842.pdf