LLM Circuit Analyses Are Consistent Across Training and Scale

Curt Tigges, Michael Hanna, Qinan Yu, Stella Biderman


Abstract
Most currently deployed large language models (LLMs) undergo continuous training or additional finetuning. By contrast, most research into LLMs’ internal mechanisms focuses on models at one snapshot in time (the end of pre-training), raising the question of whether their results generalize to real-world settings. Existing studies of mechanisms over time focus on encoder-only or toy models, which differ significantly from most deployed models. In this study, we track how model mechanisms, operationalized as circuits, emerge and evolve across 300 billion tokens of training in decoder-only LLMs ranging from 70 million to 2.8 billion parameters. We find that task abilities and the functional components that support them emerge consistently at similar token counts across scale. Moreover, although such components may be implemented by different attention heads over time, the overarching algorithm that they implement remains stable. Surprisingly, both these algorithms and the types of components involved therein tend to replicate across model scale. Finally, we find that circuit size correlates with model size and can fluctuate considerably over time even when the same algorithm is implemented. These results suggest that circuit analyses conducted on small models at the end of pre-training can provide insights that still apply after additional training and across model scale.
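
The checkpoint-over-training analysis described in the abstract can be reproduced at small scale with publicly released model suites. Below is a minimal sketch, assuming the Pythia suite (whose 70M–2.8B sizes and ~300B-token corpus match the models described) and an indirect-object-identification prompt, of tracking when a task ability emerges by measuring the logit difference at successive training checkpoints. The model name, checkpoint steps, prompt, and metric are illustrative assumptions, not the authors' exact pipeline.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-160m"        # one model size from the 70M-2.8B range
STEPS = [1000, 8000, 32000, 143000]     # checkpoints spanning ~300B training tokens

# IOI-style prompt: the model should prefer the indirect object ("Mary")
# over the repeated subject ("John") as the next token.
prompt = "When Mary and John went to the store, John gave a drink to"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
correct_id = tokenizer(" Mary").input_ids[0]     # first token of each name
incorrect_id = tokenizer(" John").input_ids[0]
inputs = tokenizer(prompt, return_tensors="pt")

for step in STEPS:
    # Pythia checkpoints are published as Hugging Face revisions "step<N>".
    model = AutoModelForCausalLM.from_pretrained(MODEL, revision=f"step{step}")
    model.eval()
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # next-token logits
    diff = (logits[correct_id] - logits[incorrect_id]).item()
    print(f"step {step:>6}: logit diff = {diff:+.2f}")  # >0 once the ability has emerged
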
Anthology ID: 2024.repl4nlp-1.22
Volume: Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Chen Zhao, Marius Mosbach, Pepa Atanasova, Seraphina Goldfarb-Tarrant, Peter Hase, Arian Hosseini, Maha Elbayad, Sandro Pezzelle, Maximilian Mozes
Venues: RepL4NLP | WS
Publisher: Association for Computational Linguistics
Pages: 290–303
URL: https://aclanthology.org/2024.repl4nlp-1.22
Cite (ACL): Curt Tigges, Michael Hanna, Qinan Yu, and Stella Biderman. 2024. LLM Circuit Analyses Are Consistent Across Training and Scale. In Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024), pages 290–303, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): LLM Circuit Analyses Are Consistent Across Training and Scale (Tigges et al., RepL4NLP-WS 2024)
PDF: https://aclanthology.org/2024.repl4nlp-1.22.pdf