Neuron-Level Knowledge Attribution in Large Language Models

Zeping Yu, Sophia Ananiadou


Abstract
Identifying important neurons for final predictions is essential for understanding the mechanisms of large language models. Due to computational constraints, current attribution techniques struggle to operate at the neuron level. In this paper, we propose a static method for pinpointing significant neurons. Compared to seven other methods, our approach demonstrates superior performance across three metrics. Additionally, since most static methods typically identify only “value neurons” that contribute directly to the final prediction, we propose a method for identifying the “query neurons” which activate these “value neurons”. Finally, we apply our methods to analyze six types of knowledge across both attention and feed-forward network (FFN) layers. Our method and analysis are helpful for understanding the mechanisms of knowledge storage and set the stage for future research in knowledge editing. The code is available at https://github.com/zepingyu0512/neuron-attribution.
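To make the idea of neuron-level attribution concrete, the following is a minimal, illustrative sketch: it ranks FFN neurons in GPT-2 by the direct logit contribution each neuron's residual-stream write makes to the predicted next token. The model choice (gpt2 via Hugging Face transformers), the example prompt, and the simplification of ignoring the final LayerNorm are all assumptions for illustration; this is not the authors' exact formulation, only the general flavor of scoring individual neurons for a final prediction.

```python
# Illustrative sketch only (assumptions: GPT-2 from Hugging Face transformers;
# scores = direct logit contribution of each FFN neuron to the predicted token,
# ignoring the final LayerNorm). Not the paper's exact method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")

prompt = "The capital of France is"
ids = tok(prompt, return_tensors="pt")

acts = {}  # layer index -> post-GELU FFN activations at the last position

def make_hook(idx):
    def hook(module, inputs, output):
        acts[idx] = output.detach()[0, -1]  # shape: (intermediate_size,)
    return hook

handles = [blk.mlp.act.register_forward_hook(make_hook(i))
           for i, blk in enumerate(model.transformer.h)]

with torch.no_grad():
    logits = model(**ids).logits
for h in handles:
    h.remove()

pred_id = logits[0, -1].argmax().item()
print("predicted next token:", repr(tok.decode(pred_id)))

# Each FFN neuron i writes act_i * W_out[i] into the residual stream; dotting
# that vector with the unembedding row of the predicted token gives the
# neuron's direct logit contribution.
unembed_row = model.lm_head.weight[pred_id]           # (d_model,)
per_layer = []
for i, blk in enumerate(model.transformer.h):
    w_out = blk.mlp.c_proj.weight                     # (intermediate_size, d_model)
    per_layer.append(acts[i] * (w_out @ unembed_row))
scores = torch.stack(per_layer)                       # (n_layers, intermediate_size)

top = torch.topk(scores.flatten(), 10)
width = scores.shape[1]
for val, flat in zip(top.values.tolist(), top.indices.tolist()):
    layer, neuron = divmod(flat, width)
    print(f"layer {layer:2d}, neuron {neuron:4d}: contribution {val:.3f}")
```

Because this is a static, single-forward-pass scoring, it is cheap compared to gradient- or ablation-based attribution; the trade-off is that it only captures neurons writing directly toward the predicted token ("value"-style neurons), not the neurons that activate them.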
Anthology ID: 2024.emnlp-main.191
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 3267–3280
URL: https://aclanthology.org/2024.emnlp-main.191
Cite (ACL): Zeping Yu and Sophia Ananiadou. 2024. Neuron-Level Knowledge Attribution in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3267–3280, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Neuron-Level Knowledge Attribution in Large Language Models (Yu & Ananiadou, EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.191.pdf