Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks

Ting-Yun Chang, Jesse Thomason, Robin Jia


Abstract
The concept of localization in LLMs is often mentioned in prior work; however, methods for localization have never been systematically and directly evaluated. We propose two complementary benchmarks that evaluate the ability of localization methods to pinpoint LLM components responsible for memorized data. In our INJ benchmark, we actively inject a piece of new information into a small subset of LLM weights, enabling us to directly evaluate whether localization methods can identify these “ground truth” weights. In our DEL benchmark, we evaluate localization by measuring how much dropping out identified neurons deletes a memorized pretrained sequence. Despite their different perspectives, our two benchmarks yield consistent rankings of five localization methods. Methods adapted from network pruning perform well on both benchmarks, and all evaluated methods show promising localization ability. On the other hand, even successful methods identify neurons that are not specific to a single memorized sequence.
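As a rough illustration of the DEL-style evaluation described in the abstract, the sketch below zeroes out a chosen set of hidden units in a toy model and measures how much of a target sequence the model still reproduces greedily. Everything here (the TinyLM class, the mask-based dropout, the hand-picked neuron indices) is an illustrative assumption, not the authors' implementation.

# Minimal sketch (not the paper's code) of a DEL-style check:
# zero out a chosen set of "neurons" (hidden units in an MLP layer)
# and measure how much a memorized target sequence is forgotten.
# All names here (TinyLM, memorization_accuracy, the neuron indices)
# are illustrative assumptions, not the authors' API.

import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """A toy next-token LM with one MLP block, standing in for an LLM."""
    def __init__(self, vocab_size=100, d_model=32, d_hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, neuron_mask=None):
        h = torch.relu(self.up(self.embed(tokens)))
        if neuron_mask is not None:   # drop out the identified neurons
            h = h * neuron_mask       # mask shape: (d_hidden,)
        return self.head(self.down(h))

def memorization_accuracy(model, seq, neuron_mask=None):
    """Fraction of next tokens of `seq` the model greedily reproduces."""
    with torch.no_grad():
        logits = model(seq[:-1], neuron_mask)
        preds = logits.argmax(dim=-1)
    return (preds == seq[1:]).float().mean().item()

model = TinyLM()
seq = torch.randint(0, 100, (20,))  # a stand-in "memorized" sequence

# A localization method would return a set of neuron indices;
# here the indices are picked by hand purely for illustration.
identified = [3, 17, 42]
mask = torch.ones(64)
mask[identified] = 0.0

before = memorization_accuracy(model, seq)
after = memorization_accuracy(model, seq, neuron_mask=mask)
print(f"memorization before: {before:.2f}, after dropout: {after:.2f}")

Under this framing, a better localization method is one whose identified neurons cause a larger drop in memorization of the target sequence while leaving other sequences intact; the paper's INJ benchmark instead checks recovery of known injected weights.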
Anthology ID:
2024.naacl-long.176
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3190–3211
URL:
https://aclanthology.org/2024.naacl-long.176
DOI:
10.18653/v1/2024.naacl-long.176
Cite (ACL):
Ting-Yun Chang, Jesse Thomason, and Robin Jia. 2024. Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3190–3211, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks (Chang et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.176.pdf