Large Language Models Relearn Removed Concepts

Michelle Lo, Fazl Barez, Shay Cohen


Abstract
Advances in model editing through neuron pruning hold promise for removing undesirable concepts from large language models. However, it remains unclear whether models have the capacity to reacquire pruned concepts after editing. To investigate this, we evaluate concept relearning in models by tracking concept saliency and similarity in pruned neurons during retraining for named entity recognition tasks. Our findings reveal that models can quickly regain performance post-pruning by relocating advanced concepts to earlier layers and reallocating pruned concepts to primed neurons with similar semantics. This suggests that models exhibit polysemantic capacities and can blend old and new concepts in individual neurons. While neuron pruning provides interpretability into model concepts, our results highlight the challenges of permanent concept removal for improved model safety. Monitoring concept reemergence and developing techniques to mitigate relearning of unsafe concepts will be important directions for more robust model editing. Overall, our work demonstrates the resilience and fluidity of concept representations in LLMs after concept removal.
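The abstract describes tracking per-neuron concept saliency, pruning the most concept-salient neurons, and then observing where the concept reappears during retraining. The sketch below is only a minimal illustration of that general workflow, not the authors' code: the activation matrix `acts`, the concept-token mask `is_concept`, and the mean-activation-difference saliency measure are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): estimate per-neuron
# concept saliency from activations and zero out the most salient neurons.
# `acts` is a hypothetical (num_tokens, num_neurons) activation matrix and
# `is_concept` marks tokens belonging to the target concept
# (e.g., a named-entity class).
import numpy as np


def concept_saliency(acts: np.ndarray, is_concept: np.ndarray) -> np.ndarray:
    """Saliency = mean activation on concept tokens minus mean elsewhere."""
    return acts[is_concept].mean(axis=0) - acts[~is_concept].mean(axis=0)


def prune_top_neurons(acts: np.ndarray, saliency: np.ndarray, k: int) -> np.ndarray:
    """Zero out the k neurons most associated with the concept."""
    pruned = acts.copy()
    pruned[:, np.argsort(saliency)[-k:]] = 0.0
    return pruned


# Toy usage: 1000 tokens, 64 neurons, ~10% concept tokens.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))
is_concept = rng.random(1000) < 0.1
sal = concept_saliency(acts, is_concept)
pruned_acts = prune_top_neurons(acts, sal, k=8)
# Recomputing concept_saliency on the retrained model's activations would
# indicate whether the pruned concept has reemerged in other neurons.
```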
Anthology ID:
2024.findings-acl.492
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8306–8323
URL:
https://aclanthology.org/2024.findings-acl.492
DOI:
10.18653/v1/2024.findings-acl.492
Cite (ACL):
Michelle Lo, Fazl Barez, and Shay Cohen. 2024. Large Language Models Relearn Removed Concepts. In Findings of the Association for Computational Linguistics: ACL 2024, pages 8306–8323, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Large Language Models Relearn Removed Concepts (Lo et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.492.pdf