Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with Direction-Magnitude Perspective

Van-Cuong Pham, Thien Huu Nguyen


Abstract
Activation editing, which directly edits the internal representations of large language models (LLMs) to alter their behavior and achieve desired properties, has emerged as a promising area of research. Existing works primarily treat LLMs' activations as points in space and modify them by adding steering vectors. We show that doing so breaks the magnitude consistency of the activation vectors in LLMs. To overcome this shortcoming, we propose a novel editing method that views activations in terms of their directions and magnitudes. Our method, named Householder Pseudo-Rotation (HPR), mimics a rotation transformation, thus preserving activation norms and yielding improved performance on various safety benchmarks.
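A minimal sketch of the norm-preservation argument behind the abstract (not the paper's exact HPR procedure, whose details are beyond this summary): adding a steering vector to an activation generally changes its norm, whereas a Householder reflection is orthogonal and leaves the norm unchanged. The vector names and scale factor below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for an LLM activation and a learned steering direction.
h = rng.normal(size=8)
steer = rng.normal(size=8)

# Additive steering (the common approach): generally changes the norm.
h_add = h + 0.5 * steer
print(np.linalg.norm(h), np.linalg.norm(h_add))  # norms differ in general

# Householder reflection about the hyperplane orthogonal to unit vector v:
#   H = I - 2 v v^T, an orthogonal matrix, so ||H h|| == ||h||.
v = steer / np.linalg.norm(steer)
h_reflected = h - 2.0 * np.dot(v, h) * v
print(np.linalg.norm(h), np.linalg.norm(h_reflected))  # norms match
```

Because reflections (and compositions of them, i.e., rotations) are orthogonal transformations, they can redirect an activation without disturbing its magnitude, which is the consistency property the abstract argues additive steering violates.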
Anthology ID:
2024.emnlp-main.761
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13737–13751
URL:
https://aclanthology.org/2024.emnlp-main.761
DOI:
10.18653/v1/2024.emnlp-main.761
Cite (ACL):
Van-Cuong Pham and Thien Huu Nguyen. 2024. Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with Direction-Magnitude Perspective. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13737–13751, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with Direction-Magnitude Perspective (Pham & Nguyen, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.761.pdf
Data:
2024.emnlp-main.761.data.zip