Extracting Latent Steering Vectors from Pretrained Language Models

Nishant Subramani, Nivedita Suresh, Matthew Peters


Abstract
Prior work on controllable text generation has focused on learning how to control language models through trainable decoding, smart-prompt design, or fine-tuning based on a desired objective. We hypothesize that the information needed to steer the model to generate a target sentence is already encoded within the model. Accordingly, we explore a different approach altogether: extracting latent vectors directly from pretrained language model decoders without fine-tuning. Experiments show that there exist steering vectors, which, when added to the hidden states of the language model, generate a target sentence nearly perfectly (> 99 BLEU) for English sentences from a variety of domains. We show that vector arithmetic can be used for unsupervised sentiment transfer on the Yelp sentiment benchmark, with performance comparable to models tailored to this task. We find that distances between steering vectors reflect sentence similarity when evaluated on a textual similarity benchmark (STS-B), outperforming pooled hidden states of models. Finally, we present an analysis of the intrinsic properties of the steering vectors. Taken together, our results suggest that frozen LMs can be effectively controlled through their latent steering space.
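The three operations the abstract describes — adding a steering vector to the decoder's hidden states, sentiment transfer by vector arithmetic, and measuring sentence similarity as distance between vectors — can be sketched in a few lines. This is a toy illustration with random stand-in vectors, not the authors' code: the variable names (`z_source_neg`, `z_neg_mean`, `z_pos_mean`), the hidden size, and the cosine-similarity choice are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # toy hidden-state size; in practice this is the LM's hidden size

# Hypothetical per-sentence steering vectors, standing in for vectors
# extracted from a frozen pretrained decoder.
z_source_neg = rng.normal(size=HIDDEN)  # vector for one negative-sentiment sentence
z_neg_mean = rng.normal(size=HIDDEN)    # mean vector over negative sentences
z_pos_mean = rng.normal(size=HIDDEN)    # mean vector over positive sentences

# (1) Steering: add the vector to every decoder hidden state; broadcasting
# applies the same offset at each timestep.
hidden_states = rng.normal(size=(5, HIDDEN))  # (seq_len, hidden)
steered = hidden_states + z_source_neg

# (2) Unsupervised sentiment transfer via vector arithmetic:
# move the sentence vector from the negative region toward the positive one.
z_transfer = z_source_neg - z_neg_mean + z_pos_mean

# (3) Sentence similarity as closeness of steering vectors (cosine here).
def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The arithmetic in (2) mirrors the familiar word-analogy pattern; the paper reports that this simple offset suffices for sentiment transfer on Yelp without any task-specific training.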
Anthology ID:
2022.findings-acl.48
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
566–581
URL:
https://aclanthology.org/2022.findings-acl.48
DOI:
10.18653/v1/2022.findings-acl.48
Cite (ACL):
Nishant Subramani, Nivedita Suresh, and Matthew Peters. 2022. Extracting Latent Steering Vectors from Pretrained Language Models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 566–581, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Extracting Latent Steering Vectors from Pretrained Language Models (Subramani et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.48.pdf
Video:
https://aclanthology.org/2022.findings-acl.48.mp4
Code:
nishantsubramani/steering_vectors
Data:
StylePTB